* [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit()
@ 2020-02-18 16:21 Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug Chris Wilson
                   ` (13 more replies)
  0 siblings, 14 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

We only want to wait until the request has been submitted at least once;
that is, it is either in flight now, or has been at some point.
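
A rough sketch of the predicate we are after, using the existing request
helpers (the helper name is only for illustration, it is not part of the
patch itself):

static bool seen_by_hw(const struct i915_request *rq)
{
	/* either currently in the ELSP, or the HW has already begun it */
	return i915_request_is_active(rq) || i915_request_started(rq);
}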

References: fcf7df7aae24 ("drm/i915/selftests: Check for the error interrupt before we wait!")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 40c53cc1c7c0..7b303d5fd5b8 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -83,7 +83,7 @@ static int wait_for_submit(struct intel_engine_cs *engine,
 			return 0;
 		}
 
-		if (i915_request_completed(rq)) /* that was quick! */
+		if (i915_request_started(rq)) /* that was quick! */
 			return 0;
 	} while (time_before(jiffies, timeout));
 
-- 
2.25.0


* [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 20:45   ` Matthew Auld
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 03/12] drm/i915/execlists: Check the sentinel is alone in the ELSP Chris Wilson
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

As the total context runtime is already known to us, show it when
dumping the engine state for debug.
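
With this change the per-port line in the engine dump gains a runtime
field, e.g. (illustrative values only):

  ring:{start:00008000, hwsp:0000f000, seqno:00000002, runtime:14ms},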

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index f6f5e1ec48fc..e46e55354e95 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1376,7 +1376,7 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 		execlists_active_lock_bh(execlists);
 		rcu_read_lock();
 		for (port = execlists->active; (rq = *port); port++) {
-			char hdr[80];
+			char hdr[160];
 			int len;
 
 			len = snprintf(hdr, sizeof(hdr),
@@ -1386,10 +1386,12 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 				struct intel_timeline *tl = get_timeline(rq);
 
 				len += snprintf(hdr + len, sizeof(hdr) - len,
-						"ring:{start:%08x, hwsp:%08x, seqno:%08x}, ",
+						"ring:{start:%08x, hwsp:%08x, seqno:%08x, runtime:%llums}, ",
 						i915_ggtt_offset(rq->ring->vma),
 						tl ? tl->hwsp_offset : 0,
-						hwsp_seqno(rq));
+						hwsp_seqno(rq),
+						DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),
+								      1000 * 1000));
 
 				if (tl)
 					intel_timeline_put(tl);
-- 
2.25.0


* [Intel-gfx] [PATCH 03/12] drm/i915/execlists: Check the sentinel is alone in the ELSP
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 04/12] drm/i915/gt: Fix up missing error propagation for heartbeat pulses Chris Wilson
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

We only use sentinel requests for "preempt-to-idle" passes, so assert
that a sentinel is the only request in a new submission.
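
For example, pending = { sentinel } remains valid, whereas
pending = { sentinel, rq } or pending = { rq, sentinel } will now be
caught by the new checks in assert_pending_valid().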

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index ba31cbe8c68e..f2c49dc1e6b4 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1448,6 +1448,7 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
 {
 	struct i915_request * const *port, *rq;
 	struct intel_context *ce = NULL;
+	bool sentinel = false;
 
 	trace_ports(execlists, msg, execlists->pending);
 
@@ -1481,6 +1482,26 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
 		}
 		ce = rq->context;
 
+		/*
+		 * Sentinels are supposed to be lonely so they flush the
+		 * current execution off the HW. Check that they are the
+		 * only request in the pending submission.
+		 */
+		if (sentinel) {
+			GEM_TRACE_ERR("context:%llx after sentinel in pending[%zd]\n",
+				      ce->timeline->fence_context,
+				      port - execlists->pending);
+			return false;
+		}
+
+		sentinel = i915_request_has_sentinel(rq);
+		if (sentinel && port != execlists->pending) {
+			GEM_TRACE_ERR("sentinel context:%llx not in prime position[%zd]\n",
+				      ce->timeline->fence_context,
+				      port - execlists->pending);
+			return false;
+		}
+
 		/* Hold tightly onto the lock to prevent concurrent retires! */
 		if (!spin_trylock_irqsave(&rq->lock, flags))
 			continue;
-- 
2.25.0


* [Intel-gfx] [PATCH 04/12] drm/i915/gt: Fix up missing error propagation for heartbeat pulses
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 03/12] drm/i915/execlists: Check the sentinel is alone in the ELSP Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 20:18   ` Matthew Auld
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 05/12] drm/i915/gt: Prevent allocation on a banned context Chris Wilson
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

We missed setting err along the interruptible error path in
intel_engine_pulse(); an interrupted wait would return 0 as if the
pulse had been sent.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 6c6fd185457c..dd825718e4e5 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -180,7 +180,7 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 	struct i915_sched_attr attr = { .priority = I915_PRIORITY_BARRIER };
 	struct intel_context *ce = engine->kernel_context;
 	struct i915_request *rq;
-	int err = 0;
+	int err;
 
 	if (!intel_engine_has_preemption(engine))
 		return -ENODEV;
@@ -188,8 +188,10 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 	if (!intel_engine_pm_get_if_awake(engine))
 		return 0;
 
-	if (mutex_lock_interruptible(&ce->timeline->mutex))
+	if (mutex_lock_interruptible(&ce->timeline->mutex)) {
+		err = -EINTR;
 		goto out_rpm;
+	}
 
 	intel_context_enter(ce);
 	rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
@@ -204,6 +206,8 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 
 	__i915_request_commit(rq);
 	__i915_request_queue(rq, &attr);
+	GEM_BUG_ON(rq->sched.attr.priority < I915_PRIORITY_BARRIER);
+	err = 0;
 
 out_unlock:
 	mutex_unlock(&ce->timeline->mutex);
-- 
2.25.0


* [Intel-gfx] [PATCH 05/12] drm/i915/gt: Prevent allocation on a banned context
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (2 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 04/12] drm/i915/gt: Fix up missing error propagation for heartbeat pulses Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 20:22   ` Matthew Auld
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 06/12] drm/i915/gem: Check that the context wasn't closed during setup Chris Wilson
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

If a context is banned even before we submit our first request to it,
report the failure before we attempt to allocate any resources for the
context.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_context.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 8bb444cda14f..01474d3a558b 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -51,6 +51,11 @@ int intel_context_alloc_state(struct intel_context *ce)
 		return -EINTR;
 
 	if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags)) {
+		if (intel_context_is_banned(ce)) {
+			err = -EIO;
+			goto unlock;
+		}
+
 		err = ce->ops->alloc(ce);
 		if (unlikely(err))
 			goto unlock;
-- 
2.25.0


* [Intel-gfx] [PATCH 06/12] drm/i915/gem: Check that the context wasn't closed during setup
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (3 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 05/12] drm/i915/gt: Prevent allocation on a banned context Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 20:38   ` Matthew Auld
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 07/12] drm/i915/gem: Consolidate ctx->engines[] release Chris Wilson
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

As setup takes a long time, the user may close the context during the
construction of the execbuf. In order to make sure we correctly track
all outstanding work with non-persistent contexts, we need to serialise
the submission with the context closure and mop up any leaks.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 87fa5f42c39a..b2311fe93ad8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2729,6 +2729,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 		goto err_batch_unpin;
 	}
 
+	/* Check that the context wasn't destroyed before setup */
+	if (!rcu_access_pointer(eb.context->gem_context)) {
+		err = -ENOENT;
+		goto err_request;
+	}
+
 	if (in_fence) {
 		err = i915_request_await_dma_fence(eb.request, in_fence);
 		if (err < 0)
-- 
2.25.0


* [Intel-gfx] [PATCH 07/12] drm/i915/gem: Consolidate ctx->engines[] release
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (4 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 06/12] drm/i915/gem: Check that the context wasn't closed during setup Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 08/12] drm/i915/selftest: Analyse timestamp behaviour across context switches Chris Wilson
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

Use the same engine_idle_release() routine for cleaning all old
ctx->engines[] state, closing any potential races with concurrent execbuf
submission.

Closes: https://gitlab.freedesktop.org/drm/intel/issues/1241
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
Reorder set-closed/engine_idle_release to avoid premature killing
Take a reference to prevent racing context free with engine cleanup
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 190 ++++++++++----------
 1 file changed, 100 insertions(+), 90 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 3e82739bdbc0..99206ec45876 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -243,7 +243,6 @@ static void __free_engines(struct i915_gem_engines *e, unsigned int count)
 		if (!e->engines[count])
 			continue;
 
-		RCU_INIT_POINTER(e->engines[count]->gem_context, NULL);
 		intel_context_put(e->engines[count]);
 	}
 	kfree(e);
@@ -270,8 +269,6 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
 	if (!e)
 		return ERR_PTR(-ENOMEM);
 
-	e->ctx = ctx;
-
 	for_each_engine(engine, gt, id) {
 		struct intel_context *ce;
 
@@ -305,7 +302,6 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 	list_del(&ctx->link);
 	spin_unlock(&ctx->i915->gem.contexts.lock);
 
-	free_engines(rcu_access_pointer(ctx->engines));
 	mutex_destroy(&ctx->engines_mutex);
 
 	if (ctx->timeline)
@@ -492,30 +488,110 @@ static void kill_engines(struct i915_gem_engines *engines)
 static void kill_stale_engines(struct i915_gem_context *ctx)
 {
 	struct i915_gem_engines *pos, *next;
-	unsigned long flags;
 
-	spin_lock_irqsave(&ctx->stale.lock, flags);
+	spin_lock_irq(&ctx->stale.lock);
+	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
 	list_for_each_entry_safe(pos, next, &ctx->stale.engines, link) {
-		if (!i915_sw_fence_await(&pos->fence))
+		if (!i915_sw_fence_await(&pos->fence)) {
+			list_del_init(&pos->link);
 			continue;
+		}
 
-		spin_unlock_irqrestore(&ctx->stale.lock, flags);
+		spin_unlock_irq(&ctx->stale.lock);
 
 		kill_engines(pos);
 
-		spin_lock_irqsave(&ctx->stale.lock, flags);
+		spin_lock_irq(&ctx->stale.lock);
+		GEM_BUG_ON(i915_sw_fence_signaled(&pos->fence));
 		list_safe_reset_next(pos, next, link);
 		list_del_init(&pos->link); /* decouple from FENCE_COMPLETE */
 
 		i915_sw_fence_complete(&pos->fence);
 	}
-	spin_unlock_irqrestore(&ctx->stale.lock, flags);
+	spin_unlock_irq(&ctx->stale.lock);
 }
 
 static void kill_context(struct i915_gem_context *ctx)
 {
 	kill_stale_engines(ctx);
-	kill_engines(__context_engines_static(ctx));
+}
+
+static int engines_notify(struct i915_sw_fence *fence,
+			  enum i915_sw_fence_notify state)
+{
+	struct i915_gem_engines *engines =
+		container_of(fence, typeof(*engines), fence);
+
+	switch (state) {
+	case FENCE_COMPLETE:
+		if (!list_empty(&engines->link)) {
+			struct i915_gem_context *ctx = engines->ctx;
+			unsigned long flags;
+
+			spin_lock_irqsave(&ctx->stale.lock, flags);
+			list_del(&engines->link);
+			spin_unlock_irqrestore(&ctx->stale.lock, flags);
+		}
+		break;
+
+	case FENCE_FREE:
+		i915_gem_context_put(engines->ctx);
+		init_rcu_head(&engines->rcu);
+		call_rcu(&engines->rcu, free_engines_rcu);
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
+
+static void engines_idle_release(struct i915_gem_context *ctx,
+				 struct i915_gem_engines *engines)
+{
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+
+	i915_sw_fence_init(&engines->fence, engines_notify);
+	INIT_LIST_HEAD(&engines->link);
+
+	engines->ctx = i915_gem_context_get(ctx);
+
+	for_each_gem_engine(ce, engines, it) {
+		int err = 0;
+
+		RCU_INIT_POINTER(ce->gem_context, NULL);
+
+		if (!ce->timeline) { /* XXX serialisation with execbuf? */
+			intel_context_set_banned(ce);
+			continue;
+		}
+
+		mutex_lock(&ce->timeline->mutex);
+		if (!list_empty(&ce->timeline->requests)) {
+			struct i915_request *rq;
+
+			rq = list_last_entry(&ce->timeline->requests,
+					     typeof(*rq),
+					     link);
+
+			err = i915_sw_fence_await_dma_fence(&engines->fence,
+							    &rq->fence, 0,
+							    GFP_KERNEL);
+		}
+		mutex_unlock(&ce->timeline->mutex);
+		if (err < 0)
+			goto kill;
+	}
+
+	spin_lock_irq(&engines->ctx->stale.lock);
+	if (!i915_gem_context_is_closed(engines->ctx))
+		list_add_tail(&engines->link, &engines->ctx->stale.engines);
+	spin_unlock_irq(&engines->ctx->stale.lock);
+
+kill:
+	if (list_empty(&engines->link)) /* raced, already closed */
+		kill_engines(engines);
+
+	i915_sw_fence_commit(&engines->fence);
 }
 
 static void set_closed_name(struct i915_gem_context *ctx)
@@ -539,11 +615,16 @@ static void context_close(struct i915_gem_context *ctx)
 {
 	struct i915_address_space *vm;
 
+	/* Flush any concurrent set_engines() */
+	mutex_lock(&ctx->engines_mutex);
+	engines_idle_release(ctx, rcu_replace_pointer(ctx->engines, NULL, 1));
 	i915_gem_context_set_closed(ctx);
-	set_closed_name(ctx);
+	mutex_unlock(&ctx->engines_mutex);
 
 	mutex_lock(&ctx->mutex);
 
+	set_closed_name(ctx);
+
 	vm = i915_gem_context_vm(ctx);
 	if (vm)
 		i915_vm_close(vm);
@@ -1562,77 +1643,6 @@ static const i915_user_extension_fn set_engines__extensions[] = {
 	[I915_CONTEXT_ENGINES_EXT_BOND] = set_engines__bond,
 };
 
-static int engines_notify(struct i915_sw_fence *fence,
-			  enum i915_sw_fence_notify state)
-{
-	struct i915_gem_engines *engines =
-		container_of(fence, typeof(*engines), fence);
-
-	switch (state) {
-	case FENCE_COMPLETE:
-		if (!list_empty(&engines->link)) {
-			struct i915_gem_context *ctx = engines->ctx;
-			unsigned long flags;
-
-			spin_lock_irqsave(&ctx->stale.lock, flags);
-			list_del(&engines->link);
-			spin_unlock_irqrestore(&ctx->stale.lock, flags);
-		}
-		break;
-
-	case FENCE_FREE:
-		init_rcu_head(&engines->rcu);
-		call_rcu(&engines->rcu, free_engines_rcu);
-		break;
-	}
-
-	return NOTIFY_DONE;
-}
-
-static void engines_idle_release(struct i915_gem_engines *engines)
-{
-	struct i915_gem_engines_iter it;
-	struct intel_context *ce;
-	unsigned long flags;
-
-	GEM_BUG_ON(!engines);
-	i915_sw_fence_init(&engines->fence, engines_notify);
-
-	INIT_LIST_HEAD(&engines->link);
-	spin_lock_irqsave(&engines->ctx->stale.lock, flags);
-	if (!i915_gem_context_is_closed(engines->ctx))
-		list_add(&engines->link, &engines->ctx->stale.engines);
-	spin_unlock_irqrestore(&engines->ctx->stale.lock, flags);
-	if (list_empty(&engines->link)) /* raced, already closed */
-		goto kill;
-
-	for_each_gem_engine(ce, engines, it) {
-		struct dma_fence *fence;
-		int err;
-
-		if (!ce->timeline)
-			continue;
-
-		fence = i915_active_fence_get(&ce->timeline->last_request);
-		if (!fence)
-			continue;
-
-		err = i915_sw_fence_await_dma_fence(&engines->fence,
-						    fence, 0,
-						    GFP_KERNEL);
-
-		dma_fence_put(fence);
-		if (err < 0)
-			goto kill;
-	}
-	goto out;
-
-kill:
-	kill_engines(engines);
-out:
-	i915_sw_fence_commit(&engines->fence);
-}
-
 static int
 set_engines(struct i915_gem_context *ctx,
 	    const struct drm_i915_gem_context_param *args)
@@ -1675,8 +1685,6 @@ set_engines(struct i915_gem_context *ctx,
 	if (!set.engines)
 		return -ENOMEM;
 
-	set.engines->ctx = ctx;
-
 	for (n = 0; n < num_engines; n++) {
 		struct i915_engine_class_instance ci;
 		struct intel_engine_cs *engine;
@@ -1729,6 +1737,11 @@ set_engines(struct i915_gem_context *ctx,
 
 replace:
 	mutex_lock(&ctx->engines_mutex);
+	if (i915_gem_context_is_closed(ctx)) {
+		mutex_unlock(&ctx->engines_mutex);
+		free_engines(set.engines);
+		return -ENOENT;
+	}
 	if (args->size)
 		i915_gem_context_set_user_engines(ctx);
 	else
@@ -1737,7 +1750,7 @@ set_engines(struct i915_gem_context *ctx,
 	mutex_unlock(&ctx->engines_mutex);
 
 	/* Keep track of old engine sets for kill_context() */
-	engines_idle_release(set.engines);
+	engines_idle_release(ctx, set.engines);
 
 	return 0;
 }
@@ -1995,8 +2008,6 @@ static int clone_engines(struct i915_gem_context *dst,
 	if (!clone)
 		goto err_unlock;
 
-	clone->ctx = dst;
-
 	for (n = 0; n < e->num_engines; n++) {
 		struct intel_engine_cs *engine;
 
@@ -2033,8 +2044,7 @@ static int clone_engines(struct i915_gem_context *dst,
 	i915_gem_context_unlock_engines(src);
 
 	/* Serialised by constructor */
-	free_engines(__context_engines_static(dst));
-	RCU_INIT_POINTER(dst->engines, clone);
+	engines_idle_release(dst, rcu_replace_pointer(dst->engines, clone, 1));
 	if (user_engines)
 		i915_gem_context_set_user_engines(dst);
 	else
-- 
2.25.0


* [Intel-gfx] [PATCH 08/12] drm/i915/selftest: Analyse timestamp behaviour across context switches
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (5 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 07/12] drm/i915/gem: Consolidate ctx->engines[] release Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 09/12] drm/i915: Read rawclk_freq earlier Chris Wilson
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

Check that the CTX_TIMESTAMP is monotonic across context save/restore
and upon preemption.
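
The comparison used by the test is wrap-safe: timestamp_advanced() treats
end as later than start when (s32)(end - start) > 0, so for example
start=0xfffffff0, end=0x00000010 gives (s32)0x20 = 32 > 0 and still counts
as forward progress across the 32-bit wrap.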

References: https://gitlab.freedesktop.org/drm/intel/issues/1233
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 230 +++++++++++++++++++++++++
 1 file changed, 230 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 7b303d5fd5b8..108dfba6deb3 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -4455,6 +4455,235 @@ static int live_gpr_clear(void *arg)
 	return err;
 }
 
+static struct i915_request *
+create_timestamp(struct intel_context *ce, void *slot, int idx)
+{
+	const u32 offset =
+		i915_ggtt_offset(ce->engine->status_page.vma) +
+		offset_in_page(slot);
+	struct i915_request *rq;
+	u32 *cs;
+	int err;
+
+	rq = intel_context_create_request(ce);
+	if (IS_ERR(rq))
+		return rq;
+
+	cs = intel_ring_begin(rq, 10);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		goto err;
+	}
+
+	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+	*cs++ = MI_NOOP;
+
+	*cs++ = MI_SEMAPHORE_WAIT |
+		MI_SEMAPHORE_GLOBAL_GTT |
+		MI_SEMAPHORE_POLL |
+		MI_SEMAPHORE_SAD_NEQ_SDD;
+	*cs++ = 0;
+	*cs++ = offset;
+	*cs++ = 0;
+
+	*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+	*cs++ = i915_mmio_reg_offset(RING_CTX_TIMESTAMP(rq->engine->mmio_base));
+	*cs++ = offset + idx * sizeof(u32);
+	*cs++ = 0;
+
+	intel_ring_advance(rq, cs);
+
+	rq->sched.attr.priority = I915_PRIORITY_MASK;
+	err = 0;
+err:
+	i915_request_get(rq);
+	i915_request_add(rq);
+	if (err) {
+		i915_request_put(rq);
+		return ERR_PTR(err);
+	}
+
+	return rq;
+}
+
+static int
+emit_timestamp_release(struct intel_context *ce, void *slot)
+{
+	const u32 offset =
+		i915_ggtt_offset(ce->engine->status_page.vma) +
+		offset_in_page(slot);
+	struct i915_request *rq;
+	u32 *cs;
+
+	rq = intel_context_create_request(ce);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	cs = intel_ring_begin(rq, 4);
+	if (IS_ERR(cs)) {
+		i915_request_add(rq);
+		return PTR_ERR(cs);
+	}
+
+	*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+	*cs++ = offset;
+	*cs++ = 0;
+	*cs++ = 1;
+
+	intel_ring_advance(rq, cs);
+
+	rq->sched.attr.priority = I915_PRIORITY_BARRIER;
+	i915_request_add(rq);
+	return 0;
+}
+
+struct lrc_timestamp {
+	struct intel_engine_cs *engine;
+	struct intel_context *ce[2];
+	u32 poison;
+};
+
+static bool timestamp_advanced(u32 start, u32 end)
+{
+	return (s32)(end - start) > 0;
+}
+
+static int __lrc_timestamp(const struct lrc_timestamp *arg, bool preempt)
+{
+	u32 *slot = memset32(arg->engine->status_page.addr + 1000, 0, 4);
+	struct i915_request *rq;
+	u32 timestamp;
+	int err = 0;
+
+	arg->ce[0]->lrc_reg_state[CTX_TIMESTAMP] = arg->poison;
+	rq = create_timestamp(arg->ce[0], slot, 1);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	err = wait_for_submit(rq->engine, rq, HZ / 2);
+	if (err)
+		goto err;
+
+	if (preempt) {
+		arg->ce[1]->lrc_reg_state[CTX_TIMESTAMP] = 0xdeadbeef;
+		err = emit_timestamp_release(arg->ce[1], slot);
+		if (err)
+			goto err;
+	} else {
+		slot[0] = 1;
+		wmb();
+	}
+
+	if (i915_request_wait(rq, 0, HZ / 2) < 0) {
+		err = -ETIME;
+		goto err;
+	}
+
+	/* and wait for switch to kernel */
+	if (igt_flush_test(arg->engine->i915)) {
+		err = -EIO;
+		goto err;
+	}
+
+	rmb();
+
+	if (!timestamp_advanced(arg->poison, slot[1])) {
+		pr_err("%s(%s): invalid timestamp on restore, context:%x, request:%x\n",
+		       arg->engine->name, preempt ? "preempt" : "simple",
+		       arg->poison, slot[1]);
+		err = -EINVAL;
+	}
+
+	timestamp = READ_ONCE(arg->ce[0]->lrc_reg_state[CTX_TIMESTAMP]);
+	if (!timestamp_advanced(slot[1], timestamp)) {
+		pr_err("%s(%s): invalid timestamp on save, request:%x, context:%x\n",
+		       arg->engine->name, preempt ? "preempt" : "simple",
+		       slot[1], timestamp);
+		err = -EINVAL;
+	}
+
+err:
+	memset32(slot, -1, 4);
+	i915_request_put(rq);
+	return err;
+}
+
+static int live_lrc_timestamp(void *arg)
+{
+	struct intel_gt *gt = arg;
+	enum intel_engine_id id;
+	struct lrc_timestamp data;
+	const u32 poison[] = {
+		0,
+		S32_MAX,
+		(u32)S32_MAX + 1,
+		U32_MAX,
+	};
+
+	/*
+	 * We want to verify that the timestamp is saved and restored across
+	 * context switches and is monotonic.
+	 *
+	 * So we do this with a little bit of LRC poisoning to check various
+	 * boundary conditions, and see what happens if we preempt the context
+	 * with a second request (carrying more poison into the timestamp).
+	 */
+
+	for_each_engine(data.engine, gt, id) {
+		unsigned long heartbeat;
+		int i, err = 0;
+
+		engine_heartbeat_disable(data.engine, &heartbeat);
+
+		for (i = 0; i < ARRAY_SIZE(data.ce); i++) {
+			struct intel_context *tmp;
+
+			tmp = intel_context_create(data.engine);
+			if (IS_ERR(tmp)) {
+				err = PTR_ERR(tmp);
+				goto err;
+			}
+
+			err = intel_context_pin(tmp);
+			if (err) {
+				intel_context_put(tmp);
+				goto err;
+			}
+
+			data.ce[i] = tmp;
+		}
+
+		for (i = 0; i < ARRAY_SIZE(poison); i++) {
+			data.poison = poison[i];
+
+			err = __lrc_timestamp(&data, false);
+			if (err)
+				break;
+
+			err = __lrc_timestamp(&data, true);
+			if (err)
+				break;
+		}
+
+err:
+		engine_heartbeat_enable(data.engine, heartbeat);
+		for (i = 0; i < ARRAY_SIZE(data.ce); i++) {
+			if (!data.ce[i])
+				break;
+
+			intel_context_unpin(data.ce[i]);
+			intel_context_put(data.ce[i]);
+		}
+
+		if (igt_flush_test(gt->i915))
+			err = -EIO;
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 static int __live_pphwsp_runtime(struct intel_engine_cs *engine)
 {
 	struct intel_context *ce;
@@ -4552,6 +4781,7 @@ int intel_lrc_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_lrc_fixed),
 		SUBTEST(live_lrc_state),
 		SUBTEST(live_gpr_clear),
+		SUBTEST(live_lrc_timestamp),
 		SUBTEST(live_pphwsp_runtime),
 	};
 
-- 
2.25.0


* [Intel-gfx] [PATCH 09/12] drm/i915: Read rawclk_freq earlier
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (6 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 08/12] drm/i915/selftest: Analyse timestamp behaviour across context switches Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability Chris Wilson
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

Read the rawclk_freq during runtime info probing, prior to its first use
in computing the CS timestamp frequency. Then store it in the runtime
info, and include it in the debug printouts.

Closes: https://gitlab.freedesktop.org/drm/intel/issues/834
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/display/intel_cdclk.c    | 19 ++++++++++---------
 drivers/gpu/drm/i915/display/intel_cdclk.h    |  2 +-
 .../drm/i915/display/intel_display_power.c    |  9 +++------
 drivers/gpu/drm/i915/display/intel_dp.c       | 10 ++++++----
 drivers/gpu/drm/i915/display/intel_panel.c    | 12 +++++++-----
 drivers/gpu/drm/i915/i915_drv.h               |  1 -
 drivers/gpu/drm/i915/intel_device_info.c      |  7 ++++++-
 drivers/gpu/drm/i915/intel_device_info.h      |  2 ++
 8 files changed, 35 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c b/drivers/gpu/drm/i915/display/intel_cdclk.c
index 423c91b164b4..146c2b9bb7fb 100644
--- a/drivers/gpu/drm/i915/display/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
@@ -2693,28 +2693,29 @@ static int g4x_hrawclk(struct drm_i915_private *dev_priv)
 }
 
 /**
- * intel_update_rawclk - Determine the current RAWCLK frequency
+ * intel_read_rawclk - Determine the current RAWCLK frequency
  * @dev_priv: i915 device
  *
  * Determine the current RAWCLK frequency. RAWCLK is a fixed
  * frequency clock so this needs to done only once.
  */
-void intel_update_rawclk(struct drm_i915_private *dev_priv)
+u32 intel_read_rawclk(struct drm_i915_private *dev_priv)
 {
+	u32 freq;
+
 	if (INTEL_PCH_TYPE(dev_priv) >= PCH_CNP)
-		dev_priv->rawclk_freq = cnp_rawclk(dev_priv);
+		freq = cnp_rawclk(dev_priv);
 	else if (HAS_PCH_SPLIT(dev_priv))
-		dev_priv->rawclk_freq = pch_rawclk(dev_priv);
+		freq = pch_rawclk(dev_priv);
 	else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
-		dev_priv->rawclk_freq = vlv_hrawclk(dev_priv);
+		freq = vlv_hrawclk(dev_priv);
 	else if (IS_G4X(dev_priv) || IS_PINEVIEW(dev_priv))
-		dev_priv->rawclk_freq = g4x_hrawclk(dev_priv);
+		freq = g4x_hrawclk(dev_priv);
 	else
 		/* no rawclk on other platforms, or no need to know it */
-		return;
+		return 0;
 
-	drm_dbg(&dev_priv->drm, "rawclk rate: %d kHz\n",
-		dev_priv->rawclk_freq);
+	return freq;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.h b/drivers/gpu/drm/i915/display/intel_cdclk.h
index df21dbdcc575..5731806e4cee 100644
--- a/drivers/gpu/drm/i915/display/intel_cdclk.h
+++ b/drivers/gpu/drm/i915/display/intel_cdclk.h
@@ -61,7 +61,7 @@ void intel_cdclk_uninit_hw(struct drm_i915_private *i915);
 void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv);
 void intel_update_max_cdclk(struct drm_i915_private *dev_priv);
 void intel_update_cdclk(struct drm_i915_private *dev_priv);
-void intel_update_rawclk(struct drm_i915_private *dev_priv);
+u32 intel_read_rawclk(struct drm_i915_private *dev_priv);
 bool intel_cdclk_needs_modeset(const struct intel_cdclk_config *a,
 			       const struct intel_cdclk_config *b);
 void intel_set_cdclk_pre_plane_update(struct intel_atomic_state *state);
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index b9a9cbad8a03..722399fc2ace 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -1260,10 +1260,10 @@ static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv)
 		       MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
 	intel_de_write(dev_priv, CBR1_VLV, 0);
 
-	WARN_ON(dev_priv->rawclk_freq == 0);
-
+	WARN_ON(RUNTIME_INFO(dev_priv)->rawclk_freq == 0);
 	intel_de_write(dev_priv, RAWCLK_FREQ_VLV,
-		       DIV_ROUND_CLOSEST(dev_priv->rawclk_freq, 1000));
+		       DIV_ROUND_CLOSEST(RUNTIME_INFO(dev_priv)->rawclk_freq,
+					 1000));
 }
 
 static void vlv_display_power_well_init(struct drm_i915_private *dev_priv)
@@ -5236,9 +5236,6 @@ void intel_power_domains_init_hw(struct drm_i915_private *i915, bool resume)
 
 	power_domains->initializing = true;
 
-	/* Must happen before power domain init on VLV/CHV */
-	intel_update_rawclk(i915);
-
 	if (INTEL_GEN(i915) >= 11) {
 		icl_display_core_init(i915, resume);
 	} else if (IS_CANNONLAKE(i915)) {
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 9541ab11624d..82baf5aba84b 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1213,13 +1213,14 @@ static u32 g4x_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
 	 * The clock divider is based off the hrawclk, and would like to run at
 	 * 2MHz.  So, take the hrawclk value and divide by 2000 and use that
 	 */
-	return DIV_ROUND_CLOSEST(dev_priv->rawclk_freq, 2000);
+	return DIV_ROUND_CLOSEST(RUNTIME_INFO(dev_priv)->rawclk_freq, 2000);
 }
 
 static u32 ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	u32 freq;
 
 	if (index)
 		return 0;
@@ -1230,9 +1231,10 @@ static u32 ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
 	 * divide by 2000 and use that
 	 */
 	if (dig_port->aux_ch == AUX_CH_A)
-		return DIV_ROUND_CLOSEST(dev_priv->cdclk.hw.cdclk, 2000);
+		freq = dev_priv->cdclk.hw.cdclk;
 	else
-		return DIV_ROUND_CLOSEST(dev_priv->rawclk_freq, 2000);
+		freq = RUNTIME_INFO(dev_priv)->rawclk_freq;
+	return DIV_ROUND_CLOSEST(freq, 2000);
 }
 
 static u32 hsw_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
@@ -6883,7 +6885,7 @@ intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp,
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 	u32 pp_on, pp_off, port_sel = 0;
-	int div = dev_priv->rawclk_freq / 1000;
+	int div = RUNTIME_INFO(dev_priv)->rawclk_freq / 1000;
 	struct pps_registers regs;
 	enum port port = dp_to_dig_port(intel_dp)->base.port;
 	const struct edp_power_seq *seq = &intel_dp->pps_delays;
diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c
index cba2f1c2557f..585688b6ebac 100644
--- a/drivers/gpu/drm/i915/display/intel_panel.c
+++ b/drivers/gpu/drm/i915/display/intel_panel.c
@@ -1406,7 +1406,8 @@ static u32 cnp_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 
-	return DIV_ROUND_CLOSEST(KHz(dev_priv->rawclk_freq), pwm_freq_hz);
+	return DIV_ROUND_CLOSEST(KHz(RUNTIME_INFO(dev_priv)->rawclk_freq),
+				 pwm_freq_hz);
 }
 
 /*
@@ -1467,7 +1468,8 @@ static u32 pch_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 
-	return DIV_ROUND_CLOSEST(KHz(dev_priv->rawclk_freq), pwm_freq_hz * 128);
+	return DIV_ROUND_CLOSEST(KHz(RUNTIME_INFO(dev_priv)->rawclk_freq),
+				 pwm_freq_hz * 128);
 }
 
 /*
@@ -1484,7 +1486,7 @@ static u32 i9xx_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 	int clock;
 
 	if (IS_PINEVIEW(dev_priv))
-		clock = KHz(dev_priv->rawclk_freq);
+		clock = KHz(RUNTIME_INFO(dev_priv)->rawclk_freq);
 	else
 		clock = KHz(dev_priv->cdclk.hw.cdclk);
 
@@ -1502,7 +1504,7 @@ static u32 i965_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 	int clock;
 
 	if (IS_G4X(dev_priv))
-		clock = KHz(dev_priv->rawclk_freq);
+		clock = KHz(RUNTIME_INFO(dev_priv)->rawclk_freq);
 	else
 		clock = KHz(dev_priv->cdclk.hw.cdclk);
 
@@ -1526,7 +1528,7 @@ static u32 vlv_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 			clock = MHz(25);
 		mul = 16;
 	} else {
-		clock = KHz(dev_priv->rawclk_freq);
+		clock = KHz(RUNTIME_INFO(dev_priv)->rawclk_freq);
 		mul = 128;
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 3330b538d379..9928d00ea0b1 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -992,7 +992,6 @@ struct drm_i915_private {
 	unsigned int max_cdclk_freq;
 
 	unsigned int max_dotclk_freq;
-	unsigned int rawclk_freq;
 	unsigned int hpll_freq;
 	unsigned int fdi_pll_freq;
 	unsigned int czclk_freq;
diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
index 18d9de488593..8e99ad097830 100644
--- a/drivers/gpu/drm/i915/intel_device_info.c
+++ b/drivers/gpu/drm/i915/intel_device_info.c
@@ -24,6 +24,7 @@
 
 #include <drm/drm_print.h>
 
+#include "display/intel_cdclk.h"
 #include "intel_device_info.h"
 #include "i915_drv.h"
 
@@ -132,6 +133,7 @@ void intel_device_info_print_runtime(const struct intel_runtime_info *info,
 {
 	sseu_dump(&info->sseu, p);
 
+	drm_printf(p, "rawclk rate: %u kHz\n", info->rawclk_freq);
 	drm_printf(p, "CS timestamp frequency: %u kHz\n",
 		   info->cs_timestamp_frequency_khz);
 }
@@ -743,7 +745,7 @@ static u32 read_timestamp_frequency(struct drm_i915_private *dev_priv)
 		 *      hclks." (through the “Clocking Configuration”
 		 *      (“CLKCFG”) MCHBAR register)
 		 */
-		return dev_priv->rawclk_freq / 16;
+		return RUNTIME_INFO(dev_priv)->rawclk_freq / 16;
 	} else if (INTEL_GEN(dev_priv) <= 8) {
 		/* PRMs say:
 		 *
@@ -1043,6 +1045,9 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
 		info->ppgtt_type = INTEL_PPGTT_NONE;
 	}
 
+	runtime->rawclk_freq = intel_read_rawclk(dev_priv);
+	drm_dbg(&dev_priv->drm, "rawclk rate: %d kHz\n", runtime->rawclk_freq);
+
 	/* Initialize command stream timestamp frequency */
 	runtime->cs_timestamp_frequency_khz =
 		read_timestamp_frequency(dev_priv);
diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
index f8bfa26388c1..1ecb9df2de91 100644
--- a/drivers/gpu/drm/i915/intel_device_info.h
+++ b/drivers/gpu/drm/i915/intel_device_info.h
@@ -216,6 +216,8 @@ struct intel_runtime_info {
 	/* Slice/subslice/EU info */
 	struct sseu_dev_info sseu;
 
+	u32 rawclk_freq;
+
 	u32 cs_timestamp_frequency_khz;
 	u32 cs_timestamp_period_ns;
 
-- 
2.25.0


* [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (7 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 09/12] drm/i915: Read rawclk_freq earlier Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 18:26   ` Brian Welty
  2020-02-18 21:24   ` Daniele Ceraolo Spurio
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 11/12] drm/i915/gt: Declare when we enabled timeslicing Chris Wilson
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx

On dgfx, we only use l3cc and not mocs, but we share the table of
register definitions with Tigerlake (which includes the mocs). This
confuses our selftest that verifies that the registers do contain the
values in our tables after various events (idling, reset, activity, etc.).

When constructing the table of register definitions, also include the
flags for which registers are valid so that information is computed
centrally and available to all callers.
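
The intended caller pattern is then a simple test of the returned flags,
e.g. (sketch only; in the patch the engine and global halves live in
intel_mocs_init_engine() and intel_mocs_init() respectively):

	flags = get_mocs_settings(i915, &table);
	if (flags & HAS_ENGINE_MOCS)
		init_mocs_table(engine, &table);
	if (flags & HAS_GLOBAL_MOCS)
		__init_mocs_table(gt->uncore, &table, global_mocs_offset());
	if (flags & HAS_RENDER_L3CC && engine->class == RENDER_CLASS)
		init_l3cc_table(engine, &table);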

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Brian Welty <brian.welty@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_mocs.c    | 72 +++++++++++++++++--------
 drivers/gpu/drm/i915/gt/selftest_mocs.c | 24 ++++++---
 2 files changed, 67 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_mocs.c b/drivers/gpu/drm/i915/gt/intel_mocs.c
index 0afc1eb3c20f..632e08a4592b 100644
--- a/drivers/gpu/drm/i915/gt/intel_mocs.c
+++ b/drivers/gpu/drm/i915/gt/intel_mocs.c
@@ -280,9 +280,32 @@ static const struct drm_i915_mocs_entry icl_mocs_table[] = {
 	GEN11_MOCS_ENTRIES
 };
 
-static bool get_mocs_settings(const struct drm_i915_private *i915,
-			      struct drm_i915_mocs_table *table)
+enum {
+	HAS_GLOBAL_MOCS = BIT(0),
+	HAS_ENGINE_MOCS = BIT(1),
+	HAS_RENDER_L3CC = BIT(2),
+};
+
+static bool has_l3cc(const struct drm_i915_private *i915)
 {
+	return true;
+}
+
+static bool has_global_mocs(const struct drm_i915_private *i915)
+{
+	return HAS_GLOBAL_MOCS_REGISTERS(i915);
+}
+
+static bool has_mocs(const struct drm_i915_private *i915)
+{
+	return !IS_DGFX(i915);
+}
+
+static unsigned int get_mocs_settings(const struct drm_i915_private *i915,
+				      struct drm_i915_mocs_table *table)
+{
+	unsigned int flags;
+
 	if (INTEL_GEN(i915) >= 12) {
 		table->size  = ARRAY_SIZE(tgl_mocs_table);
 		table->table = tgl_mocs_table;
@@ -302,11 +325,11 @@ static bool get_mocs_settings(const struct drm_i915_private *i915,
 	} else {
 		drm_WARN_ONCE(&i915->drm, INTEL_GEN(i915) >= 9,
 			      "Platform that should have a MOCS table does not.\n");
-		return false;
+		return 0;
 	}
 
 	if (GEM_DEBUG_WARN_ON(table->size > table->n_entries))
-		return false;
+		return 0;
 
 	/* WaDisableSkipCaching:skl,bxt,kbl,glk */
 	if (IS_GEN(i915, 9)) {
@@ -315,10 +338,20 @@ static bool get_mocs_settings(const struct drm_i915_private *i915,
 		for (i = 0; i < table->size; i++)
 			if (GEM_DEBUG_WARN_ON(table->table[i].l3cc_value &
 					      (L3_ESC(1) | L3_SCC(0x7))))
-				return false;
+				return 0;
 	}
 
-	return true;
+	flags = 0;
+	if (has_mocs(i915)) {
+		if (has_global_mocs(i915))
+			flags |= HAS_GLOBAL_MOCS;
+		else
+			flags |= HAS_ENGINE_MOCS;
+	}
+	if (has_l3cc(i915))
+		flags |= HAS_RENDER_L3CC;
+
+	return flags;
 }
 
 /*
@@ -411,18 +444,20 @@ static void init_l3cc_table(struct intel_engine_cs *engine,
 void intel_mocs_init_engine(struct intel_engine_cs *engine)
 {
 	struct drm_i915_mocs_table table;
+	unsigned int flags;
 
 	/* Called under a blanket forcewake */
 	assert_forcewakes_active(engine->uncore, FORCEWAKE_ALL);
 
-	if (!get_mocs_settings(engine->i915, &table))
+	flags = get_mocs_settings(engine->i915, &table);
+	if (!flags)
 		return;
 
 	/* Platforms with global MOCS do not need per-engine initialization. */
-	if (!HAS_GLOBAL_MOCS_REGISTERS(engine->i915))
+	if (flags & HAS_ENGINE_MOCS)
 		init_mocs_table(engine, &table);
 
-	if (engine->class == RENDER_CLASS)
+	if (flags & HAS_RENDER_L3CC && engine->class == RENDER_CLASS)
 		init_l3cc_table(engine, &table);
 }
 
@@ -431,26 +466,17 @@ static u32 global_mocs_offset(void)
 	return i915_mmio_reg_offset(GEN12_GLOBAL_MOCS(0));
 }
 
-static void init_global_mocs(struct intel_gt *gt)
+void intel_mocs_init(struct intel_gt *gt)
 {
 	struct drm_i915_mocs_table table;
+	unsigned int flags;
 
 	/*
 	 * LLC and eDRAM control values are not applicable to dgfx
 	 */
-	if (IS_DGFX(gt->i915))
-		return;
-
-	if (!get_mocs_settings(gt->i915, &table))
-		return;
-
-	__init_mocs_table(gt->uncore, &table, global_mocs_offset());
-}
-
-void intel_mocs_init(struct intel_gt *gt)
-{
-	if (HAS_GLOBAL_MOCS_REGISTERS(gt->i915))
-		init_global_mocs(gt);
+	flags = get_mocs_settings(gt->i915, &table);
+	if (flags & HAS_GLOBAL_MOCS)
+		__init_mocs_table(gt->uncore, &table, global_mocs_offset());
 }
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index de1f83100fb6..8831ffee2061 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -12,7 +12,8 @@
 #include "selftests/igt_spinner.h"
 
 struct live_mocs {
-	struct drm_i915_mocs_table table;
+	struct drm_i915_mocs_table mocs;
+	struct drm_i915_mocs_table l3cc;
 	struct i915_vma *scratch;
 	void *vaddr;
 };
@@ -70,11 +71,22 @@ static struct i915_vma *create_scratch(struct intel_gt *gt)
 
 static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
 {
+	struct drm_i915_mocs_table table;
+	unsigned int flags;
 	int err;
 
-	if (!get_mocs_settings(gt->i915, &arg->table))
+	memset(arg, 0, sizeof(*arg));
+
+	flags = get_mocs_settings(gt->i915, &table);
+	if (!flags)
 		return -EINVAL;
 
+	if (flags & HAS_RENDER_L3CC)
+		arg->l3cc = table;
+
+	if (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS))
+		arg->mocs = table;
+
 	arg->scratch = create_scratch(gt);
 	if (IS_ERR(arg->scratch))
 		return PTR_ERR(arg->scratch);
@@ -223,9 +235,9 @@ static int check_mocs_engine(struct live_mocs *arg,
 	/* Read the mocs tables back using SRM */
 	offset = i915_ggtt_offset(vma);
 	if (!err)
-		err = read_mocs_table(rq, &arg->table, &offset);
+		err = read_mocs_table(rq, &arg->mocs, &offset);
 	if (!err && ce->engine->class == RENDER_CLASS)
-		err = read_l3cc_table(rq, &arg->table, &offset);
+		err = read_l3cc_table(rq, &arg->l3cc, &offset);
 	offset -= i915_ggtt_offset(vma);
 	GEM_BUG_ON(offset > PAGE_SIZE);
 
@@ -236,9 +248,9 @@ static int check_mocs_engine(struct live_mocs *arg,
 	/* Compare the results against the expected tables */
 	vaddr = arg->vaddr;
 	if (!err)
-		err = check_mocs_table(ce->engine, &arg->table, &vaddr);
+		err = check_mocs_table(ce->engine, &arg->mocs, &vaddr);
 	if (!err && ce->engine->class == RENDER_CLASS)
-		err = check_l3cc_table(ce->engine, &arg->table, &vaddr);
+		err = check_l3cc_table(ce->engine, &arg->l3cc, &vaddr);
 	if (err)
 		return err;
 
-- 
2.25.0


* [Intel-gfx] [PATCH 11/12] drm/i915/gt: Declare when we enabled timeslicing
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (8 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 12/12] drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore Chris Wilson
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx; +Cc: Kenneth Graunke

Let userspace know whether it can trust timeslicing by including it as
part of I915_PARAM_HAS_SCHEDULER::I915_SCHEDULER_CAP_TIMESLICING.

v2: Only declare timeslicing if we can safely preempt userspace.
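
From userspace the cap is visible through the existing
I915_PARAM_HAS_SCHEDULER getparam; a minimal sketch of the query (the
helper name is illustrative, error handling elided):

#include <stdbool.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static bool has_timeslicing(int fd) /* fd: open i915 DRM fd */
{
	int caps = 0;
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_HAS_SCHEDULER,
		.value = &caps,
	};

	if (ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
		return false;

	return caps & I915_SCHEDULER_CAP_TIMESLICING;
}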

Fixes: 8ee36e048c98 ("drm/i915/execlists: Minimalistic timeslicing")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Kenneth Graunke <kenneth@whitecape.org>
---
 drivers/gpu/drm/i915/gt/intel_engine.h      | 3 ++-
 drivers/gpu/drm/i915/gt/intel_engine_user.c | 5 +++++
 include/uapi/drm/i915_drm.h                 | 1 +
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index 29c8c03c5caa..a32dc82a90d4 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -326,7 +326,8 @@ intel_engine_has_timeslices(const struct intel_engine_cs *engine)
 	if (!IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION))
 		return false;
 
-	return intel_engine_has_semaphores(engine);
+	return (intel_engine_has_semaphores(engine) &&
+		intel_engine_has_preemption(engine));
 }
 
 #endif /* _INTEL_RINGBUFFER_H_ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_user.c b/drivers/gpu/drm/i915/gt/intel_engine_user.c
index 848decee9066..b84fdd722781 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_user.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_user.c
@@ -121,6 +121,11 @@ static void set_scheduler_caps(struct drm_i915_private *i915)
 			else
 				disabled |= BIT(map[i].sched);
 		}
+
+		if (intel_engine_has_timeslices(engine))
+			enabled |= I915_SCHEDULER_CAP_TIMESLICING;
+		else
+			disabled |= I915_SCHEDULER_CAP_TIMESLICING;
 	}
 
 	i915->caps.scheduler = enabled & ~disabled;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 829c0a48577f..f4d521f51258 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -523,6 +523,7 @@ typedef struct drm_i915_irq_wait {
 #define   I915_SCHEDULER_CAP_PREEMPTION	(1ul << 2)
 #define   I915_SCHEDULER_CAP_SEMAPHORES	(1ul << 3)
 #define   I915_SCHEDULER_CAP_ENGINE_BUSY_STATS	(1ul << 4)
+#define   I915_SCHEDULER_CAP_TIMESLICING	(1ul << 5)
 
 #define I915_PARAM_HUC_STATUS		 42
 
-- 
2.25.0


* [Intel-gfx] [PATCH 12/12] drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (9 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 11/12] drm/i915/gt: Declare when we enabled timeslicing Chris Wilson
@ 2020-02-18 16:21 ` Chris Wilson
  2020-02-18 20:08 ` [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Matthew Auld
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 16:21 UTC (permalink / raw)
  To: intel-gfx; +Cc: Kenneth Graunke

If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the
user batch or in our own preamble, the engine raises a
GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so
respond to a semaphore wait by yielding the timeslice, if we have
another context to yield to!

The only real complication is that the interrupt is only generated for
the start of the semaphore wait, and is asynchronous to our
process_csb() -- that is, we may not have registered the timeslice before
we see the interrupt. To ensure we don't miss a potential semaphore
blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark
the interrupt and apply it to the next timeslice regardless of whether it
was active at the time.

v2: We use semaphores in preempt-to-busy, within the timeslicing
implementation itself! Ergo, when we do insert a preemption due to an
expired timeslice, the new context may start with the missed semaphore
flagged by the retired context and be yielded, ad infinitum. To avoid
this, read the context id at the time of the semaphore interrupt and
only yield if that context is still active.

Fixes: 8ee36e048c98 ("drm/i915/execlists: Minimalistic timeslicing")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c    |  6 +++
 drivers/gpu/drm/i915/gt/intel_engine_types.h |  9 +++++
 drivers/gpu/drm/i915/gt/intel_gt_irq.c       | 13 ++++++-
 drivers/gpu/drm/i915/gt/intel_lrc.c          | 40 +++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h              |  1 +
 5 files changed, 61 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index e46e55354e95..1a2e9610f1a8 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1288,6 +1288,12 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 
 	if (engine->id == RENDER_CLASS && IS_GEN_RANGE(dev_priv, 4, 7))
 		drm_printf(m, "\tCCID: 0x%08x\n", ENGINE_READ(engine, CCID));
+	if (HAS_EXECLISTS(dev_priv)) {
+		drm_printf(m, "\tEL_STAT_HI: 0x%08x\n",
+			   ENGINE_READ(engine, RING_EXECLIST_STATUS_HI));
+		drm_printf(m, "\tEL_STAT_LO: 0x%08x\n",
+			   ENGINE_READ(engine, RING_EXECLIST_STATUS_LO));
+	}
 	drm_printf(m, "\tRING_START: 0x%08x\n",
 		   ENGINE_READ(engine, RING_START));
 	drm_printf(m, "\tRING_HEAD:  0x%08x\n",
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index b23366a81048..24cff658e6e5 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -156,6 +156,15 @@ struct intel_engine_execlists {
 	 */
 	struct i915_priolist default_priolist;
 
+	/**
+	 * @yield: CCID at the time of the last semaphore-wait interrupt.
+	 *
+	 * Instead of leaving a semaphore busy-spinning on an engine, we would
+	 * like to switch to another ready context, i.e. yielding the semaphore
+	 * timeslice.
+	 */
+	u32 yield;
+
 	/**
 	 * @error_interrupt: CS Master EIR
 	 *
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_irq.c b/drivers/gpu/drm/i915/gt/intel_gt_irq.c
index f0e7fd95165a..875bd0392ffc 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_irq.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_irq.c
@@ -39,6 +39,13 @@ cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 		}
 	}
 
+	if (iir & GT_WAIT_SEMAPHORE_INTERRUPT) {
+		WRITE_ONCE(engine->execlists.yield,
+			   ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI));
+		if (del_timer(&engine->execlists.timer))
+			tasklet = true;
+	}
+
 	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
 		tasklet = true;
 
@@ -228,7 +235,8 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
 	const u32 irqs =
 		GT_CS_MASTER_ERROR_INTERRUPT |
 		GT_RENDER_USER_INTERRUPT |
-		GT_CONTEXT_SWITCH_INTERRUPT;
+		GT_CONTEXT_SWITCH_INTERRUPT |
+		GT_WAIT_SEMAPHORE_INTERRUPT;
 	struct intel_uncore *uncore = gt->uncore;
 	const u32 dmask = irqs << 16 | irqs;
 	const u32 smask = irqs << 16;
@@ -366,7 +374,8 @@ void gen8_gt_irq_postinstall(struct intel_gt *gt)
 	const u32 irqs =
 		GT_CS_MASTER_ERROR_INTERRUPT |
 		GT_RENDER_USER_INTERRUPT |
-		GT_CONTEXT_SWITCH_INTERRUPT;
+		GT_CONTEXT_SWITCH_INTERRUPT |
+		GT_WAIT_SEMAPHORE_INTERRUPT;
 	const u32 gt_interrupts[] = {
 		irqs << GEN8_RCS_IRQ_SHIFT | irqs << GEN8_BCS_IRQ_SHIFT,
 		irqs << GEN8_VCS0_IRQ_SHIFT | irqs << GEN8_VCS1_IRQ_SHIFT,
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index f2c49dc1e6b4..e1f0d7f71787 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1749,7 +1749,8 @@ static void defer_active(struct intel_engine_cs *engine)
 }
 
 static bool
-need_timeslice(struct intel_engine_cs *engine, const struct i915_request *rq)
+need_timeslice(const struct intel_engine_cs *engine,
+	       const struct i915_request *rq)
 {
 	int hint;
 
@@ -1765,6 +1766,31 @@ need_timeslice(struct intel_engine_cs *engine, const struct i915_request *rq)
 	return hint >= effective_prio(rq);
 }
 
+static bool
+timeslice_yield(const struct intel_engine_execlists *el,
+		const struct i915_request *rq)
+{
+	/*
+	 * Once bitten, forever smitten!
+	 *
+	 * If the active context ever busy-waited on a semaphore,
+	 * it will be treated as a hog until the end of its timeslice.
+	 * The HW only sends an interrupt on the first miss, and we
+	 * do not know if that semaphore has been signaled, or even if it
+	 * is now stuck on another semaphore. Play safe, yield if it
+	 * might be stuck -- it will be given a fresh timeslice in
+	 * the near future.
+	 */
+	return upper_32_bits(rq->context->lrc_desc) == READ_ONCE(el->yield);
+}
+
+static bool
+timeslice_expired(const struct intel_engine_execlists *el,
+		  const struct i915_request *rq)
+{
+	return timer_expired(&el->timer) || timeslice_yield(el, rq);
+}
+
 static int
 switch_prio(struct intel_engine_cs *engine, const struct i915_request *rq)
 {
@@ -1780,8 +1806,7 @@ timeslice(const struct intel_engine_cs *engine)
 	return READ_ONCE(engine->props.timeslice_duration_ms);
 }
 
-static unsigned long
-active_timeslice(const struct intel_engine_cs *engine)
+static unsigned long active_timeslice(const struct intel_engine_cs *engine)
 {
 	const struct i915_request *rq = *engine->execlists.active;
 
@@ -1924,13 +1949,14 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 
 			last = NULL;
 		} else if (need_timeslice(engine, last) &&
-			   timer_expired(&engine->execlists.timer)) {
+			   timeslice_expired(execlists, last)) {
 			ENGINE_TRACE(engine,
-				     "expired last=%llx:%lld, prio=%d, hint=%d\n",
+				     "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
 				     last->fence.context,
 				     last->fence.seqno,
 				     last->sched.attr.priority,
-				     execlists->queue_priority_hint);
+				     execlists->queue_priority_hint,
+				     yesno(timeslice_yield(execlists, last)));
 
 			ring_set_paused(engine, 1);
 			defer_active(engine);
@@ -2190,6 +2216,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		}
 		clear_ports(port + 1, last_port - port);
 
+		WRITE_ONCE(execlists->yield, -1);
 		execlists_submit_ports(engine);
 		set_preempt_timeout(engine);
 	} else {
@@ -4430,6 +4457,7 @@ logical_ring_default_irqs(struct intel_engine_cs *engine)
 	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
 	engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
 	engine->irq_keep_mask |= GT_CS_MASTER_ERROR_INTERRUPT << shift;
+	engine->irq_keep_mask |= GT_WAIT_SEMAPHORE_INTERRUPT << shift;
 }
 
 static void rcs_submission_override(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index b09c1d6dc0aa..0f1fcc863f3d 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -3090,6 +3090,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define GT_BSD_CS_ERROR_INTERRUPT		(1 << 15)
 #define GT_BSD_USER_INTERRUPT			(1 << 12)
 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1	(1 << 11) /* hsw+; rsvd on snb, ivb, vlv */
+#define GT_WAIT_SEMAPHORE_INTERRUPT		REG_BIT(11) /* bdw+ */
 #define GT_CONTEXT_SWITCH_INTERRUPT		(1 <<  8)
 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT	(1 <<  5) /* !snb */
 #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT	(1 <<  4)
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 23+ messages in thread
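
A quick illustration of how the pieces of the patch above fit together: the
semaphore-wait interrupt latches the CCID of the busy-waiting context in
execlists->yield, the next dequeue treats the active context as having expired
its slice if its CCID matches the latch, and the latch is reset to -1 on every
new submission. The following is a minimal standalone model of that decision,
not the driver code itself; the names model_execlists, model_yield,
model_expired and active_ccid are stand-ins for the structures and helpers in
the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct model_execlists {
	uint32_t yield;		/* CCID latched by the semaphore-wait irq */
	bool timer_expired;	/* stand-in for timer_expired(&el->timer) */
};

/* stand-in for timeslice_yield(): compare the active CCID against the latch */
static bool model_yield(const struct model_execlists *el, uint32_t active_ccid)
{
	return active_ccid == el->yield;
}

static bool model_expired(const struct model_execlists *el, uint32_t active_ccid)
{
	return el->timer_expired || model_yield(el, active_ccid);
}

int main(void)
{
	struct model_execlists el = { .yield = (uint32_t)-1 };

	/* GT_WAIT_SEMAPHORE_INTERRUPT fires: latch the CCID of the waiter */
	el.yield = 0x1234;

	/* the busy-waiting context is asked to yield its slice... */
	printf("ccid 0x1234 expired? %d\n", model_expired(&el, 0x1234));
	/* ...while an unrelated context keeps running */
	printf("ccid 0x5678 expired? %d\n", model_expired(&el, 0x5678));

	/* as in the patch, the latch is reset before the next submission */
	el.yield = (uint32_t)-1;
	return 0;
}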

* Re: [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability Chris Wilson
@ 2020-02-18 18:26   ` Brian Welty
  2020-02-18 18:33     ` Chris Wilson
  2020-02-18 21:24   ` Daniele Ceraolo Spurio
  1 sibling, 1 reply; 23+ messages in thread
From: Brian Welty @ 2020-02-18 18:26 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 2/18/2020 8:21 AM, Chris Wilson wrote:
> On dgfx, we only use l3cc and not mocs, but we share the table of
> register definitions with Tigerlake (which includes the mocs). This
> confuses our selftest that verifies that the registers do contain the
> values in our tables after various events (idling, reset, activity etc).
> 
> When constructing the table of register definitions, also include the
> flags for which registers are valid so that information is computed
> centrally and available to all callers.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Brian Welty <brian.welty@intel.com>
> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_mocs.c    | 72 +++++++++++++++++--------
>  drivers/gpu/drm/i915/gt/selftest_mocs.c | 24 ++++++---
>  2 files changed, 67 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_mocs.c b/drivers/gpu/drm/i915/gt/intel_mocs.c
> index 0afc1eb3c20f..632e08a4592b 100644
> --- a/drivers/gpu/drm/i915/gt/intel_mocs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_mocs.c
> @@ -280,9 +280,32 @@ static const struct drm_i915_mocs_entry icl_mocs_table[] = {
>  	GEN11_MOCS_ENTRIES
>  };
>  
> -static bool get_mocs_settings(const struct drm_i915_private *i915,
> -			      struct drm_i915_mocs_table *table)
> +enum {
> +	HAS_GLOBAL_MOCS = BIT(0),
> +	HAS_ENGINE_MOCS = BIT(1),
> +	HAS_RENDER_L3CC = BIT(2),
> +};
> +
> +static bool has_l3cc(const struct drm_i915_private *i915)
>  {
> +	return true;
> +}
> +
> +static bool has_global_mocs(const struct drm_i915_private *i915)
> +{
> +	return HAS_GLOBAL_MOCS_REGISTERS(i915);
> +}
> +
> +static bool has_mocs(const struct drm_i915_private *i915)
> +{
> +	return !IS_DGFX(i915);
> +}
> +
> +static unsigned int get_mocs_settings(const struct drm_i915_private *i915,
> +				      struct drm_i915_mocs_table *table)
> +{
> +	unsigned int flags;
> +
>  	if (INTEL_GEN(i915) >= 12) {
>  		table->size  = ARRAY_SIZE(tgl_mocs_table);
>  		table->table = tgl_mocs_table;
> @@ -302,11 +325,11 @@ static bool get_mocs_settings(const struct drm_i915_private *i915,
>  	} else {
>  		drm_WARN_ONCE(&i915->drm, INTEL_GEN(i915) >= 9,
>  			      "Platform that should have a MOCS table does not.\n");
> -		return false;
> +		return 0;
>  	}
>  
>  	if (GEM_DEBUG_WARN_ON(table->size > table->n_entries))
> -		return false;
> +		return 0;
>  
>  	/* WaDisableSkipCaching:skl,bxt,kbl,glk */
>  	if (IS_GEN(i915, 9)) {
> @@ -315,10 +338,20 @@ static bool get_mocs_settings(const struct drm_i915_private *i915,
>  		for (i = 0; i < table->size; i++)
>  			if (GEM_DEBUG_WARN_ON(table->table[i].l3cc_value &
>  					      (L3_ESC(1) | L3_SCC(0x7))))
> -				return false;
> +				return 0;
>  	}
>  
> -	return true;
> +	flags = 0;
> +	if (has_mocs(i915)) {
> +		if (has_global_mocs(i915))
> +			flags |= HAS_GLOBAL_MOCS;
> +		else
> +			flags |= HAS_ENGINE_MOCS;
> +	}
> +	if (has_l3cc(i915))
> +		flags |= HAS_RENDER_L3CC;
> +
> +	return flags;
>  }
>  
>  /*
> @@ -411,18 +444,20 @@ static void init_l3cc_table(struct intel_engine_cs *engine,
>  void intel_mocs_init_engine(struct intel_engine_cs *engine)
>  {
>  	struct drm_i915_mocs_table table;
> +	unsigned int flags;
>  
>  	/* Called under a blanket forcewake */
>  	assert_forcewakes_active(engine->uncore, FORCEWAKE_ALL);
>  
> -	if (!get_mocs_settings(engine->i915, &table))
> +	flags = get_mocs_settings(engine->i915, &table);
> +	if (!flags)
>  		return;
>  
>  	/* Platforms with global MOCS do not need per-engine initialization. */
> -	if (!HAS_GLOBAL_MOCS_REGISTERS(engine->i915))
> +	if (flags & HAS_ENGINE_MOCS)
>  		init_mocs_table(engine, &table);
>  
> -	if (engine->class == RENDER_CLASS)
> +	if (flags & HAS_RENDER_L3CC && engine->class == RENDER_CLASS)
>  		init_l3cc_table(engine, &table);
>  }
>  
> @@ -431,26 +466,17 @@ static u32 global_mocs_offset(void)
>  	return i915_mmio_reg_offset(GEN12_GLOBAL_MOCS(0));
>  }
>  
> -static void init_global_mocs(struct intel_gt *gt)
> +void intel_mocs_init(struct intel_gt *gt)
>  {
>  	struct drm_i915_mocs_table table;
> +	unsigned int flags;
>  
>  	/*
>  	 * LLC and eDRAM control values are not applicable to dgfx
>  	 */
> -	if (IS_DGFX(gt->i915))
> -		return;
> -
> -	if (!get_mocs_settings(gt->i915, &table))
> -		return;
> -
> -	__init_mocs_table(gt->uncore, &table, global_mocs_offset());
> -}
> -
> -void intel_mocs_init(struct intel_gt *gt)
> -{
> -	if (HAS_GLOBAL_MOCS_REGISTERS(gt->i915))
> -		init_global_mocs(gt);
> +	flags = get_mocs_settings(gt->i915, &table);
> +	if (flags & HAS_GLOBAL_MOCS)
> +		__init_mocs_table(gt->uncore, &table, global_mocs_offset());
>  }
>  
>  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
> index de1f83100fb6..8831ffee2061 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
> @@ -12,7 +12,8 @@
>  #include "selftests/igt_spinner.h"
>  
>  struct live_mocs {
> -	struct drm_i915_mocs_table table;
> +	struct drm_i915_mocs_table mocs;
> +	struct drm_i915_mocs_table l3cc;
>  	struct i915_vma *scratch;
>  	void *vaddr;
>  };
> @@ -70,11 +71,22 @@ static struct i915_vma *create_scratch(struct intel_gt *gt)
>  
>  static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
>  {
> +	struct drm_i915_mocs_table table;
> +	unsigned int flags;
>  	int err;
>  
> -	if (!get_mocs_settings(gt->i915, &arg->table))
> +	memset(arg, 0, sizeof(*arg));
> +
> +	flags = get_mocs_settings(gt->i915, &table);
> +	if (!flags)
>  		return -EINVAL;
>  
> +	if (flags & HAS_RENDER_L3CC)
> +		arg->l3cc = table;
> +
> +	if (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS))
> +		arg->mocs = table;
> +
>  	arg->scratch = create_scratch(gt);
>  	if (IS_ERR(arg->scratch))
>  		return PTR_ERR(arg->scratch);
> @@ -223,9 +235,9 @@ static int check_mocs_engine(struct live_mocs *arg,
>  	/* Read the mocs tables back using SRM */
>  	offset = i915_ggtt_offset(vma);
>  	if (!err)
> -		err = read_mocs_table(rq, &arg->table, &offset);
> +		err = read_mocs_table(rq, &arg->mocs, &offset);
>  	if (!err && ce->engine->class == RENDER_CLASS)
> -		err = read_l3cc_table(rq, &arg->table, &offset);
> +		err = read_l3cc_table(rq, &arg->l3cc, &offset);


The above functions will call read_regs().
I thought we'd still want to avoid this.  Shouldn't you test flags here?
Maybe store the flags in struct live_mocs and do something like:

 	if (!err && (arg->flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS)))
		err = read_mocs_table(rq, &arg->mocs, &offset);
 	if (!err && (arg->flags & HAS_RENDER_L3CC) && ce->engine->class == RENDER_CLASS)
		err = read_l3cc_table(rq, &arg->l3cc, &offset);


>  	offset -= i915_ggtt_offset(vma);
>  	GEM_BUG_ON(offset > PAGE_SIZE);
>  
> @@ -236,9 +248,9 @@ static int check_mocs_engine(struct live_mocs *arg,
>  	/* Compare the results against the expected tables */
>  	vaddr = arg->vaddr;
>  	if (!err)
> -		err = check_mocs_table(ce->engine, &arg->table, &vaddr);
> +		err = check_mocs_table(ce->engine, &arg->mocs, &vaddr);
>  	if (!err && ce->engine->class == RENDER_CLASS)
> -		err = check_l3cc_table(ce->engine, &arg->table, &vaddr);
> +		err = check_l3cc_table(ce->engine, &arg->l3cc, &vaddr);
>  	if (err)
>  		return err;
>  
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability
  2020-02-18 18:26   ` Brian Welty
@ 2020-02-18 18:33     ` Chris Wilson
  0 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 18:33 UTC (permalink / raw)
  To: Brian Welty, intel-gfx

Quoting Brian Welty (2020-02-18 18:26:40)
> 
> On 2/18/2020 8:21 AM, Chris Wilson wrote:
> > @@ -223,9 +235,9 @@ static int check_mocs_engine(struct live_mocs *arg,
> >       /* Read the mocs tables back using SRM */
> >       offset = i915_ggtt_offset(vma);
> >       if (!err)
> > -             err = read_mocs_table(rq, &arg->table, &offset);
> > +             err = read_mocs_table(rq, &arg->mocs, &offset);
> >       if (!err && ce->engine->class == RENDER_CLASS)
> > -             err = read_l3cc_table(rq, &arg->table, &offset);
> > +             err = read_l3cc_table(rq, &arg->l3cc, &offset);
> 
> 
> Above functions will call read_regs().

And do nothing.

Perhaps using the flags for read_mocs_table() would be nice to eliminate
the HAS_GLOBAL_MOCS_REGISTERS(). Or creating global_mocs, engine_mocs,
render_l3cc. That might be overkill.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread
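
As a rough illustration of the flag-gating Brian suggests, and of the
read_mocs_table()/read_l3cc_table() split discussed above, here is a tiny
standalone model; check_engine, read_mocs and read_l3cc are stand-ins for the
selftest helpers, and the flag values simply mirror the enum added by the
patch.

#include <stdbool.h>
#include <stdio.h>

enum {
	HAS_GLOBAL_MOCS = 1 << 0,
	HAS_ENGINE_MOCS = 1 << 1,
	HAS_RENDER_L3CC = 1 << 2,
};

/* stand-ins for read_mocs_table()/read_l3cc_table() */
static int read_mocs(void) { puts("SRM the mocs registers"); return 0; }
static int read_l3cc(void) { puts("SRM the l3cc registers"); return 0; }

/* stand-in for the flag-gated reads in check_mocs_engine() */
static int check_engine(unsigned int flags, bool is_render)
{
	int err = 0;

	if (!err && (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS)))
		err = read_mocs();
	if (!err && (flags & HAS_RENDER_L3CC) && is_render)
		err = read_l3cc();

	return err;
}

int main(void)
{
	/* e.g. a dgfx render engine: only the l3cc read is issued */
	return check_engine(HAS_RENDER_L3CC, true);
}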

* Re: [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit()
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (10 preceding siblings ...)
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 12/12] drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore Chris Wilson
@ 2020-02-18 20:08 ` Matthew Auld
  2020-02-18 21:16 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] " Patchwork
  2020-02-18 21:53 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  13 siblings, 0 replies; 23+ messages in thread
From: Matthew Auld @ 2020-02-18 20:08 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development, Matthew Auld

On Tue, 18 Feb 2020 at 16:22, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> We only want to wait until the request has been submitted at least once;
> that is it is either in flight, or has been.
>
> References: fcf7df7aae24 ("drm/i915/selftests: Check for the error interrupt before we wait!")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 04/12] drm/i915/gt: Fix up missing error propagation for heartbeat pulses
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 04/12] drm/i915/gt: Fix up missing error propagation for heartbeat pulses Chris Wilson
@ 2020-02-18 20:18   ` Matthew Auld
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Auld @ 2020-02-18 20:18 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On Tue, 18 Feb 2020 at 16:22, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> Just missed setting err along an interruptible error path for the
> intel_engine_pulse().
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 05/12] drm/i915/gt: Prevent allocation on a banned context
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 05/12] drm/i915/gt: Prevent allocation on a banned context Chris Wilson
@ 2020-02-18 20:22   ` Matthew Auld
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Auld @ 2020-02-18 20:22 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On Tue, 18 Feb 2020 at 16:22, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> If a context is banned even before we submit our first request to it,
> report the failure before we attempt to allocate any resources for the
> context.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 06/12] drm/i915/gem: Check that the context wasn't closed during setup
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 06/12] drm/i915/gem: Check that the context wasn't closed during setup Chris Wilson
@ 2020-02-18 20:38   ` Matthew Auld
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Auld @ 2020-02-18 20:38 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On Tue, 18 Feb 2020 at 16:22, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> As setup takes a long time, the user may close the context during the
> construction of the execbuf. In order to make sure we correctly track
> all outstanding work with non-persistent contexts, we need to serialise
> the submission with the context closure and mop up any leaks.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug Chris Wilson
@ 2020-02-18 20:45   ` Matthew Auld
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Auld @ 2020-02-18 20:45 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On Tue, 18 Feb 2020 at 16:22, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> As we have the total runtime known to us, show it when dumping the
> engine state for debug.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit()
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (11 preceding siblings ...)
  2020-02-18 20:08 ` [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Matthew Auld
@ 2020-02-18 21:16 ` Patchwork
  2020-02-18 21:53 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  13 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2020-02-18 21:16 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit()
URL   : https://patchwork.freedesktop.org/series/73583/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
181edb75b6c9 drm/i915/gt: Show the cumulative context runtime in engine debug
-:36: WARNING:LONG_LINE: line over 100 characters
#36: FILE: drivers/gpu/drm/i915/gt/intel_engine_cs.c:1393:
+						DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),

total: 0 errors, 1 warnings, 0 checks, 22 lines checked
a2db240005d4 drm/i915/execlists: Check the sentinel is alone in the ELSP
9ba40b15b759 drm/i915/gt: Prevent allocation on a banned context
541f72809fce drm/i915/gem: Check that the context wasn't closed during setup
cf261859e482 drm/i915/gem: Consolidate ctx->engines[] release
1d06fdaaf50f drm/i915/selftest: Analyse timestamp behaviour across context switches
-:137: WARNING:MEMORY_BARRIER: memory barrier without comment
#137: FILE: drivers/gpu/drm/i915/gt/selftest_lrc.c:4574:
+		wmb();

-:151: WARNING:MEMORY_BARRIER: memory barrier without comment
#151: FILE: drivers/gpu/drm/i915/gt/selftest_lrc.c:4588:
+	rmb();

total: 0 errors, 2 warnings, 0 checks, 242 lines checked
2c32b549cc8d drm/i915: Read rawclk_freq earlier
8fd6b67e363b drm/i915/gt: Refactor l3cc/mocs availability
cf0279a9dfdd drm/i915/gt: Declare when we enabled timeslicing
04c7e9bbd9a2 drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability
  2020-02-18 16:21 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability Chris Wilson
  2020-02-18 18:26   ` Brian Welty
@ 2020-02-18 21:24   ` Daniele Ceraolo Spurio
  2020-02-18 21:38     ` Chris Wilson
  1 sibling, 1 reply; 23+ messages in thread
From: Daniele Ceraolo Spurio @ 2020-02-18 21:24 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx



On 2/18/20 8:21 AM, Chris Wilson wrote:
> On dgfx, we only use l3cc and not mocs, but we share the table of
> register definitions with Tigerlake (which includes the mocs). This

Just a small correction here: the problem is not that the Tigerlake 
definitions will be shared (which is not necessarily going to happen), 
but that our table entry definition contains both l3cc and mocs and 
there is currently no way to know if only one of the two is valid. We 
could split the table, but IMO that'd be overkill and it'll make things 
messier for integrated platforms that have both, so I prefer the 
approach in this patch.

> confuses our selftest that verifies that the registers do contain the
> values in our tables after various events (idling, reset, activity etc).
> 
> When constructing the table of register definitions, also include the
> flags for which registers are valid so that information is computed
> centrally and available to all callers.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Brian Welty <brian.welty@intel.com>
> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

Was confused for a moment by the uninitialized table passed to 
read_mocs_table(), but we're ok because we memset it to 0 and therefore 
table->n_entries is zero. Maybe worth adding a check to avoid calling 
ring_begin() and ring_advance() in read_regs() in that scenario?

Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

Daniele

> ---
>   drivers/gpu/drm/i915/gt/intel_mocs.c    | 72 +++++++++++++++++--------
>   drivers/gpu/drm/i915/gt/selftest_mocs.c | 24 ++++++---
>   2 files changed, 67 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_mocs.c b/drivers/gpu/drm/i915/gt/intel_mocs.c
> index 0afc1eb3c20f..632e08a4592b 100644
> --- a/drivers/gpu/drm/i915/gt/intel_mocs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_mocs.c
> @@ -280,9 +280,32 @@ static const struct drm_i915_mocs_entry icl_mocs_table[] = {
>   	GEN11_MOCS_ENTRIES
>   };
>   
> -static bool get_mocs_settings(const struct drm_i915_private *i915,
> -			      struct drm_i915_mocs_table *table)
> +enum {
> +	HAS_GLOBAL_MOCS = BIT(0),
> +	HAS_ENGINE_MOCS = BIT(1),
> +	HAS_RENDER_L3CC = BIT(2),
> +};
> +
> +static bool has_l3cc(const struct drm_i915_private *i915)
>   {
> +	return true;
> +}
> +
> +static bool has_global_mocs(const struct drm_i915_private *i915)
> +{
> +	return HAS_GLOBAL_MOCS_REGISTERS(i915);
> +}
> +
> +static bool has_mocs(const struct drm_i915_private *i915)
> +{
> +	return !IS_DGFX(i915);
> +}
> +
> +static unsigned int get_mocs_settings(const struct drm_i915_private *i915,
> +				      struct drm_i915_mocs_table *table)
> +{
> +	unsigned int flags;
> +
>   	if (INTEL_GEN(i915) >= 12) {
>   		table->size  = ARRAY_SIZE(tgl_mocs_table);
>   		table->table = tgl_mocs_table;
> @@ -302,11 +325,11 @@ static bool get_mocs_settings(const struct drm_i915_private *i915,
>   	} else {
>   		drm_WARN_ONCE(&i915->drm, INTEL_GEN(i915) >= 9,
>   			      "Platform that should have a MOCS table does not.\n");
> -		return false;
> +		return 0;
>   	}
>   
>   	if (GEM_DEBUG_WARN_ON(table->size > table->n_entries))
> -		return false;
> +		return 0;
>   
>   	/* WaDisableSkipCaching:skl,bxt,kbl,glk */
>   	if (IS_GEN(i915, 9)) {
> @@ -315,10 +338,20 @@ static bool get_mocs_settings(const struct drm_i915_private *i915,
>   		for (i = 0; i < table->size; i++)
>   			if (GEM_DEBUG_WARN_ON(table->table[i].l3cc_value &
>   					      (L3_ESC(1) | L3_SCC(0x7))))
> -				return false;
> +				return 0;
>   	}
>   
> -	return true;
> +	flags = 0;
> +	if (has_mocs(i915)) {
> +		if (has_global_mocs(i915))
> +			flags |= HAS_GLOBAL_MOCS;
> +		else
> +			flags |= HAS_ENGINE_MOCS;
> +	}
> +	if (has_l3cc(i915))
> +		flags |= HAS_RENDER_L3CC;
> +
> +	return flags;
>   }
>   
>   /*
> @@ -411,18 +444,20 @@ static void init_l3cc_table(struct intel_engine_cs *engine,
>   void intel_mocs_init_engine(struct intel_engine_cs *engine)
>   {
>   	struct drm_i915_mocs_table table;
> +	unsigned int flags;
>   
>   	/* Called under a blanket forcewake */
>   	assert_forcewakes_active(engine->uncore, FORCEWAKE_ALL);
>   
> -	if (!get_mocs_settings(engine->i915, &table))
> +	flags = get_mocs_settings(engine->i915, &table);
> +	if (!flags)
>   		return;
>   
>   	/* Platforms with global MOCS do not need per-engine initialization. */
> -	if (!HAS_GLOBAL_MOCS_REGISTERS(engine->i915))
> +	if (flags & HAS_ENGINE_MOCS)
>   		init_mocs_table(engine, &table);
>   
> -	if (engine->class == RENDER_CLASS)
> +	if (flags & HAS_RENDER_L3CC && engine->class == RENDER_CLASS)
>   		init_l3cc_table(engine, &table);
>   }
>   
> @@ -431,26 +466,17 @@ static u32 global_mocs_offset(void)
>   	return i915_mmio_reg_offset(GEN12_GLOBAL_MOCS(0));
>   }
>   
> -static void init_global_mocs(struct intel_gt *gt)
> +void intel_mocs_init(struct intel_gt *gt)
>   {
>   	struct drm_i915_mocs_table table;
> +	unsigned int flags;
>   
>   	/*
>   	 * LLC and eDRAM control values are not applicable to dgfx
>   	 */
> -	if (IS_DGFX(gt->i915))
> -		return;
> -
> -	if (!get_mocs_settings(gt->i915, &table))
> -		return;
> -
> -	__init_mocs_table(gt->uncore, &table, global_mocs_offset());
> -}
> -
> -void intel_mocs_init(struct intel_gt *gt)
> -{
> -	if (HAS_GLOBAL_MOCS_REGISTERS(gt->i915))
> -		init_global_mocs(gt);
> +	flags = get_mocs_settings(gt->i915, &table);
> +	if (flags & HAS_GLOBAL_MOCS)
> +		__init_mocs_table(gt->uncore, &table, global_mocs_offset());
>   }
>   
>   #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
> index de1f83100fb6..8831ffee2061 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
> @@ -12,7 +12,8 @@
>   #include "selftests/igt_spinner.h"
>   
>   struct live_mocs {
> -	struct drm_i915_mocs_table table;
> +	struct drm_i915_mocs_table mocs;
> +	struct drm_i915_mocs_table l3cc;
>   	struct i915_vma *scratch;
>   	void *vaddr;
>   };
> @@ -70,11 +71,22 @@ static struct i915_vma *create_scratch(struct intel_gt *gt)
>   
>   static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
>   {
> +	struct drm_i915_mocs_table table;
> +	unsigned int flags;
>   	int err;
>   
> -	if (!get_mocs_settings(gt->i915, &arg->table))
> +	memset(arg, 0, sizeof(*arg));
> +
> +	flags = get_mocs_settings(gt->i915, &table);
> +	if (!flags)
>   		return -EINVAL;
>   
> +	if (flags & HAS_RENDER_L3CC)
> +		arg->l3cc = table;
> +
> +	if (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS))
> +		arg->mocs = table;
> +
>   	arg->scratch = create_scratch(gt);
>   	if (IS_ERR(arg->scratch))
>   		return PTR_ERR(arg->scratch);
> @@ -223,9 +235,9 @@ static int check_mocs_engine(struct live_mocs *arg,
>   	/* Read the mocs tables back using SRM */
>   	offset = i915_ggtt_offset(vma);
>   	if (!err)
> -		err = read_mocs_table(rq, &arg->table, &offset);
> +		err = read_mocs_table(rq, &arg->mocs, &offset);
>   	if (!err && ce->engine->class == RENDER_CLASS)
> -		err = read_l3cc_table(rq, &arg->table, &offset);
> +		err = read_l3cc_table(rq, &arg->l3cc, &offset);
>   	offset -= i915_ggtt_offset(vma);
>   	GEM_BUG_ON(offset > PAGE_SIZE);
>   
> @@ -236,9 +248,9 @@ static int check_mocs_engine(struct live_mocs *arg,
>   	/* Compare the results against the expected tables */
>   	vaddr = arg->vaddr;
>   	if (!err)
> -		err = check_mocs_table(ce->engine, &arg->table, &vaddr);
> +		err = check_mocs_table(ce->engine, &arg->mocs, &vaddr);
>   	if (!err && ce->engine->class == RENDER_CLASS)
> -		err = check_l3cc_table(ce->engine, &arg->table, &vaddr);
> +		err = check_l3cc_table(ce->engine, &arg->l3cc, &vaddr);
>   	if (err)
>   		return err;
>   
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability
  2020-02-18 21:24   ` Daniele Ceraolo Spurio
@ 2020-02-18 21:38     ` Chris Wilson
  0 siblings, 0 replies; 23+ messages in thread
From: Chris Wilson @ 2020-02-18 21:38 UTC (permalink / raw)
  To: Daniele Ceraolo Spurio, intel-gfx

Quoting Daniele Ceraolo Spurio (2020-02-18 21:24:47)
> 
> 
> On 2/18/20 8:21 AM, Chris Wilson wrote:
> > On dgfx, we only use l3cc and not mocs, but we share the table of
> > register definitions with Tigerlake (which includes the mocs). This

-share the table of register definitions
+share the table containing both register definitions
 
> Just a small correction here: the problem is not that the Tigerlake 
> definitions will be shared (which is not necessarily going to happen), 
> but that our table entry definition contains both l3cc and mocs and 
> there is currently no way to know if only one of the 2 is valid. We 
> could split the table, but IMO that'd be overkill and it'll make things 
> messier for integrated platforms that have both, so I prefer the 
> approach in this patch.
> 
> > confuses our selftest that verifies that the registers do contain the
> > values in our tables after various events (idling, reset, activity etc).
> > 
> > When constructing the table of register definitions, also include the
> > flags for which registers are valid so that information is computed
> > centrally and available to all callers.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Brian Welty <brian.welty@intel.com>
> > Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> 
> Was confused for a moment by the uninitialized table passed to 
> read_mocs_table(), but we're ok because we memset it to 0 and therefore 

I did put the memset there to try and reassure :)

> table->n_entries is zero. Maybe worth adding a check to avoid calling 
> ring_begin() and ring_advance() in read_regs() that scenario?

ring_advance is just a debug aid; ring_begin becomes a no-op, after a
few twists and turns. (At worst it is an intel_ring_wrap.)

I liked the simple look of not having to special case 0.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread
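
The no-op argument is easy to convince yourself of with a small standalone
model: with a zeroed table the register loop runs zero times, so nothing is
emitted whether or not the explicit early-out Daniele suggests is added. The
names below are stand-ins for the selftest helpers, and the model does not
cover the intel_ring_begin()/intel_ring_advance() side, which Chris notes also
collapses to a no-op for zero dwords.

#include <stdio.h>
#include <string.h>

struct model_table {
	unsigned int n_entries;
};

/* stand-in for read_regs(): emit one "SRM" per table entry */
static int model_read_regs(const struct model_table *t)
{
	unsigned int i;

	if (!t->n_entries)	/* the optional early-out discussed above */
		return 0;

	for (i = 0; i < t->n_entries; i++)
		printf("emit SRM for entry %u\n", i);

	return 0;
}

int main(void)
{
	struct model_table mocs;

	memset(&mocs, 0, sizeof(mocs));	/* live_mocs_init() memsets arg */
	return model_read_regs(&mocs);	/* emits nothing either way */
}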

* [Intel-gfx] ✗ Fi.CI.BAT: failure for series starting with [01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit()
  2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
                   ` (12 preceding siblings ...)
  2020-02-18 21:16 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] " Patchwork
@ 2020-02-18 21:53 ` Patchwork
  13 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2020-02-18 21:53 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit()
URL   : https://patchwork.freedesktop.org/series/73583/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_7962 -> Patchwork_16600
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_16600 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_16600, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_16600:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live_execlists:
    - fi-skl-6600u:       NOTRUN -> [INCOMPLETE][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-skl-6600u/igt@i915_selftest@live_execlists.html
    - fi-skl-guc:         NOTRUN -> [INCOMPLETE][2]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-skl-guc/igt@i915_selftest@live_execlists.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_selftest@live_gt_lrc:
    - {fi-tgl-u}:         [INCOMPLETE][3] ([i915#1233]) -> [DMESG-FAIL][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-tgl-u/igt@i915_selftest@live_gt_lrc.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-tgl-u/igt@i915_selftest@live_gt_lrc.html
    - {fi-tgl-dsi}:       [INCOMPLETE][5] ([i915#1233]) -> [DMESG-FAIL][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-tgl-dsi/igt@i915_selftest@live_gt_lrc.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-tgl-dsi/igt@i915_selftest@live_gt_lrc.html

  * igt@runner@aborted:
    - {fi-tgl-dsi}:       [FAIL][7] ([i915#584]) -> [FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-tgl-dsi/igt@runner@aborted.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-tgl-dsi/igt@runner@aborted.html
    - {fi-tgl-u}:         [FAIL][9] ([i915#584]) -> [FAIL][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-tgl-u/igt@runner@aborted.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-tgl-u/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_16600 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_close_race@basic-threads:
    - fi-byt-n2820:       [PASS][11] -> [INCOMPLETE][12] ([i915#45])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-byt-n2820/igt@gem_close_race@basic-threads.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-byt-n2820/igt@gem_close_race@basic-threads.html

  * igt@i915_selftest@live_execlists:
    - fi-glk-dsi:         [PASS][13] -> [INCOMPLETE][14] ([i915#58] / [k.org#198133])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-glk-dsi/igt@i915_selftest@live_execlists.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-glk-dsi/igt@i915_selftest@live_execlists.html
    - fi-icl-u3:          [PASS][15] -> [INCOMPLETE][16] ([i915#140])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-icl-u3/igt@i915_selftest@live_execlists.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-icl-u3/igt@i915_selftest@live_execlists.html

  * igt@i915_selftest@live_gem_contexts:
    - fi-cml-s:           [PASS][17] -> [DMESG-FAIL][18] ([i915#877])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-cml-s/igt@i915_selftest@live_gem_contexts.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-cml-s/igt@i915_selftest@live_gem_contexts.html

  * igt@i915_selftest@live_gtt:
    - fi-icl-u2:          [PASS][19] -> [TIMEOUT][20] ([fdo#112271])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-icl-u2/igt@i915_selftest@live_gtt.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-icl-u2/igt@i915_selftest@live_gtt.html
    - fi-bxt-dsi:         [PASS][21] -> [TIMEOUT][22] ([fdo#112271])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-bxt-dsi/igt@i915_selftest@live_gtt.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-bxt-dsi/igt@i915_selftest@live_gtt.html

  
#### Possible fixes ####

  * igt@gem_exec_parallel@basic:
    - {fi-ehl-1}:         [INCOMPLETE][23] ([i915#937]) -> [PASS][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-ehl-1/igt@gem_exec_parallel@basic.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-ehl-1/igt@gem_exec_parallel@basic.html

  * igt@i915_selftest@live_gtt:
    - fi-kbl-7500u:       [TIMEOUT][25] ([fdo#112271]) -> [PASS][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-kbl-7500u/igt@i915_selftest@live_gtt.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-kbl-7500u/igt@i915_selftest@live_gtt.html

  
#### Warnings ####

  * igt@gem_close_race@basic-threads:
    - fi-hsw-peppy:       [TIMEOUT][27] ([fdo#112271] / [i915#1084]) -> [INCOMPLETE][28] ([i915#694] / [i915#816])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7962/fi-hsw-peppy/igt@gem_close_race@basic-threads.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/fi-hsw-peppy/igt@gem_close_race@basic-threads.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#112271]: https://bugs.freedesktop.org/show_bug.cgi?id=112271
  [i915#1084]: https://gitlab.freedesktop.org/drm/intel/issues/1084
  [i915#1233]: https://gitlab.freedesktop.org/drm/intel/issues/1233
  [i915#140]: https://gitlab.freedesktop.org/drm/intel/issues/140
  [i915#45]: https://gitlab.freedesktop.org/drm/intel/issues/45
  [i915#58]: https://gitlab.freedesktop.org/drm/intel/issues/58
  [i915#584]: https://gitlab.freedesktop.org/drm/intel/issues/584
  [i915#694]: https://gitlab.freedesktop.org/drm/intel/issues/694
  [i915#816]: https://gitlab.freedesktop.org/drm/intel/issues/816
  [i915#877]: https://gitlab.freedesktop.org/drm/intel/issues/877
  [i915#937]: https://gitlab.freedesktop.org/drm/intel/issues/937
  [k.org#198133]: https://bugzilla.kernel.org/show_bug.cgi?id=198133


Participating hosts (46 -> 43)
------------------------------

  Additional (5): fi-skl-guc fi-ilk-650 fi-gdg-551 fi-skl-6600u fi-snb-2600 
  Missing    (8): fi-ilk-m540 fi-byt-squawks fi-bsw-cyan fi-kbl-guc fi-ctg-p8600 fi-kbl-8809g fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_7962 -> Patchwork_16600

  CI-20190529: 20190529
  CI_DRM_7962: ee8b2f14a46e30de565d49ed4ac743c2e9d0027d @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5448: 116020b1f83c1b3994c76882df7f77b6731d78ba @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_16600: 04c7e9bbd9a224c118e35ed4417c7407fee94063 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

04c7e9bbd9a2 drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore
cf0279a9dfdd drm/i915/gt: Declare when we enabled timeslicing
8fd6b67e363b drm/i915/gt: Refactor l3cc/mocs availability
2c32b549cc8d drm/i915: Read rawclk_freq earlier
1d06fdaaf50f drm/i915/selftest: Analyse timestamp behaviour across context switches
cf261859e482 drm/i915/gem: Consolidate ctx->engines[] release
541f72809fce drm/i915/gem: Check that the context wasn't closed during setup
9ba40b15b759 drm/i915/gt: Prevent allocation on a banned context
a2db240005d4 drm/i915/execlists: Check the sentinel is alone in the ELSP
181edb75b6c9 drm/i915/gt: Show the cumulative context runtime in engine debug

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16600/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2020-02-18 21:53 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-18 16:21 [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 02/12] drm/i915/gt: Show the cumulative context runtime in engine debug Chris Wilson
2020-02-18 20:45   ` Matthew Auld
2020-02-18 16:21 ` [Intel-gfx] [PATCH 03/12] drm/i915/execlists: Check the sentinel is alone in the ELSP Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 04/12] drm/i915/gt: Fix up missing error propagation for heartbeat pulses Chris Wilson
2020-02-18 20:18   ` Matthew Auld
2020-02-18 16:21 ` [Intel-gfx] [PATCH 05/12] drm/i915/gt: Prevent allocation on a banned context Chris Wilson
2020-02-18 20:22   ` Matthew Auld
2020-02-18 16:21 ` [Intel-gfx] [PATCH 06/12] drm/i915/gem: Check that the context wasn't closed during setup Chris Wilson
2020-02-18 20:38   ` Matthew Auld
2020-02-18 16:21 ` [Intel-gfx] [PATCH 07/12] drm/i915/gem: Consolidate ctx->engines[] release Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 08/12] drm/i915/selftest: Analyse timestamp behaviour across context switches Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 09/12] drm/i915: Read rawclk_freq earlier Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Refactor l3cc/mocs availability Chris Wilson
2020-02-18 18:26   ` Brian Welty
2020-02-18 18:33     ` Chris Wilson
2020-02-18 21:24   ` Daniele Ceraolo Spurio
2020-02-18 21:38     ` Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 11/12] drm/i915/gt: Declare when we enabled timeslicing Chris Wilson
2020-02-18 16:21 ` [Intel-gfx] [PATCH 12/12] drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore Chris Wilson
2020-02-18 20:08 ` [Intel-gfx] [PATCH 01/12] drm/i915/selftests: Check for any sign of request starting in wait_for_submit() Matthew Auld
2020-02-18 21:16 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] " Patchwork
2020-02-18 21:53 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).