* [PATCH 01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

The general concept was that intel_timeline.active_count was locked by
the intel_timeline.mutex. The exception was for power management, where
the engine->kernel_context->timeline could be manipulated under the
global wakeref.mutex.

This was quite solid, as we always manipulated the timeline only while
we held an engine wakeref.

And then we started retiring requests outside of struct_mutex, only
using the timelines.active_list and the timeline->mutex. There we
started manipulating intel_timeline.active_count outside of an engine
wakeref, and so introduced a race between __engine_park() and
intel_gt_retire_requests(), a race that could result in the
engine->kernel_context not being added to the active timelines and so
losing requests, which caused us to keep the system permanently powered
up [and unloadable].

The race would be easy to close if we could take the engine wakeref for
the timeline before we retire -- except timelines are not bound to any
engine and so we would need to keep all active engines awake. The
alternative is to guard intel_timeline_enter/intel_timeline_exit for use
outside of the timeline->mutex.
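
As a condensed sketch of the resulting pattern (names taken from the diff
below; the simplified helper names here are illustrative only), list
membership is tied to the 0 <-> 1 transitions of the atomic active_count,
and only those transitions take the global timelines->lock:

static void timeline_enter(struct intel_timeline *tl)
{
	struct intel_gt_timelines *timelines = &tl->gt->timelines;
	unsigned long flags;

	/* Fast path: already on the active_list, just take another count. */
	if (atomic_add_unless(&tl->active_count, 1, 0))
		return;

	/* Slow path: serialise the 0 -> 1 transition with retirement. */
	spin_lock_irqsave(&timelines->lock, flags);
	if (!atomic_fetch_inc(&tl->active_count))
		list_add(&tl->link, &timelines->active_list);
	spin_unlock_irqrestore(&timelines->lock, flags);
}

static void timeline_exit(struct intel_timeline *tl)
{
	struct intel_gt_timelines *timelines = &tl->gt->timelines;
	unsigned long flags;

	/* Fast path: not the final count, no list manipulation required. */
	if (atomic_add_unless(&tl->active_count, -1, 1))
		return;

	/* Slow path: serialise the 1 -> 0 transition with retirement. */
	spin_lock_irqsave(&timelines->lock, flags);
	if (atomic_dec_and_test(&tl->active_count))
		list_del(&tl->link);
	spin_unlock_irqrestore(&timelines->lock, flags);
}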

Fixes: e5dadff4b093 ("drm/i915: Protect request retirement with timeline->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gt_requests.c   |  8 ++---
 drivers/gpu/drm/i915/gt/intel_timeline.c      | 34 +++++++++++++++----
 .../gpu/drm/i915/gt/intel_timeline_types.h    |  2 +-
 3 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index a79e6efb31a2..7559d6373f49 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -49,8 +49,8 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 			continue;
 
 		intel_timeline_get(tl);
-		GEM_BUG_ON(!tl->active_count);
-		tl->active_count++; /* pin the list element */
+		GEM_BUG_ON(!atomic_read(&tl->active_count));
+		atomic_inc(&tl->active_count); /* pin the list element */
 		spin_unlock_irqrestore(&timelines->lock, flags);
 
 		if (timeout > 0) {
@@ -71,14 +71,14 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 
 		/* Resume iteration after dropping lock */
 		list_safe_reset_next(tl, tn, link);
-		if (!--tl->active_count)
+		if (atomic_dec_and_test(&tl->active_count))
 			list_del(&tl->link);
 
 		mutex_unlock(&tl->mutex);
 
 		/* Defer the final release to after the spinlock */
 		if (refcount_dec_and_test(&tl->kref.refcount)) {
-			GEM_BUG_ON(tl->active_count);
+			GEM_BUG_ON(atomic_read(&tl->active_count));
 			list_add(&tl->link, &free);
 		}
 	}
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 16a9e88d93de..4f914f0d5eab 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -334,15 +334,33 @@ void intel_timeline_enter(struct intel_timeline *tl)
 	struct intel_gt_timelines *timelines = &tl->gt->timelines;
 	unsigned long flags;
 
+	/*
+	 * Pretend we are serialised by the timeline->mutex.
+	 *
+	 * While generally true, there are a few exceptions to the rule
+	 * for the engine->kernel_context being used to manage power
+	 * transitions. As the engine_park may be called from under any
+	 * timeline, it uses the power mutex as a global serialisation
+	 * lock to prevent any other request entering its timeline.
+	 *
+	 * The rule is generally tl->mutex, otherwise engine->wakeref.mutex.
+	 *
+	 * However, intel_gt_retire_requests() does not know which engine
+	 * it is retiring along and so cannot partake in the engine-pm
+	 * barrier, and there we use the tl->active_count as a means to
+	 * pin the timeline in the active_list while the locks are dropped.
+	 * Ergo, as that is outside of the engine-pm barrier, we need to
+	 * use atomic to manipulate tl->active_count.
+	 */
 	lockdep_assert_held(&tl->mutex);
-
 	GEM_BUG_ON(!atomic_read(&tl->pin_count));
-	if (tl->active_count++)
+
+	if (atomic_add_unless(&tl->active_count, 1, 0))
 		return;
-	GEM_BUG_ON(!tl->active_count); /* overflow? */
 
 	spin_lock_irqsave(&timelines->lock, flags);
-	list_add(&tl->link, &timelines->active_list);
+	if (!atomic_fetch_inc(&tl->active_count))
+		list_add(&tl->link, &timelines->active_list);
 	spin_unlock_irqrestore(&timelines->lock, flags);
 }
 
@@ -351,14 +369,16 @@ void intel_timeline_exit(struct intel_timeline *tl)
 	struct intel_gt_timelines *timelines = &tl->gt->timelines;
 	unsigned long flags;
 
+	/* See intel_timeline_enter() */
 	lockdep_assert_held(&tl->mutex);
 
-	GEM_BUG_ON(!tl->active_count);
-	if (--tl->active_count)
+	GEM_BUG_ON(!atomic_read(&tl->active_count));
+	if (atomic_add_unless(&tl->active_count, -1, 1))
 		return;
 
 	spin_lock_irqsave(&timelines->lock, flags);
-	list_del(&tl->link);
+	if (atomic_dec_and_test(&tl->active_count))
+		list_del(&tl->link);
 	spin_unlock_irqrestore(&timelines->lock, flags);
 
 	/*
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index 98d9ee166379..5244615ed1cb 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -42,7 +42,7 @@ struct intel_timeline {
 	 * from the intel_context caller plus internal atomicity.
 	 */
 	atomic_t pin_count;
-	unsigned int active_count;
+	atomic_t active_count;
 
 	const u32 *hwsp_seqno;
 	struct i915_vma *hwsp_ggtt;
-- 
2.24.0

* [PATCH 02/14] drm/i915: Mark up the calling context for intel_wakeref_put()
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Previously, we assumed we could use mutex_trylock() within an atomic
context, falling back to a worker if contended. However, such trickery
is illegal inside interrupt context, and so we need to always use a
worker under such circumstances. As we are normally in process context,
we can typically use a plain mutex, and only defer to a worker when we
know we are called from an interrupt path.
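
The resulting calling convention, roughly (a sketch mirroring the new
inlines in the diff below): process-context callers keep the plain put,
while callers that may be in atomic or interrupt context must use the
_async variant, which punts the final release to the worker:

static inline void intel_wakeref_put(struct intel_wakeref *wf)
{
	might_sleep();			/* process context only */
	__intel_wakeref_put(wf, 0);	/* may take wf->mutex directly */
}

static inline void intel_wakeref_put_async(struct intel_wakeref *wf)
{
	/* Safe from atomic/irq context: the final release goes via wf->work. */
	__intel_wakeref_put(wf, INTEL_WAKEREF_PUT_ASYNC);
}

The PMU sampling callbacks, execlists schedule-out and engine parking are
switched over to the _async form accordingly.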

Fixes: 51fbd8de87dc ("drm/i915/pmu: Atomically acquire the gt_pm wakeref")
References: a0855d24fc22d ("locking/mutex: Complain upon mutex API misuse in IRQ contexts")
References: https://bugs.freedesktop.org/show_bug.cgi?id=111626
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_engine_pm.c    |  3 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.h    | 10 +++++
 drivers/gpu/drm/i915/gt/intel_gt_pm.c        |  1 -
 drivers/gpu/drm/i915/gt/intel_gt_pm.h        |  5 +++
 drivers/gpu/drm/i915/gt/intel_lrc.c          |  2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c        |  2 +-
 drivers/gpu/drm/i915/gt/selftest_engine_pm.c |  7 ++--
 drivers/gpu/drm/i915/i915_active.c           |  5 ++-
 drivers/gpu/drm/i915/i915_pmu.c              |  6 +--
 drivers/gpu/drm/i915/intel_wakeref.c         |  8 ++--
 drivers/gpu/drm/i915/intel_wakeref.h         | 44 ++++++++++++++++----
 11 files changed, 68 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 3c0f490ff2c7..d1e9b7fec3a6 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -177,7 +177,8 @@ static int __engine_park(struct intel_wakeref *wf)
 
 	engine->execlists.no_priolist = false;
 
-	intel_gt_pm_put(engine->gt);
+	/* While we call i915_vma_parked, we have to break the lock cycle */
+	intel_gt_pm_put_async(engine->gt);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
index 739c50fefcef..467475fce9c6 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
@@ -31,6 +31,16 @@ static inline void intel_engine_pm_put(struct intel_engine_cs *engine)
 	intel_wakeref_put(&engine->wakeref);
 }
 
+static inline void intel_engine_pm_put_async(struct intel_engine_cs *engine)
+{
+	intel_wakeref_put_async(&engine->wakeref);
+}
+
+static inline void intel_engine_pm_unlock_wait(struct intel_engine_cs *engine)
+{
+	intel_wakeref_unlock_wait(&engine->wakeref);
+}
+
 void intel_engine_init__pm(struct intel_engine_cs *engine);
 
 #endif /* INTEL_ENGINE_PM_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index e61f752a3cd5..7a9044ac4b75 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -105,7 +105,6 @@ static int __gt_park(struct intel_wakeref *wf)
 static const struct intel_wakeref_ops wf_ops = {
 	.get = __gt_unpark,
 	.put = __gt_park,
-	.flags = INTEL_WAKEREF_PUT_ASYNC,
 };
 
 void intel_gt_pm_init_early(struct intel_gt *gt)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
index b3e17399be9b..990efc27a4e4 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
@@ -32,6 +32,11 @@ static inline void intel_gt_pm_put(struct intel_gt *gt)
 	intel_wakeref_put(&gt->wakeref);
 }
 
+static inline void intel_gt_pm_put_async(struct intel_gt *gt)
+{
+	intel_wakeref_put_async(&gt->wakeref);
+}
+
 static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt)
 {
 	return intel_wakeref_wait_for_idle(&gt->wakeref);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 33ce258d484f..b65bc06855b0 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1172,7 +1172,7 @@ __execlists_schedule_out(struct i915_request *rq,
 
 	intel_engine_context_out(engine);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
-	intel_gt_pm_put(engine->gt);
+	intel_gt_pm_put_async(engine->gt);
 
 	/*
 	 * If this is part of a virtual engine, its next request may
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index b7007cd78c6f..0388f9375366 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -1125,7 +1125,7 @@ int intel_engine_reset(struct intel_engine_cs *engine, const char *msg)
 out:
 	intel_engine_cancel_stop_cs(engine);
 	reset_finish_engine(engine);
-	intel_engine_pm_put(engine);
+	intel_engine_pm_put_async(engine);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
index 20b9c83f43ad..851a6c4f65c6 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
@@ -51,11 +51,12 @@ static int live_engine_pm(void *arg)
 				pr_err("intel_engine_pm_get_if_awake(%s) failed under %s\n",
 				       engine->name, p->name);
 			else
-				intel_engine_pm_put(engine);
-			intel_engine_pm_put(engine);
+				intel_engine_pm_put_async(engine);
+			intel_engine_pm_put_async(engine);
 			p->critical_section_end();
 
-			/* engine wakeref is sync (instant) */
+			intel_engine_pm_unlock_wait(engine);
+
 			if (intel_engine_pm_is_awake(engine)) {
 				pr_err("%s is still awake after flushing pm\n",
 				       engine->name);
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 5448f37c8102..dca15ace88f6 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -672,12 +672,13 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 	 * populated by i915_request_add_active_barriers() to point to the
 	 * request that will eventually release them.
 	 */
-	spin_lock_irqsave_nested(&ref->tree_lock, flags, SINGLE_DEPTH_NESTING);
 	llist_for_each_safe(pos, next, take_preallocated_barriers(ref)) {
 		struct active_node *node = barrier_from_ll(pos);
 		struct intel_engine_cs *engine = barrier_to_engine(node);
 		struct rb_node **p, *parent;
 
+		spin_lock_irqsave_nested(&ref->tree_lock, flags,
+					 SINGLE_DEPTH_NESTING);
 		parent = NULL;
 		p = &ref->tree.rb_node;
 		while (*p) {
@@ -693,12 +694,12 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 		}
 		rb_link_node(&node->node, parent, p);
 		rb_insert_color(&node->node, &ref->tree);
+		spin_unlock_irqrestore(&ref->tree_lock, flags);
 
 		GEM_BUG_ON(!intel_engine_pm_is_awake(engine));
 		llist_add(barrier_to_ll(node), &engine->barrier_tasks);
 		intel_engine_pm_put(engine);
 	}
-	spin_unlock_irqrestore(&ref->tree_lock, flags);
 }
 
 void i915_request_add_active_barriers(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 9b02be0ad4e6..95e824a78d4d 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -190,7 +190,7 @@ static u64 get_rc6(struct intel_gt *gt)
 	val = 0;
 	if (intel_gt_pm_get_if_awake(gt)) {
 		val = __get_rc6(gt);
-		intel_gt_pm_put(gt);
+		intel_gt_pm_put_async(gt);
 	}
 
 	spin_lock_irqsave(&pmu->lock, flags);
@@ -360,7 +360,7 @@ engines_sample(struct intel_gt *gt, unsigned int period_ns)
 skip:
 		if (unlikely(mmio_lock))
 			spin_unlock_irqrestore(mmio_lock, flags);
-		intel_engine_pm_put(engine);
+		intel_engine_pm_put_async(engine);
 	}
 }
 
@@ -398,7 +398,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
 			if (stat)
 				val = intel_get_cagf(rps, stat);
 
-			intel_gt_pm_put(gt);
+			intel_gt_pm_put_async(gt);
 		}
 
 		add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_ACT],
diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c
index 868cc78048d0..9b29176cc1ca 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.c
+++ b/drivers/gpu/drm/i915/intel_wakeref.c
@@ -54,7 +54,8 @@ int __intel_wakeref_get_first(struct intel_wakeref *wf)
 
 static void ____intel_wakeref_put_last(struct intel_wakeref *wf)
 {
-	if (!atomic_dec_and_test(&wf->count))
+	INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
+	if (unlikely(!atomic_dec_and_test(&wf->count)))
 		goto unlock;
 
 	/* ops->put() must reschedule its own release on error/deferral */
@@ -67,13 +68,12 @@ static void ____intel_wakeref_put_last(struct intel_wakeref *wf)
 	mutex_unlock(&wf->mutex);
 }
 
-void __intel_wakeref_put_last(struct intel_wakeref *wf)
+void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags)
 {
 	INTEL_WAKEREF_BUG_ON(work_pending(&wf->work));
 
 	/* Assume we are not in process context and so cannot sleep. */
-	if (wf->ops->flags & INTEL_WAKEREF_PUT_ASYNC ||
-	    !mutex_trylock(&wf->mutex)) {
+	if (flags & INTEL_WAKEREF_PUT_ASYNC || !mutex_trylock(&wf->mutex)) {
 		schedule_work(&wf->work);
 		return;
 	}
diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index 5f0c972a80fb..688b9b536a69 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -9,6 +9,7 @@
 
 #include <linux/atomic.h>
 #include <linux/bits.h>
+#include <linux/lockdep.h>
 #include <linux/mutex.h>
 #include <linux/refcount.h>
 #include <linux/stackdepot.h>
@@ -29,9 +30,6 @@ typedef depot_stack_handle_t intel_wakeref_t;
 struct intel_wakeref_ops {
 	int (*get)(struct intel_wakeref *wf);
 	int (*put)(struct intel_wakeref *wf);
-
-	unsigned long flags;
-#define INTEL_WAKEREF_PUT_ASYNC BIT(0)
 };
 
 struct intel_wakeref {
@@ -57,7 +55,7 @@ void __intel_wakeref_init(struct intel_wakeref *wf,
 } while (0)
 
 int __intel_wakeref_get_first(struct intel_wakeref *wf);
-void __intel_wakeref_put_last(struct intel_wakeref *wf);
+void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags);
 
 /**
  * intel_wakeref_get: Acquire the wakeref
@@ -100,10 +98,9 @@ intel_wakeref_get_if_active(struct intel_wakeref *wf)
 }
 
 /**
- * intel_wakeref_put: Release the wakeref
- * @i915: the drm_i915_private device
+ * intel_wakeref_put_flags: Release the wakeref
  * @wf: the wakeref
- * @fn: callback for releasing the wakeref, called only on final release.
+ * @flags: control flags
  *
  * Release our hold on the wakeref. When there are no more users,
  * the runtime pm wakeref will be released after the @fn callback is called
@@ -116,11 +113,25 @@ intel_wakeref_get_if_active(struct intel_wakeref *wf)
  * code otherwise.
  */
 static inline void
-intel_wakeref_put(struct intel_wakeref *wf)
+__intel_wakeref_put(struct intel_wakeref *wf, unsigned long flags)
+#define INTEL_WAKEREF_PUT_ASYNC BIT(0)
 {
 	INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
 	if (unlikely(!atomic_add_unless(&wf->count, -1, 1)))
-		__intel_wakeref_put_last(wf);
+		__intel_wakeref_put_last(wf, flags);
+}
+
+static inline void
+intel_wakeref_put(struct intel_wakeref *wf)
+{
+	might_sleep();
+	__intel_wakeref_put(wf, 0);
+}
+
+static inline void
+intel_wakeref_put_async(struct intel_wakeref *wf)
+{
+	__intel_wakeref_put(wf, INTEL_WAKEREF_PUT_ASYNC);
 }
 
 /**
@@ -151,6 +162,20 @@ intel_wakeref_unlock(struct intel_wakeref *wf)
 	mutex_unlock(&wf->mutex);
 }
 
+/**
+ * intel_wakeref_unlock_wait: Wait until the active callback is complete
+ * @wf: the wakeref
+ *
+ * Waits until the active callback (under @wf->mutex) is complete.
+ */
+static inline void
+intel_wakeref_unlock_wait(struct intel_wakeref *wf)
+{
+	mutex_lock(&wf->mutex);
+	mutex_unlock(&wf->mutex);
+	flush_work(&wf->work);
+}
+
 /**
  * intel_wakeref_is_active: Query whether the wakeref is currently held
  * @wf: the wakeref
@@ -170,6 +195,7 @@ intel_wakeref_is_active(const struct intel_wakeref *wf)
 static inline void
 __intel_wakeref_defer_park(struct intel_wakeref *wf)
 {
+	lockdep_assert_held(&wf->mutex);
 	INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count));
 	atomic_set_release(&wf->count, 1);
 }
-- 
2.24.0

* [PATCH 03/14] drm/i915/gem: Merge GGTT vma flush into a single loop
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

We only need the one loop to find the dirty vmas, flushing them and their
chipset as we go.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index db103d3c8760..63bd3ff84f5e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -279,18 +279,12 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
 
 	switch (obj->write_domain) {
 	case I915_GEM_DOMAIN_GTT:
-		for_each_ggtt_vma(vma, obj)
-			intel_gt_flush_ggtt_writes(vma->vm->gt);
-
-		intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
-
 		for_each_ggtt_vma(vma, obj) {
-			if (vma->iomap)
-				continue;
-
-			i915_vma_unset_ggtt_write(vma);
+			if (i915_vma_unset_ggtt_write(vma))
+				intel_gt_flush_ggtt_writes(vma->vm->gt);
 		}
 
+		intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 		break;
 
 	case I915_GEM_DOMAIN_WC:
-- 
2.24.0

* [PATCH 04/14] drm/i915/gt: Only wait for register chipset flush if active
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Only serialise with the chipset using an mmio if the chipset is
currently active. We expect that any writes into the chipset range will
simply be forgotten until it wakes up.
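
Sketch of the change in shape (the body of the flush is elided here):
instead of forcing the device awake just to post the serialising read, we
only perform the mmio dance if a runtime-pm wakeref is already held:

	with_intel_runtime_pm_if_in_use(uncore->rpm, wakeref) {
		unsigned long flags;

		spin_lock_irqsave(&uncore->lock, flags);
		/* ... posting read to serialise with the chipset ... */
		spin_unlock_irqrestore(&uncore->lock, flags);
	}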

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_gt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index b5a9b87e4ec9..c4fd8d65b8a3 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -304,7 +304,7 @@ void intel_gt_flush_ggtt_writes(struct intel_gt *gt)
 
 	intel_gt_chipset_flush(gt);
 
-	with_intel_runtime_pm(uncore->rpm, wakeref) {
+	with_intel_runtime_pm_if_in_use(uncore->rpm, wakeref) {
 		unsigned long flags;
 
 		spin_lock_irqsave(&uncore->lock, flags);
-- 
2.24.0

* [PATCH 05/14] drm/i915: Protect the obj->vma.list during iteration
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Take the obj->vma.lock to prevent modifications to the list as we
iterate, and so avoid the dreaded NULL pointer dereference.

<1>[  347.820823] BUG: kernel NULL pointer dereference, address: 0000000000000150
<1>[  347.820856] #PF: supervisor read access in kernel mode
<1>[  347.820874] #PF: error_code(0x0000) - not-present page
<6>[  347.820892] PGD 0 P4D 0
<4>[  347.820908] Oops: 0000 [#1] PREEMPT SMP NOPTI
<4>[  347.820926] CPU: 3 PID: 1303 Comm: gem_persistent_ Tainted: G     U            5.4.0-rc7-CI-CI_DRM_7352+ #1
<4>[  347.820956] Hardware name:  /NUC6CAYB, BIOS AYAPLCEL.86A.0049.2018.0508.1356 05/08/2018
<4>[  347.821132] RIP: 0010:i915_gem_object_flush_write_domain+0xd9/0x1d0 [i915]
<4>[  347.821157] Code: 0f 84 e9 00 00 00 48 8b 80 e0 fd ff ff f6 c4 40 75 11 e9 ed 00 00 00 48 8b 80 e0 fd ff ff f6 c4 40 74 26 48 8b 83 b0 00 00 00 <48> 8b b8 50 01 00 00 e8 fb 20 fb ff 48 8b 83 30 03 00 00 49 39 c4
<4>[  347.821210] RSP: 0018:ffffc90000a1f8f8 EFLAGS: 00010202
<4>[  347.821229] RAX: 0000000000000000 RBX: ffffc900008479a0 RCX: 0000000000000018
<4>[  347.821252] RDX: 0000000000000000 RSI: 000000000000000d RDI: ffff888275a090b0
<4>[  347.821274] RBP: ffff8882673c8040 R08: ffff88825991b8d0 R09: 0000000000000000
<4>[  347.821297] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8882673c8280
<4>[  347.821319] R13: ffff8882673c8368 R14: 0000000000000000 R15: ffff888266a54000
<4>[  347.821343] FS:  00007f75865f4240(0000) GS:ffff888277b80000(0000) knlGS:0000000000000000
<4>[  347.821368] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[  347.821389] CR2: 0000000000000150 CR3: 000000025aee0000 CR4: 00000000003406e0
<4>[  347.821411] Call Trace:
<4>[  347.821555]  i915_gem_object_prepare_read+0xea/0x2a0 [i915]
<4>[  347.821706]  intel_engine_cmd_parser+0x5ce/0xe90 [i915]
<4>[  347.821834]  ? __i915_sw_fence_complete+0x1a0/0x250 [i915]
<4>[  347.821990]  i915_gem_do_execbuffer+0xb4c/0x2550 [i915]
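
The fix, condensed from the diff below, is to hold obj->vma.lock across
the GGTT vma walk so the list cannot be modified beneath us:

	spin_lock(&obj->vma.lock);
	for_each_ggtt_vma(vma, obj) {
		if (i915_vma_unset_ggtt_write(vma))
			intel_gt_flush_ggtt_writes(vma->vm->gt);
	}
	spin_unlock(&obj->vma.lock);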

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 63bd3ff84f5e..458945e1823e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -279,10 +279,12 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
 
 	switch (obj->write_domain) {
 	case I915_GEM_DOMAIN_GTT:
+		spin_lock(&obj->vma.lock);
 		for_each_ggtt_vma(vma, obj) {
 			if (i915_vma_unset_ggtt_write(vma))
 				intel_gt_flush_ggtt_writes(vma->vm->gt);
 		}
+		spin_unlock(&obj->vma.lock);
 
 		intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 		break;
-- 
2.24.0

* [PATCH 06/14] drm/i915: Wait until the intel_wakeref idle callback is complete
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

When waiting for idle, serialise with any ongoing callback so that it
will have completed before completing the wait.
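
This builds upon the intel_wakeref_unlock_wait() helper added earlier in
the series; as a reminder of its shape, it is a lock/unlock barrier
against any ops->put() still running under wf->mutex, plus a flush of the
deferred worker:

static inline void
intel_wakeref_unlock_wait(struct intel_wakeref *wf)
{
	/* Wait for any in-flight ops->put() running under wf->mutex... */
	mutex_lock(&wf->mutex);
	mutex_unlock(&wf->mutex);

	/* ...and for any final release that was punted to the worker. */
	flush_work(&wf->work);
}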

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_wakeref.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c
index 9b29176cc1ca..91feb53b2942 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.c
+++ b/drivers/gpu/drm/i915/intel_wakeref.c
@@ -109,8 +109,15 @@ void __intel_wakeref_init(struct intel_wakeref *wf,
 
 int intel_wakeref_wait_for_idle(struct intel_wakeref *wf)
 {
-	return wait_var_event_killable(&wf->wakeref,
-				       !intel_wakeref_is_active(wf));
+	int err;
+
+	err = wait_var_event_killable(&wf->wakeref,
+				      !intel_wakeref_is_active(wf));
+	if (err)
+		return err;
+
+	intel_wakeref_unlock_wait(wf);
+	return 0;
 }
 
 static void wakeref_auto_timeout(struct timer_list *t)
-- 
2.24.0

* [PATCH 07/14] drm/i915/gt: Declare timeline.lock to be irq-free
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Now that we never allow the intel_wakeref callbacks to be invoked from
interrupt context, we do not need the irqsafe spinlock for the timeline.
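
Concretely, every timelines->lock critical section loses the
irqsave/irqrestore dance; the before/after shape is roughly (sketch, see
the diff for the real call sites):

	/* before: had to tolerate the put callbacks firing in irq context */
	spin_lock_irqsave(&timelines->lock, flags);
	list_add(&tl->link, &timelines->active_list);
	spin_unlock_irqrestore(&timelines->lock, flags);

	/* after: the callbacks only run from process context */
	spin_lock(&timelines->lock);
	list_add(&tl->link, &timelines->active_list);
	spin_unlock(&timelines->lock);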

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_gt_requests.c |  9 ++++-----
 drivers/gpu/drm/i915/gt/intel_reset.c       |  9 ++++-----
 drivers/gpu/drm/i915/gt/intel_timeline.c    | 10 ++++------
 3 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 7559d6373f49..74356db43325 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -33,7 +33,6 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 {
 	struct intel_gt_timelines *timelines = &gt->timelines;
 	struct intel_timeline *tl, *tn;
-	unsigned long flags;
 	bool interruptible;
 	LIST_HEAD(free);
 
@@ -43,7 +42,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 
 	flush_submission(gt); /* kick the ksoftirqd tasklets */
 
-	spin_lock_irqsave(&timelines->lock, flags);
+	spin_lock(&timelines->lock);
 	list_for_each_entry_safe(tl, tn, &timelines->active_list, link) {
 		if (!mutex_trylock(&tl->mutex))
 			continue;
@@ -51,7 +50,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 		intel_timeline_get(tl);
 		GEM_BUG_ON(!atomic_read(&tl->active_count));
 		atomic_inc(&tl->active_count); /* pin the list element */
-		spin_unlock_irqrestore(&timelines->lock, flags);
+		spin_unlock(&timelines->lock);
 
 		if (timeout > 0) {
 			struct dma_fence *fence;
@@ -67,7 +66,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 
 		retire_requests(tl);
 
-		spin_lock_irqsave(&timelines->lock, flags);
+		spin_lock(&timelines->lock);
 
 		/* Resume iteration after dropping lock */
 		list_safe_reset_next(tl, tn, link);
@@ -82,7 +81,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 			list_add(&tl->link, &free);
 		}
 	}
-	spin_unlock_irqrestore(&timelines->lock, flags);
+	spin_unlock(&timelines->lock);
 
 	list_for_each_entry_safe(tl, tn, &free, link)
 		__intel_timeline_free(&tl->kref);
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 0388f9375366..36189238e13c 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -831,7 +831,6 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 {
 	struct intel_gt_timelines *timelines = &gt->timelines;
 	struct intel_timeline *tl;
-	unsigned long flags;
 	bool ok;
 
 	if (!test_bit(I915_WEDGED, &gt->reset.flags))
@@ -853,7 +852,7 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 	 *
 	 * No more can be submitted until we reset the wedged bit.
 	 */
-	spin_lock_irqsave(&timelines->lock, flags);
+	spin_lock(&timelines->lock);
 	list_for_each_entry(tl, &timelines->active_list, link) {
 		struct dma_fence *fence;
 
@@ -861,7 +860,7 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 		if (!fence)
 			continue;
 
-		spin_unlock_irqrestore(&timelines->lock, flags);
+		spin_unlock(&timelines->lock);
 
 		/*
 		 * All internal dependencies (i915_requests) will have
@@ -874,10 +873,10 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 		dma_fence_put(fence);
 
 		/* Restart iteration after droping lock */
-		spin_lock_irqsave(&timelines->lock, flags);
+		spin_lock(&timelines->lock);
 		tl = list_entry(&timelines->active_list, typeof(*tl), link);
 	}
-	spin_unlock_irqrestore(&timelines->lock, flags);
+	spin_unlock(&timelines->lock);
 
 	/* We must reset pending GPU events before restoring our submission */
 	ok = !HAS_EXECLISTS(gt->i915); /* XXX better agnosticism desired */
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 4f914f0d5eab..bd973d950064 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -332,7 +332,6 @@ int intel_timeline_pin(struct intel_timeline *tl)
 void intel_timeline_enter(struct intel_timeline *tl)
 {
 	struct intel_gt_timelines *timelines = &tl->gt->timelines;
-	unsigned long flags;
 
 	/*
 	 * Pretend we are serialised by the timeline->mutex.
@@ -358,16 +357,15 @@ void intel_timeline_enter(struct intel_timeline *tl)
 	if (atomic_add_unless(&tl->active_count, 1, 0))
 		return;
 
-	spin_lock_irqsave(&timelines->lock, flags);
+	spin_lock(&timelines->lock);
 	if (!atomic_fetch_inc(&tl->active_count))
 		list_add(&tl->link, &timelines->active_list);
-	spin_unlock_irqrestore(&timelines->lock, flags);
+	spin_unlock(&timelines->lock);
 }
 
 void intel_timeline_exit(struct intel_timeline *tl)
 {
 	struct intel_gt_timelines *timelines = &tl->gt->timelines;
-	unsigned long flags;
 
 	/* See intel_timeline_enter() */
 	lockdep_assert_held(&tl->mutex);
@@ -376,10 +374,10 @@ void intel_timeline_exit(struct intel_timeline *tl)
 	if (atomic_add_unless(&tl->active_count, -1, 1))
 		return;
 
-	spin_lock_irqsave(&timelines->lock, flags);
+	spin_lock(&timelines->lock);
 	if (atomic_dec_and_test(&tl->active_count))
 		list_del(&tl->link);
-	spin_unlock_irqrestore(&timelines->lock, flags);
+	spin_unlock(&timelines->lock);
 
 	/*
 	 * Since this timeline is idle, all bariers upon which we were waiting
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 08/14] drm/i915/gt: Move new timelines to the end of active_list
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

When adding a new active timeline, place it at the end of the list. This
allows intel_gt_retire_requests() to pick up the newcomer more
quickly and hopefully complete the retirement sooner.

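As a side note, the only behavioural difference is where the newcomer
lands: list_add() inserts at the head, list_add_tail() at the tail, so a
front-to-back walk now reaches older timelines first and the newcomer
last. A minimal ordering sketch (illustrative struct, not the driver's):

	struct item {
		int seqno;
		struct list_head link;
	};

	LIST_HEAD(active);
	struct item a, b;

	list_add_tail(&a.link, &active);	/* active: a */
	list_add_tail(&b.link, &active);	/* active: a, b -- a walk sees a, then b */

	/*
	 * With list_add() instead, b would have been queued at the head and
	 * a front-to-back walk would have seen b before a.
	 */
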
References: 7936a22dd466 ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_timeline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index bd973d950064..b190a5d9ab02 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -359,7 +359,7 @@ void intel_timeline_enter(struct intel_timeline *tl)
 
 	spin_lock(&timelines->lock);
 	if (!atomic_fetch_inc(&tl->active_count))
-		list_add(&tl->link, &timelines->active_list);
+		list_add_tail(&tl->link, &timelines->active_list);
 	spin_unlock(&timelines->lock);
 }
 
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 09/14] drm/i915/gt: Schedule next retirement worker first
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

As we may park the gt during request retirement, we may cancel the
retirement worker only to then program the delayed worker once more.

If we schedule the next delayed retirement worker first, then if we park
the gt, the work remains cancelled [until we unpark].

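In other words, the handler after this change reads as the following
annotated sketch (same code as the hunk below, comments added for
clarity):

	static void retire_work_handler(struct work_struct *work)
	{
		struct intel_gt *gt =
			container_of(work, typeof(*gt), requests.retire_work.work);

		/*
		 * Re-arm first: if the retirement below parks the gt, the
		 * park path cancels the work we just queued and it stays
		 * cancelled until the gt is unparked, instead of being
		 * re-queued behind the park's back.
		 */
		schedule_delayed_work(&gt->requests.retire_work,
				      round_jiffies_up_relative(HZ));
		intel_gt_retire_requests(gt);
	}
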
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_gt_requests.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 74356db43325..4dc3cbeb1b36 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -109,9 +109,9 @@ static void retire_work_handler(struct work_struct *work)
 	struct intel_gt *gt =
 		container_of(work, typeof(*gt), requests.retire_work.work);
 
-	intel_gt_retire_requests(gt);
 	schedule_delayed_work(&gt->requests.retire_work,
 			      round_jiffies_up_relative(HZ));
+	intel_gt_retire_requests(gt);
 }
 
 void intel_gt_init_requests(struct intel_gt *gt)
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 10/14] drm/i915/gt: Flush the requests after wedging on suspend
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Retire all requests if we resort to wedging the driver on suspend. They
will now be idle, so we might as well free them before shutting down.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_gt_pm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index 7a9044ac4b75..f6b5169d623f 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -256,6 +256,7 @@ static void wait_for_suspend(struct intel_gt *gt)
 		 * the gpu quiet.
 		 */
 		intel_gt_set_wedged(gt);
+		intel_gt_retire_requests(gt);
 	}
 
 	intel_gt_pm_wait_for_idle(gt);
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 11/14] drm/i915/selftests: Flush the active callbacks
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Before checking the current i915_active state for the asynchronous work
we submitted, flush any ongoing callback. This ensures that our sampling
is robust and does not sporadically fail due to bad timing while the work
is running on another cpu.

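The flush itself relies on the classic empty-critical-section barrier:
taking the lock that the callback runs under cannot succeed until a
callback already holding it has finished, so a lock/unlock pair with
nothing in between is enough to wait it out. A generic sketch of the
idiom (the hunk below applies it to rq->lock):

	static void flush_async_callback(spinlock_t *cb_lock)
	{
		spin_lock_irq(cb_lock);
		/* empty: we only needed to wait for any current holder */
		spin_unlock_irq(cb_lock);
	}
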
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_context.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
index 3586af636304..d1203b4c1409 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -48,20 +48,25 @@ static int context_sync(struct intel_context *ce)
 
 	mutex_lock(&tl->mutex);
 	do {
-		struct dma_fence *fence;
+		struct i915_request *rq;
 		long timeout;
 
-		fence = i915_active_fence_get(&tl->last_request);
-		if (!fence)
+		if (list_empty(&tl->requests))
 			break;
 
-		timeout = dma_fence_wait_timeout(fence, false, HZ / 10);
+		rq = list_last_entry(&tl->requests, typeof(*rq), link);
+		i915_request_get(rq);
+
+		timeout = i915_request_wait(rq, 0, HZ / 10);
 		if (timeout < 0)
 			err = timeout;
 		else
-			i915_request_retire_upto(to_request(fence));
+			i915_request_retire_upto(rq);
 
-		dma_fence_put(fence);
+		spin_lock_irq(&rq->lock);
+		spin_unlock_irq(&rq->lock);
+
+		i915_request_put(rq);
 	} while (!err);
 	mutex_unlock(&tl->mutex);
 
@@ -282,6 +287,8 @@ static int __live_active_context(struct intel_engine_cs *engine,
 		err = -EINVAL;
 	}
 
+	intel_engine_pm_unlock_wait(engine);
+
 	if (intel_engine_pm_is_awake(engine)) {
 		struct drm_printer p = drm_debug_printer(__func__);
 
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 12/14] drm/i915/selftests: Be explicit in ERR_PTR handling
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dan Carpenter

When setting up a full GGTT, we expect the next insert to fail with
-ENOSPC. Simplify the use of ERR_PTR to not confuse either the reader or
smatch.

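The simplification leans on error pointers being plain, comparable
values: ERR_PTR(-ENOSPC) always encodes to the same pointer, so a single
equality test replaces the IS_ERR()/PTR_ERR() pair. A tiny sketch with a
hypothetical helper (not the selftest itself):

	#include <linux/err.h>

	static int expect_enospc(void *ptr)
	{
		/* One comparison instead of !IS_ERR(ptr) || PTR_ERR(ptr) != -ENOSPC */
		if (ptr != ERR_PTR(-ENOSPC))
			return -EINVAL;

		return 0;
	}
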
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
References: f40a7b7558ef ("drm/i915: Initial selftests for exercising eviction")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_evict.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
index 5f133d177212..06ef88510209 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
@@ -198,8 +198,8 @@ static int igt_overcommit(void *arg)
 	quirk_add(obj, &objects);
 
 	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
-	if (!IS_ERR(vma) || PTR_ERR(vma) != -ENOSPC) {
-		pr_err("Failed to evict+insert, i915_gem_object_ggtt_pin returned err=%d\n", (int)PTR_ERR(vma));
+	if (vma != ERR_PTR(-ENOSPC)) {
+		pr_err("Failed to evict+insert, i915_gem_object_ggtt_pin returned err=%d\n", (int)PTR_ERR_OR_ZERO(vma));
 		err = -EINVAL;
 		goto cleanup;
 	}
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 13/14] drm/i915/selftests: Exercise rc6 handling
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Reading from CTX_INFO upsets rc6, requiring us to detect and prevent
possible rc6 context corruption. Poke at the bear!

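The probe itself is a command-streamer read: store GEN8_RC6_CTX_INFO into
the context's HWSP and read it back from the CPU. Compressing what
__live_rc6_ctx() below emits, with the offset arithmetic spelled out
(sketch only, the real code is in the diff):

	cmd = MI_STORE_REGISTER_MEM | MI_USE_GGTT;
	if (INTEL_GEN(rq->i915) >= 8)
		cmd++;				/* extra dword for the 64b address */

	*cs++ = cmd;
	*cs++ = i915_mmio_reg_offset(GEN8_RC6_CTX_INFO);
	*cs++ = ce->timeline->hwsp_offset + 8;	/* byte offset into the HWSP */
	*cs++ = 0;				/* upper 32 bits of the GGTT address */

	result = rq->hwsp_seqno + 2;		/* same slot viewed as a u32 index */
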
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_rc6.c           |   4 +
 drivers/gpu/drm/i915/gt/selftest_gt_pm.c      |  13 ++
 drivers/gpu/drm/i915/gt/selftest_rc6.c        | 146 ++++++++++++++++++
 drivers/gpu/drm/i915/gt/selftest_rc6.h        |  12 ++
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 5 files changed, 176 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gt/selftest_rc6.c
 create mode 100644 drivers/gpu/drm/i915/gt/selftest_rc6.h

diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
index 977a617a196d..7a0bc6dde009 100644
--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
+++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
@@ -783,3 +783,7 @@ u64 intel_rc6_residency_us(struct intel_rc6 *rc6, i915_reg_t reg)
 {
 	return DIV_ROUND_UP_ULL(intel_rc6_residency_ns(rc6, reg), 1000);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftest_rc6.c"
+#endif
diff --git a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
index d1752f15702a..1d5bf93d1258 100644
--- a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
@@ -6,6 +6,7 @@
  */
 
 #include "selftest_llc.h"
+#include "selftest_rc6.h"
 
 static int live_gt_resume(void *arg)
 {
@@ -58,3 +59,15 @@ int intel_gt_pm_live_selftests(struct drm_i915_private *i915)
 
 	return intel_gt_live_subtests(tests, &i915->gt);
 }
+
+int intel_gt_pm_late_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(live_rc6_ctx),
+	};
+
+	if (intel_gt_is_wedged(&i915->gt))
+		return 0;
+
+	return intel_gt_live_subtests(tests, &i915->gt);
+}
diff --git a/drivers/gpu/drm/i915/gt/selftest_rc6.c b/drivers/gpu/drm/i915/gt/selftest_rc6.c
new file mode 100644
index 000000000000..c8b729f7e93e
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/selftest_rc6.c
@@ -0,0 +1,146 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "intel_context.h"
+#include "intel_engine_pm.h"
+#include "intel_gt_requests.h"
+#include "intel_ring.h"
+#include "selftest_rc6.h"
+
+#include "selftests/i915_random.h"
+
+static const u32 *__live_rc6_ctx(struct intel_context *ce)
+{
+	struct i915_request *rq;
+	u32 const *result;
+	u32 cmd;
+	u32 *cs;
+
+	rq = intel_context_create_request(ce);
+	if (IS_ERR(rq))
+		return ERR_CAST(rq);
+
+	cs = intel_ring_begin(rq, 4);
+	if (IS_ERR(cs)) {
+		i915_request_add(rq);
+		return cs;
+	}
+
+	cmd = MI_STORE_REGISTER_MEM | MI_USE_GGTT;
+	if (INTEL_GEN(rq->i915) >= 8)
+		cmd++;
+
+	*cs++ = cmd;
+	*cs++ = i915_mmio_reg_offset(GEN8_RC6_CTX_INFO);
+	*cs++ = ce->timeline->hwsp_offset + 8;
+	*cs++ = 0;
+	intel_ring_advance(rq, cs);
+
+	result = rq->hwsp_seqno + 2;
+	i915_request_add(rq);
+
+	return result;
+}
+
+static struct intel_engine_cs **
+randomised_engines(struct intel_gt *gt,
+		   struct rnd_state *prng,
+		   unsigned int *count)
+{
+	struct intel_engine_cs *engine, **engines;
+	enum intel_engine_id id;
+	int n;
+
+	n = 0;
+	for_each_engine(engine, gt, id)
+		n++;
+	if (!n)
+		return NULL;
+
+	engines = kmalloc_array(n, sizeof(*engines), GFP_KERNEL);
+	if (!engines)
+		return NULL;
+
+	n = 0;
+	for_each_engine(engine, gt, id)
+		engines[n++] = engine;
+
+	i915_prandom_shuffle(engines, sizeof(*engines), n, prng);
+
+	*count = n;
+	return engines;
+}
+
+int live_rc6_ctx(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct intel_engine_cs **engines;
+	unsigned int n, count;
+	I915_RND_STATE(prng);
+	int err = 0;
+
+	/* A read of CTX_INFO upsets rc6. Poke the bear! */
+	if (INTEL_GEN(gt->i915) < 8)
+		return 0;
+
+	engines = randomised_engines(gt, &prng, &count);
+	if (!engines)
+		return 0;
+
+	for (n = 0; n < count; n++) {
+		struct intel_engine_cs *engine = engines[n];
+		int pass;
+
+		for (pass = 0; pass < 2; pass++) {
+			struct intel_context *ce;
+			unsigned int resets =
+				i915_reset_engine_count(&gt->i915->gpu_error,
+							engine);
+			const u32 *res;
+
+			/* Use a sacrifical context */
+			ce = intel_context_create(engine->kernel_context->gem_context,
+						  engine);
+			if (IS_ERR(ce)) {
+				err = PTR_ERR(ce);
+				goto out;
+			}
+
+			intel_engine_pm_get(engine);
+			res = __live_rc6_ctx(ce);
+			intel_engine_pm_put(engine);
+			intel_context_put(ce);
+			if (IS_ERR(res)) {
+				err = PTR_ERR(res);
+				goto out;
+			}
+
+			if (intel_gt_wait_for_idle(gt, HZ / 5) == -ETIME) {
+				intel_gt_set_wedged(gt);
+				err = -ETIME;
+				goto out;
+			}
+
+			intel_gt_pm_wait_for_idle(gt);
+			pr_debug("%s: CTX_INFO=%0x\n",
+				 engine->name, READ_ONCE(*res));
+
+			if (resets !=
+			    i915_reset_engine_count(&gt->i915->gpu_error,
+						    engine)) {
+				pr_err("%s: GPU reset required\n",
+				       engine->name);
+				add_taint_for_CI(TAINT_WARN);
+				err = -EIO;
+				goto out;
+			}
+		}
+	}
+
+out:
+	kfree(engines);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/gt/selftest_rc6.h b/drivers/gpu/drm/i915/gt/selftest_rc6.h
new file mode 100644
index 000000000000..230c6b4c7dc0
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/selftest_rc6.h
@@ -0,0 +1,12 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef SELFTEST_RC6_H
+#define SELFTEST_RC6_H
+
+int live_rc6_ctx(void *arg);
+
+#endif /* SELFTEST_RC6_H */
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 11b40bc58e6d..beff59ee9f6f 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -39,3 +39,4 @@ selftest(hangcheck, intel_hangcheck_live_selftests)
 selftest(execlists, intel_execlists_live_selftests)
 selftest(guc, intel_guc_live_selftest)
 selftest(perf, i915_perf_live_selftests)
+selftest(late_gt_pm, intel_gt_pm_late_selftests)
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 14/14] drm/i915/gt: Track engine round-trip times
@ 2019-11-17 21:03   ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2019-11-17 21:03 UTC (permalink / raw)
  To: intel-gfx

Knowing the round trip time of an engine is useful for tracking the
health of the system as well as providing a metric for the baseline
responsiveness of the engine. We can use the latter metric for
automatically tuning our waits in selftests and when idling so we don't
confuse a slower system with a dead one.

Upon idling the engine, we send one last pulse to switch the context
away from precious user state to the volatile kernel context. We know
the engine is idle at this point, and the pulse is non-preemptable, so
this provides us with a good measurement of the round trip time. A
secondary effect is that by installing an interrupt onto the pulse, we
can flush the engine immediately upon completion, curtailing the
background flush and entering powersaving immediately.

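The estimator is the stock <linux/average.h> EWMA helper; the declaration
in the diff below plus three calls are the whole API. A small usage
sketch (illustrative function, not driver code):

	#include <linux/average.h>
	#include <linux/ktime.h>

	/*
	 * Generates struct ewma_delay and ewma_delay_{init,add,read}():
	 * 6 bits of fractional precision, each new sample weighted 1/4.
	 */
	DECLARE_EWMA(delay, 6, 4)

	static void track_pulse(struct ewma_delay *delay, ktime_t emitted)
	{
		/* Fold the measured round trip, in microseconds, into the average. */
		ewma_delay_add(delay, ktime_us_delta(ktime_get(), emitted));
	}

ewma_delay_read() then yields the smoothed value that intel_engine_dump()
prints as the engine's Delay.
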
v2: Manage pm wakerefs to avoid too early module unload

References: 7e34f4e4aad3 ("drm/i915/gen8+: Add RC6 CTX corruption WA")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c    |  2 ++
 drivers/gpu/drm/i915/gt/intel_engine_pm.c    | 36 +++++++++++++++++++-
 drivers/gpu/drm/i915/gt/intel_engine_types.h | 11 ++++++
 drivers/gpu/drm/i915/gt/intel_gt_pm.h        |  5 +++
 drivers/gpu/drm/i915/intel_wakeref.h         | 13 +++++++
 5 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index b9613d044393..2d11db13dc89 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -334,6 +334,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
 	/* Nothing to do here, execute in order of dependencies */
 	engine->schedule = NULL;
 
+	ewma_delay_init(&engine->delay);
 	seqlock_init(&engine->stats.lock);
 
 	ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
@@ -1477,6 +1478,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 		drm_printf(m, "*** WEDGED ***\n");
 
 	drm_printf(m, "\tAwake? %d\n", atomic_read(&engine->wakeref.count));
+	drm_printf(m, "\tDelay: %luus\n", ewma_delay_read(&engine->delay));
 
 	rcu_read_lock();
 	rq = READ_ONCE(engine->heartbeat.systole);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index d1e9b7fec3a6..8ee4fd99f0a6 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -73,6 +73,24 @@ static inline void __timeline_mark_unlock(struct intel_context *ce,
 
 #endif /* !IS_ENABLED(CONFIG_LOCKDEP) */
 
+struct duration_cb {
+	struct dma_fence_cb cb;
+	ktime_t emitted;
+};
+
+static void duration_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
+{
+	struct duration_cb *dcb = container_of(cb, typeof(*dcb), cb);
+	struct intel_engine_cs *engine = to_request(fence)->engine;
+
+	ewma_delay_add(&engine->delay,
+		       ktime_us_delta(ktime_get(), dcb->emitted));
+
+	/* Kick retire for quicker powersaving (soft-rc6). */
+	mod_delayed_work(system_wq, &engine->gt->requests.retire_work, 0);
+	intel_gt_pm_put_async(engine->gt);
+}
+
 static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 {
 	struct i915_request *rq;
@@ -114,7 +132,23 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 
 	/* Install ourselves as a preemption barrier */
 	rq->sched.attr.priority = I915_PRIORITY_BARRIER;
-	__i915_request_commit(rq);
+	if (likely(!__i915_request_commit(rq))) { /* engine should be idle! */
+		struct duration_cb *dcb;
+
+		BUILD_BUG_ON(sizeof(*dcb) > sizeof(rq->submitq));
+		dcb = (struct duration_cb *)&rq->submitq;
+
+		/*
+		 * Use an interrupt for precise measurement of duration,
+		 * otherwise we rely on someone else retiring all the requests
+		 * which may delay the signaling (i.e. we will likely wait
+		 * until the background request retirement running every
+		 * second or two).
+		 */
+		__intel_gt_pm_get(rq->engine->gt);
+		dma_fence_add_callback(&rq->fence, &dcb->cb, duration_cb);
+		dcb->emitted = ktime_get();
+	}
 
 	/* Release our exclusive hold on the engine */
 	__intel_wakeref_defer_park(&engine->wakeref);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 758f0e8ec672..1de121583482 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -7,6 +7,7 @@
 #ifndef __INTEL_ENGINE_TYPES__
 #define __INTEL_ENGINE_TYPES__
 
+#include <linux/average.h>
 #include <linux/hashtable.h>
 #include <linux/irq_work.h>
 #include <linux/kref.h>
@@ -119,6 +120,9 @@ enum intel_engine_id {
 #define INVALID_ENGINE ((enum intel_engine_id)-1)
 };
 
+/* A simple estimator for the round-trip responsive time of an engine */
+DECLARE_EWMA(delay, 6, 4)
+
 struct st_preempt_hang {
 	struct completion completion;
 	unsigned int count;
@@ -316,6 +320,13 @@ struct intel_engine_cs {
 		struct intel_timeline *timeline;
 	} legacy;
 
+	/*
+	 * We track the average duration of the idle pulse on parking the
+	 * engine to keep an estimate of the how the fast the engine is
+	 * under ideal conditions.
+	 */
+	struct ewma_delay delay;
+
 	/* Rather than have every client wait upon all user interrupts,
 	 * with the herd waking after every interrupt and each doing the
 	 * heavyweight seqno dance, we delegate the task (of being the
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
index 990efc27a4e4..4a9e48c12bd4 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
@@ -22,6 +22,11 @@ static inline void intel_gt_pm_get(struct intel_gt *gt)
 	intel_wakeref_get(&gt->wakeref);
 }
 
+static inline void __intel_gt_pm_get(struct intel_gt *gt)
+{
+	__intel_wakeref_get(&gt->wakeref);
+}
+
 static inline bool intel_gt_pm_get_if_awake(struct intel_gt *gt)
 {
 	return intel_wakeref_get_if_active(&gt->wakeref);
diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index 688b9b536a69..e19b0f8d1794 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -82,6 +82,19 @@ intel_wakeref_get(struct intel_wakeref *wf)
 	return 0;
 }
 
+/**
+ * __intel_wakeref_get: Acquire the wakeref, again
+ * @wf: the wakeref
+ *
+ * Acquire a hold on an already active wakeref.
+ */
+static inline void
+__intel_wakeref_get(struct intel_wakeref *wf)
+{
+	INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
+	atomic_inc(&wf->count);
+}
+
 /**
  * intel_wakeref_get_if_in_use: Acquire the wakeref
  * @wf: the wakeref
-- 
2.24.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
@ 2019-11-17 21:14   ` Patchwork
  0 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2019-11-17 21:14 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
URL   : https://patchwork.freedesktop.org/series/69591/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
4195647761fd drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
e23d471a9d67 drm/i915: Mark up the calling context for intel_wakeref_put()
-:14: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#14: 
References: a0855d24fc22d ("locking/mutex: Complain upon mutex API misuse in IRQ contexts")

total: 0 errors, 1 warnings, 0 checks, 239 lines checked
20fd261d5876 drm/i915/gem: Merge GGTT vma flush into a single loop
5bfbc55b352b drm/i915/gt: Only wait for register chipset flush if active
962b71005d58 drm/i915: Protect the obj->vma.list during iteration
-:9: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#9: 
<1>[  347.820823] BUG: kernel NULL pointer dereference, address: 0000000000000150

total: 0 errors, 1 warnings, 0 checks, 12 lines checked
0a646b25c478 drm/i915: Wait until the intel_wakeref idle callback is complete
5895a67b56fa drm/i915/gt: Declare timeline.lock to be irq-free
f53a92f34557 drm/i915/gt: Move new timelines to the end of active_list
-:10: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#10: 
References: 7936a22dd466 ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")

-:10: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 7936a22dd466 ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")'
#10: 
References: 7936a22dd466 ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")

total: 1 errors, 1 warnings, 0 checks, 8 lines checked
8e182bc3d107 drm/i915/gt: Schedule next retirement worker first
66bd73a31635 drm/i915/gt: Flush the requests after wedging on suspend
e0b84c5e85c0 drm/i915/selftests: Flush the active callbacks
94b665235fde drm/i915/selftests: Be explicit in ERR_PTR handling
-:11: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#11: 
References: f40a7b7558ef ("drm/i915: Initial selftests for exercising eviction")

-:11: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit f40a7b7558ef ("drm/i915: Initial selftests for exercising eviction")'
#11: 
References: f40a7b7558ef ("drm/i915: Initial selftests for exercising eviction")

-:25: WARNING:LONG_LINE: line over 100 characters
#25: FILE: drivers/gpu/drm/i915/selftests/i915_gem_evict.c:202:
+		pr_err("Failed to evict+insert, i915_gem_object_ggtt_pin returned err=%d\n", (int)PTR_ERR_OR_ZERO(vma));

total: 1 errors, 2 warnings, 0 checks, 10 lines checked
4b1469aa03a9 drm/i915/selftests: Exercise rc6 handling
-:54: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#54: 
new file mode 100644

-:59: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#59: FILE: drivers/gpu/drm/i915/gt/selftest_rc6.c:1:
+/*

-:60: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#60: FILE: drivers/gpu/drm/i915/gt/selftest_rc6.c:2:
+ * SPDX-License-Identifier: MIT

-:140: WARNING:LINE_SPACING: Missing a blank line after declarations
#140: FILE: drivers/gpu/drm/i915/gt/selftest_rc6.c:82:
+	unsigned int n, count;
+	I915_RND_STATE(prng);

-:211: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#211: FILE: drivers/gpu/drm/i915/gt/selftest_rc6.h:1:
+/*

-:212: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#212: FILE: drivers/gpu/drm/i915/gt/selftest_rc6.h:2:
+ * SPDX-License-Identifier: MIT

total: 0 errors, 6 warnings, 0 checks, 191 lines checked
5f675119a537 drm/i915/gt: Track engine round-trip times
-:14: WARNING:TYPO_SPELLING: 'preemptable' may be misspelled - perhaps 'preemptible'?
#14: 
the engine is idle at this point, and the pulse is non-preemptable, so

-:22: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 7e34f4e4aad3 ("drm/i915/gen8+: Add RC6 CTX corruption WA")'
#22: 
References: 7e34f4e4aad3 ("drm/i915/gen8+: Add RC6 CTX corruption WA")

total: 1 errors, 1 warnings, 0 checks, 121 lines checked


* ✓ Fi.CI.BAT: success for series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
@ 2019-11-17 21:35   ` Patchwork
  0 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2019-11-17 21:35 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
URL   : https://patchwork.freedesktop.org/series/69591/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_7358 -> Patchwork_15307
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/index.html

New tests
---------

  New tests have been introduced between CI_DRM_7358 and Patchwork_15307:

### New IGT tests (1) ###

  * igt@i915_selftest@live_late_gt_pm:
    - Statuses : 35 pass(s)
    - Exec time: [0.41, 1.70] s

  

Known issues
------------

  Here are the changes found in Patchwork_15307 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_pm_rpm@module-reload:
    - fi-skl-6770hq:      [PASS][1] -> [DMESG-WARN][2] ([fdo#112261])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/fi-skl-6770hq/igt@i915_pm_rpm@module-reload.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/fi-skl-6770hq/igt@i915_pm_rpm@module-reload.html

  * igt@i915_selftest@live_gem_contexts:
    - fi-cfl-8700k:       [PASS][3] -> [INCOMPLETE][4] ([fdo#111700])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/fi-cfl-8700k/igt@i915_selftest@live_gem_contexts.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/fi-cfl-8700k/igt@i915_selftest@live_gem_contexts.html

  
#### Possible fixes ####

  * igt@i915_pm_rpm@module-reload:
    - fi-skl-lmem:        [DMESG-WARN][5] ([fdo#112261]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/fi-skl-lmem/igt@i915_pm_rpm@module-reload.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/fi-skl-lmem/igt@i915_pm_rpm@module-reload.html

  * igt@kms_cursor_legacy@basic-flip-before-cursor-atomic:
    - fi-skl-6770hq:      [FAIL][7] ([fdo#109495]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/fi-skl-6770hq/igt@kms_cursor_legacy@basic-flip-before-cursor-atomic.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/fi-skl-6770hq/igt@kms_cursor_legacy@basic-flip-before-cursor-atomic.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - fi-skl-6770hq:      [WARN][9] ([fdo#112252]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/fi-skl-6770hq/igt@kms_setmode@basic-clone-single-crtc.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/fi-skl-6770hq/igt@kms_setmode@basic-clone-single-crtc.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109495]: https://bugs.freedesktop.org/show_bug.cgi?id=109495
  [fdo#109964]: https://bugs.freedesktop.org/show_bug.cgi?id=109964
  [fdo#111700]: https://bugs.freedesktop.org/show_bug.cgi?id=111700
  [fdo#112252]: https://bugs.freedesktop.org/show_bug.cgi?id=112252
  [fdo#112261]: https://bugs.freedesktop.org/show_bug.cgi?id=112261
  [fdo#112298]: https://bugs.freedesktop.org/show_bug.cgi?id=112298


Participating hosts (50 -> 42)
------------------------------

  Additional (2): fi-kbl-7500u fi-bsw-n3050 
  Missing    (10): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 fi-ivb-3770 fi-icl-u3 fi-elk-e7500 fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_7358 -> Patchwork_15307

  CI-20190529: 20190529
  CI_DRM_7358: 3ff71899c56c5ebe182c84edf0567280863e553e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5290: 14d19371610fabafa0ee5a21160373b90ada0a30 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_15307: 5f675119a5372d258d95acf2d8e1e7cf5b8ef42a @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

5f675119a537 drm/i915/gt: Track engine round-trip times
4b1469aa03a9 drm/i915/selftests: Exercise rc6 handling
94b665235fde drm/i915/selftests: Be explicit in ERR_PTR handling
e0b84c5e85c0 drm/i915/selftests: Flush the active callbacks
66bd73a31635 drm/i915/gt: Flush the requests after wedging on suspend
8e182bc3d107 drm/i915/gt: Schedule next retirement worker first
f53a92f34557 drm/i915/gt: Move new timelines to the end of active_list
5895a67b56fa drm/i915/gt: Declare timeline.lock to be irq-free
0a646b25c478 drm/i915: Wait until the intel_wakeref idle callback is complete
962b71005d58 drm/i915: Protect the obj->vma.list during iteration
5bfbc55b352b drm/i915/gt: Only wait for register chipset flush if active
20fd261d5876 drm/i915/gem: Merge GGTT vma flush into a single loop
e23d471a9d67 drm/i915: Mark up the calling context for intel_wakeref_put()
4195647761fd drm/i915/gt: Close race between engine_park and intel_gt_retire_requests

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/index.html

* ✗ Fi.CI.IGT: failure for series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
@ 2019-11-18  1:47   ` Patchwork
  0 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2019-11-18  1:47 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests
URL   : https://patchwork.freedesktop.org/series/69591/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_7358_full -> Patchwork_15307_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_15307_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_15307_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_15307_full:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@mock_requests:
    - shard-snb:          [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-snb1/igt@i915_selftest@mock_requests.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-snb4/igt@i915_selftest@mock_requests.html
    - shard-skl:          [PASS][3] -> [DMESG-WARN][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl7/igt@i915_selftest@mock_requests.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl10/igt@i915_selftest@mock_requests.html
    - shard-glk:          [PASS][5] -> [DMESG-WARN][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-glk8/igt@i915_selftest@mock_requests.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-glk2/igt@i915_selftest@mock_requests.html
    - shard-tglb:         [PASS][7] -> [DMESG-WARN][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb2/igt@i915_selftest@mock_requests.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb2/igt@i915_selftest@mock_requests.html
    - shard-apl:          [PASS][9] -> [DMESG-WARN][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-apl2/igt@i915_selftest@mock_requests.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-apl1/igt@i915_selftest@mock_requests.html
    - shard-kbl:          [PASS][11] -> [DMESG-WARN][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-kbl3/igt@i915_selftest@mock_requests.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-kbl7/igt@i915_selftest@mock_requests.html
    - shard-hsw:          [PASS][13] -> [DMESG-WARN][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-hsw1/igt@i915_selftest@mock_requests.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-hsw1/igt@i915_selftest@mock_requests.html
    - shard-iclb:         [PASS][15] -> [DMESG-WARN][16] +1 similar issue
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb4/igt@i915_selftest@mock_requests.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb7/igt@i915_selftest@mock_requests.html

  
New tests
---------

  New tests have been introduced between CI_DRM_7358_full and Patchwork_15307_full:

### New IGT tests (1) ###

  * igt@i915_selftest@live_late_gt_pm:
    - Statuses :
    - Exec time: [None] s

  

Known issues
------------

  Here are the changes found in Patchwork_15307_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_isolation@vcs1-clean:
    - shard-iclb:         [PASS][17] -> [SKIP][18] ([fdo#109276] / [fdo#112080]) +3 similar issues
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb2/igt@gem_ctx_isolation@vcs1-clean.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb5/igt@gem_ctx_isolation@vcs1-clean.html

  * igt@gem_ctx_shared@exec-single-timeline-bsd:
    - shard-iclb:         [PASS][19] -> [SKIP][20] ([fdo#110841])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb6/igt@gem_ctx_shared@exec-single-timeline-bsd.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb4/igt@gem_ctx_shared@exec-single-timeline-bsd.html

  * igt@gem_ctx_switch@vcs1-heavy:
    - shard-iclb:         [PASS][21] -> [SKIP][22] ([fdo#112080]) +11 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb4/igt@gem_ctx_switch@vcs1-heavy.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb8/igt@gem_ctx_switch@vcs1-heavy.html

  * igt@gem_exec_balancer@smoke:
    - shard-iclb:         [PASS][23] -> [SKIP][24] ([fdo#110854])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb1/igt@gem_exec_balancer@smoke.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb6/igt@gem_exec_balancer@smoke.html

  * igt@gem_exec_create@forked:
    - shard-tglb:         [PASS][25] -> [INCOMPLETE][26] ([fdo#108838] / [fdo#111747])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb7/igt@gem_exec_create@forked.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb6/igt@gem_exec_create@forked.html

  * igt@gem_exec_reloc@basic-wc-active:
    - shard-skl:          [PASS][27] -> [DMESG-WARN][28] ([fdo#106107])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl1/igt@gem_exec_reloc@basic-wc-active.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl6/igt@gem_exec_reloc@basic-wc-active.html

  * igt@gem_exec_schedule@preemptive-hang-bsd:
    - shard-iclb:         [PASS][29] -> [SKIP][30] ([fdo#112146]) +3 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb5/igt@gem_exec_schedule@preemptive-hang-bsd.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb2/igt@gem_exec_schedule@preemptive-hang-bsd.html

  * igt@gem_softpin@noreloc-s3:
    - shard-apl:          [PASS][31] -> [DMESG-WARN][32] ([fdo#108566]) +1 similar issue
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-apl7/igt@gem_softpin@noreloc-s3.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-apl1/igt@gem_softpin@noreloc-s3.html

  * igt@gem_userptr_blits@dmabuf-unsync:
    - shard-hsw:          [PASS][33] -> [DMESG-WARN][34] ([fdo#111870])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-hsw2/igt@gem_userptr_blits@dmabuf-unsync.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-hsw1/igt@gem_userptr_blits@dmabuf-unsync.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy-gup:
    - shard-snb:          [PASS][35] -> [DMESG-WARN][36] ([fdo#111870])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-snb6/igt@gem_userptr_blits@map-fixed-invalidate-busy-gup.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-snb1/igt@gem_userptr_blits@map-fixed-invalidate-busy-gup.html

  * igt@gem_workarounds@suspend-resume-fd:
    - shard-kbl:          [PASS][37] -> [DMESG-WARN][38] ([fdo#108566]) +6 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-kbl6/igt@gem_workarounds@suspend-resume-fd.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-kbl4/igt@gem_workarounds@suspend-resume-fd.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [PASS][39] -> [FAIL][40] ([fdo#111830 ])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb6/igt@i915_pm_dc@dc6-psr.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb4/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_suspend@sysfs-reader:
    - shard-tglb:         [PASS][41] -> [INCOMPLETE][42] ([fdo#111832] / [fdo#111850]) +6 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb4/igt@i915_suspend@sysfs-reader.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb2/igt@i915_suspend@sysfs-reader.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy:
    - shard-hsw:          [PASS][43] -> [SKIP][44] ([fdo#109271])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-hsw1/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-hsw2/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-hsw:          [PASS][45] -> [INCOMPLETE][46] ([fdo#103540])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-hsw6/igt@kms_flip@flip-vs-suspend-interruptible.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-hsw2/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_frontbuffer_tracking@fbc-stridechange:
    - shard-tglb:         [PASS][47] -> [FAIL][48] ([fdo#103167]) +4 similar issues
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb7/igt@kms_frontbuffer_tracking@fbc-stridechange.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb5/igt@kms_frontbuffer_tracking@fbc-stridechange.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-shrfb-fliptrack:
    - shard-iclb:         [PASS][49] -> [FAIL][50] ([fdo#103167]) +6 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb4/igt@kms_frontbuffer_tracking@fbcpsr-1p-shrfb-fliptrack.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb8/igt@kms_frontbuffer_tracking@fbcpsr-1p-shrfb-fliptrack.html

  * igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-wc:
    - shard-skl:          [PASS][51] -> [FAIL][52] ([fdo#103167])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl3/igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-wc.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl1/igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-wc.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
    - shard-skl:          [PASS][53] -> [FAIL][54] ([fdo#108145])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl3/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl1/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         [PASS][55] -> [SKIP][56] ([fdo#109441]) +1 similar issue
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb2/igt@kms_psr@psr2_cursor_render.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb5/igt@kms_psr@psr2_cursor_render.html

  * igt@prime_busy@hang-bsd2:
    - shard-iclb:         [PASS][57] -> [SKIP][58] ([fdo#109276]) +16 similar issues
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb1/igt@prime_busy@hang-bsd2.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb5/igt@prime_busy@hang-bsd2.html

  
#### Possible fixes ####

  * igt@gem_ctx_persistence@vcs1-queued:
    - shard-iclb:         [SKIP][59] ([fdo#109276] / [fdo#112080]) -> [PASS][60] +3 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb6/igt@gem_ctx_persistence@vcs1-queued.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb4/igt@gem_ctx_persistence@vcs1-queued.html

  * igt@gem_ctx_switch@vcs1-heavy-queue:
    - shard-iclb:         [SKIP][61] ([fdo#112080]) -> [PASS][62] +11 similar issues
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb6/igt@gem_ctx_switch@vcs1-heavy-queue.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb2/igt@gem_ctx_switch@vcs1-heavy-queue.html

  * igt@gem_exec_parallel@fds:
    - shard-tglb:         [INCOMPLETE][63] ([fdo#111867]) -> [PASS][64]
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb6/igt@gem_exec_parallel@fds.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb7/igt@gem_exec_parallel@fds.html

  * igt@gem_exec_schedule@preempt-queue-bsd2:
    - shard-iclb:         [SKIP][65] ([fdo#109276]) -> [PASS][66] +17 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb5/igt@gem_exec_schedule@preempt-queue-bsd2.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb2/igt@gem_exec_schedule@preempt-queue-bsd2.html

  * igt@gem_exec_schedule@reorder-wide-bsd:
    - shard-iclb:         [SKIP][67] ([fdo#112146]) -> [PASS][68] +4 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb2/igt@gem_exec_schedule@reorder-wide-bsd.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb8/igt@gem_exec_schedule@reorder-wide-bsd.html

  * igt@gem_exec_whisper@normal:
    - shard-tglb:         [INCOMPLETE][69] ([fdo#111747]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb6/igt@gem_exec_whisper@normal.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb9/igt@gem_exec_whisper@normal.html

  * igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive:
    - shard-apl:          [TIMEOUT][71] ([fdo#112113]) -> [PASS][72] +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-apl8/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-apl2/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive.html

  * igt@gem_sync@basic-store-each:
    - shard-tglb:         [INCOMPLETE][73] ([fdo#111647] / [fdo#111747]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb4/igt@gem_sync@basic-store-each.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb4/igt@gem_sync@basic-store-each.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy-gup:
    - shard-hsw:          [DMESG-WARN][75] ([fdo#111870]) -> [PASS][76] +2 similar issues
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-hsw1/igt@gem_userptr_blits@map-fixed-invalidate-busy-gup.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-hsw2/igt@gem_userptr_blits@map-fixed-invalidate-busy-gup.html

  * igt@gem_userptr_blits@sync-unmap-after-close:
    - shard-snb:          [DMESG-WARN][77] ([fdo#111870]) -> [PASS][78]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-snb7/igt@gem_userptr_blits@sync-unmap-after-close.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-snb1/igt@gem_userptr_blits@sync-unmap-after-close.html

  * igt@gem_workarounds@suspend-resume:
    - shard-tglb:         [INCOMPLETE][79] ([fdo#111832] / [fdo#111850]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb1/igt@gem_workarounds@suspend-resume.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb1/igt@gem_workarounds@suspend-resume.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-iclb:         [FAIL][81] ([fdo#111830 ]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb4/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_rpm@system-suspend-execbuf:
    - shard-hsw:          [INCOMPLETE][83] ([fdo#103540] / [fdo#107807]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-hsw1/igt@i915_pm_rpm@system-suspend-execbuf.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-hsw2/igt@i915_pm_rpm@system-suspend-execbuf.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-kbl:          [DMESG-WARN][85] ([fdo#108566]) -> [PASS][86] +5 similar issues
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-kbl4/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-kbl3/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible:
    - shard-skl:          [FAIL][87] ([fdo#105363]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl2/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl3/igt@kms_flip@flip-vs-expired-vblank-interruptible.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-apl:          [DMESG-WARN][89] ([fdo#108566]) -> [PASS][90] +2 similar issues
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-apl4/igt@kms_flip@flip-vs-suspend-interruptible.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-render:
    - shard-iclb:         [FAIL][91] ([fdo#103167]) -> [PASS][92] +2 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-render.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-tglb:         [INCOMPLETE][93] ([fdo#111832] / [fdo#111850] / [fdo#111884]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb2/igt@kms_frontbuffer_tracking@fbc-suspend.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb8/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-pwrite:
    - shard-tglb:         [FAIL][95] ([fdo#103167]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-tglb2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-pwrite.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-tglb4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-pwrite.html

  * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min:
    - shard-skl:          [FAIL][97] ([fdo#108145]) -> [PASS][98]
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl10/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl7/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [FAIL][99] ([fdo#108145] / [fdo#110403]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_plane_scaling@pipe-c-scaler-with-clipping-clamping:
    - shard-iclb:         [INCOMPLETE][101] ([fdo#107713] / [fdo#110041]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb7/igt@kms_plane_scaling@pipe-c-scaler-with-clipping-clamping.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb3/igt@kms_plane_scaling@pipe-c-scaler-with-clipping-clamping.html

  * igt@kms_psr2_su@page_flip:
    - shard-iclb:         [SKIP][103] ([fdo#109642] / [fdo#111068]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb3/igt@kms_psr2_su@page_flip.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard-iclb2/igt@kms_psr2_su@page_flip.html

  * igt@kms_psr@psr2_primary_page_flip:
    - shard-iclb:         [SKIP][105] ([fdo#109441]) -> [PASS][106] +1 similar issue
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7358/shard-iclb1/igt@kms_psr@psr2_primary_page_flip.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/shard

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15307/index.html

end of thread, other threads:[~2019-11-18  1:47 UTC | newest]

Thread overview: 34+ messages
2019-11-17 21:03 [PATCH 01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests Chris Wilson
2019-11-17 21:03 ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 02/14] drm/i915: Mark up the calling context for intel_wakeref_put() Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 03/14] drm/i915/gem: Merge GGTT vma flush into a single loop Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 04/14] drm/i915/gt: Only wait for register chipset flush if active Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 05/14] drm/i915: Protect the obj->vma.list during iteration Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 06/14] drm/i915: Wait until the intel_wakeref idle callback is complete Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 07/14] drm/i915/gt: Declare timeline.lock to be irq-free Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 08/14] drm/i915/gt: Move new timelines to the end of active_list Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 09/14] drm/i915/gt: Schedule next retirement worker first Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 10/14] drm/i915/gt: Flush the requests after wedging on suspend Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 11/14] drm/i915/selftests: Flush the active callbacks Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 12/14] drm/i915/selftests: Be explicit in ERR_PTR handling Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 13/14] drm/i915/selftests: Exercise rc6 handling Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:03 ` [PATCH 14/14] drm/i915/gt: Track engine round-trip times Chris Wilson
2019-11-17 21:03   ` [Intel-gfx] " Chris Wilson
2019-11-17 21:14 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/14] drm/i915/gt: Close race between engine_park and intel_gt_retire_requests Patchwork
2019-11-17 21:14   ` [Intel-gfx] " Patchwork
2019-11-17 21:35 ` ✓ Fi.CI.BAT: success " Patchwork
2019-11-17 21:35   ` [Intel-gfx] " Patchwork
2019-11-18  1:47 ` ✗ Fi.CI.IGT: failure " Patchwork
2019-11-18  1:47   ` [Intel-gfx] " Patchwork
