* [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
@ 2020-01-06 10:22 Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 2/8] drm/i915/selftests: Impose a timeout for request submission Chris Wilson
                   ` (10 more replies)
  0 siblings, 11 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

The local var does not need the __user annotation as it lives on the
kernel stack and is not a pointer into the __user address space.

drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c:989:9: warning: dereference of noderef expression
drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c:990:13: warning: dereference of noderef expression
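
As a minimal illustration (names are mine, not the selftest's): the
__user annotation belongs on the pointer into userspace, which has to be
accessed via the uaccess helpers, while a plain local on the kernel
stack needs no annotation at all. Since sparse attaches the
address-space attribute to the base type, the old combined declaration
effectively made the plain local a noderef __user object as well, hence
the warnings above.

#include <linux/types.h>
#include <linux/uaccess.h>

/* Illustrative only: ux points into userspace, bbe lives on the stack. */
static int read_one_word(u32 __user *ux)
{
	u32 bbe;

	if (get_user(bbe, ux)) /* checked access through the __user pointer */
		return -EFAULT;

	return bbe ? -EINVAL : 0;
}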

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index e9e8f62c1185..ef7c74cff28a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -958,8 +958,9 @@ static int __igt_mmap_gpu(struct drm_i915_private *i915,
 {
 	struct intel_engine_cs *engine;
 	struct i915_mmap_offset *mmo;
-	u32 __user *ux, bbe;
 	unsigned long addr;
+	u32 __user *ux;
+	u32 bbe;
 	int err;
 
 	/*
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 2/8] drm/i915/selftests: Impose a timeout for request submission
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 3/8] drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co Chris Wilson
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

Avoid spinning indefinitely waiting for the request to be submitted, and
instead apply a timeout. A secondary benefit is that the error message
will show which suspect is blocked.
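
The timed-poll pattern adopted by wait_for_submit() in the diff below
is, in the abstract, just the following standalone sketch (not i915
code; names are illustrative):

#include <linux/types.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/errno.h>

/* Bounded-poll idiom: sample, yield, give up once the deadline passes. */
static int poll_until(bool (*done)(void *data), void *data,
		      unsigned long timeout)
{
	timeout += jiffies; /* relative timeout -> absolute deadline */
	do {
		cond_resched(); /* let other runnable tasks make progress */
		if (done(data))
			return 0;
	} while (time_before(jiffies, timeout));

	return -ETIME; /* the caller reports which wait expired */
}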

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 627613d85db8..d96604baab94 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -527,13 +527,19 @@ static struct i915_request *nop_request(struct intel_engine_cs *engine)
 	return rq;
 }
 
-static void wait_for_submit(struct intel_engine_cs *engine,
-			    struct i915_request *rq)
+static int wait_for_submit(struct intel_engine_cs *engine,
+			   struct i915_request *rq,
+			   unsigned long timeout)
 {
+	timeout += jiffies;
 	do {
 		cond_resched();
 		intel_engine_flush_submission(engine);
-	} while (!i915_request_is_active(rq));
+		if (i915_request_is_active(rq))
+			return 0;
+	} while (time_before(jiffies, timeout));
+
+	return -ETIME;
 }
 
 static long timeslice_threshold(const struct intel_engine_cs *engine)
@@ -601,7 +607,12 @@ static int live_timeslice_queue(void *arg)
 			goto err_heartbeat;
 		}
 		engine->schedule(rq, &attr);
-		wait_for_submit(engine, rq);
+		err = wait_for_submit(engine, rq, HZ / 2);
+		if (err) {
+			pr_err("%s: Timed out trying to submit semaphores\n",
+			       engine->name);
+			goto err_rq;
+		}
 
 		/* ELSP[1]: nop request */
 		nop = nop_request(engine);
@@ -609,8 +620,13 @@ static int live_timeslice_queue(void *arg)
 			err = PTR_ERR(nop);
 			goto err_rq;
 		}
-		wait_for_submit(engine, nop);
+		err = wait_for_submit(engine, nop, HZ / 2);
 		i915_request_put(nop);
+		if (err) {
+			pr_err("%s: Timed out trying to submit nop\n",
+			       engine->name);
+			goto err_rq;
+		}
 
 		GEM_BUG_ON(i915_request_completed(rq));
 		GEM_BUG_ON(execlists_active(&engine->execlists) != rq);
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 3/8] drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 2/8] drm/i915/selftests: Impose a timeout for request submission Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 4/8] drm/i915: Merge i915_request.flags with i915_request.fence.flags Chris Wilson
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

Convert the few remaining GEM_TRACE() calls used for debugging over to
the appropriate GT_TRACE or RQ_TRACE.
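
Judging purely by the call-site conversions in the diff, RQ_TRACE()
folds in the engine name and fence id that the old GEM_TRACE() calls
formatted by hand. A rough sketch of such a wrapper, built on the
existing GEM_TRACE() and not the real i915 definition, would be:

/* Sketch only -- not the real macro; it just shows the call-site change. */
#define RQ_TRACE_SKETCH(rq, fmt, ...) \
	GEM_TRACE("%s rq=%llx:%lld " fmt, \
		  (rq)->engine->name, \
		  (rq)->fence.context, \
		  (rq)->fence.seqno, ##__VA_ARGS__)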

References: 639f2f24895f ("drm/i915: Introduce new macros for tracing")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c |  2 ++
 drivers/gpu/drm/i915/gt/intel_reset.c   | 21 ++++++++-------------
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index fbaa9df6f436..4d0bc1478ccd 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -152,6 +152,8 @@ static int __intel_context_active(struct i915_active *active)
 	struct intel_context *ce = container_of(active, typeof(*ce), active);
 	int err;
 
+	CE_TRACE(ce, "active\n");
+
 	intel_context_get(ce);
 
 	err = intel_ring_pin(ce->ring);
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index fe919a1af904..76de33ae9efe 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -147,11 +147,7 @@ static void mark_innocent(struct i915_request *rq)
 
 void __i915_request_reset(struct i915_request *rq, bool guilty)
 {
-	GEM_TRACE("%s rq=%llx:%lld, guilty? %s\n",
-		  rq->engine->name,
-		  rq->fence.context,
-		  rq->fence.seqno,
-		  yesno(guilty));
+	RQ_TRACE(rq, "guilty? %s\n", yesno(guilty));
 
 	GEM_BUG_ON(i915_request_completed(rq));
 
@@ -624,7 +620,7 @@ int __intel_gt_reset(struct intel_gt *gt, intel_engine_mask_t engine_mask)
 	 */
 	intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL);
 	for (retry = 0; ret == -ETIMEDOUT && retry < retries; retry++) {
-		GEM_TRACE("engine_mask=%x\n", engine_mask);
+		GT_TRACE(gt, "engine_mask=%x\n", engine_mask);
 		preempt_disable();
 		ret = reset(gt, engine_mask, retry);
 		preempt_enable();
@@ -784,8 +780,7 @@ static void nop_submit_request(struct i915_request *request)
 	struct intel_engine_cs *engine = request->engine;
 	unsigned long flags;
 
-	GEM_TRACE("%s fence %llx:%lld -> -EIO\n",
-		  engine->name, request->fence.context, request->fence.seqno);
+	RQ_TRACE(request, "-EIO\n");
 	dma_fence_set_error(&request->fence, -EIO);
 
 	spin_lock_irqsave(&engine->active.lock, flags);
@@ -812,7 +807,7 @@ static void __intel_gt_set_wedged(struct intel_gt *gt)
 			intel_engine_dump(engine, &p, "%s\n", engine->name);
 	}
 
-	GEM_TRACE("start\n");
+	GT_TRACE(gt, "start\n");
 
 	/*
 	 * First, stop submission to hw, but do not yet complete requests by
@@ -843,7 +838,7 @@ static void __intel_gt_set_wedged(struct intel_gt *gt)
 
 	reset_finish(gt, awake);
 
-	GEM_TRACE("end\n");
+	GT_TRACE(gt, "end\n");
 }
 
 void intel_gt_set_wedged(struct intel_gt *gt)
@@ -869,7 +864,7 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 	if (test_bit(I915_WEDGED_ON_INIT, &gt->reset.flags))
 		return false;
 
-	GEM_TRACE("start\n");
+	GT_TRACE(gt, "start\n");
 
 	/*
 	 * Before unwedging, make sure that all pending operations
@@ -931,7 +926,7 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 	 */
 	intel_engines_reset_default_submission(gt);
 
-	GEM_TRACE("end\n");
+	GT_TRACE(gt, "end\n");
 
 	smp_mb__before_atomic(); /* complete takeover before enabling execbuf */
 	clear_bit(I915_WEDGED, &gt->reset.flags);
@@ -1006,7 +1001,7 @@ void intel_gt_reset(struct intel_gt *gt,
 	intel_engine_mask_t awake;
 	int ret;
 
-	GEM_TRACE("flags=%lx\n", gt->reset.flags);
+	GT_TRACE(gt, "flags=%lx\n", gt->reset.flags);
 
 	might_sleep();
 	GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &gt->reset.flags));
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 4/8] drm/i915: Merge i915_request.flags with i915_request.fence.flags
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 2/8] drm/i915/selftests: Impose a timeout for request submission Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 3/8] drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 5/8] drm/i915: Replace vma parking with a clock aging algorithm Chris Wilson
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

As we already have a flags field buried within i915_request (the
embedded fence.flags), reuse it!
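
In practice the conversion swaps an open-coded bitmask for the bitops
API on the fence's unsigned long flags word; the diff uses the
non-atomic __set_bit() on requests that are not yet visible to others
and the atomic set_bit() in the RPS boost path, presumably because the
latter can race with other fence.flags updates. A toy sketch of the two
idioms (illustrative names only):

#include <linux/bitops.h>
#include <linux/types.h>

#define EXAMPLE_NOPREEMPT_MASK	BIT(1)	/* old style: a mask value */
#define EXAMPLE_NOPREEMPT_BIT	1	/* new style: a bit number */

/* Old idiom: OR a mask into a private bitmask field. */
static void mark_nopreempt_mask(unsigned long *flags)
{
	*flags |= EXAMPLE_NOPREEMPT_MASK;
}

/* New idiom: use the bitops API on a shared unsigned long word. */
static void mark_nopreempt_bit(unsigned long *flags)
{
	__set_bit(EXAMPLE_NOPREEMPT_BIT, flags);	/* non-atomic variant */
}

static bool has_nopreempt(const unsigned long *flags)
{
	return test_bit(EXAMPLE_NOPREEMPT_BIT, flags);
}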

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |  2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  4 +-
 drivers/gpu/drm/i915/gt/intel_rps.c           |  2 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  2 +-
 drivers/gpu/drm/i915/i915_request.c           |  1 -
 drivers/gpu/drm/i915/i915_request.h           | 43 +++++++++++++++----
 7 files changed, 41 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index cbd2bcade3c8..d5a0f5ae4a8b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2173,7 +2173,7 @@ static int eb_submit(struct i915_execbuffer *eb)
 	}
 
 	if (intel_context_nopreempt(eb->context))
-		eb->request->flags |= I915_REQUEST_NOPREEMPT;
+		__set_bit(I915_FENCE_FLAG_NOPREEMPT, &eb->request->fence.flags);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 742628e40201..6c6fd185457c 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -199,7 +199,7 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 		goto out_unlock;
 	}
 
-	rq->flags |= I915_REQUEST_SENTINEL;
+	__set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
 	idle_pulse(engine, rq);
 
 	__i915_request_commit(rq);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 170b5a0139a3..28c05e7a1510 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1538,8 +1538,8 @@ static bool can_merge_rq(const struct i915_request *prev,
 	if (i915_request_completed(next))
 		return true;
 
-	if (unlikely((prev->flags ^ next->flags) &
-		     (I915_REQUEST_NOPREEMPT | I915_REQUEST_SENTINEL)))
+	if (unlikely((prev->fence.flags ^ next->fence.flags) &
+		     (I915_FENCE_FLAG_NOPREEMPT | I915_FENCE_FLAG_SENTINEL)))
 		return false;
 
 	if (!can_merge_ctx(prev->context, next->context))
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index f232036c3c7a..d2a3d935d186 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -777,7 +777,7 @@ void intel_rps_boost(struct i915_request *rq)
 	spin_lock_irqsave(&rq->lock, flags);
 	if (!i915_request_has_waitboost(rq) &&
 	    !dma_fence_is_signaled_locked(&rq->fence)) {
-		rq->flags |= I915_REQUEST_WAITBOOST;
+		set_bit(I915_FENCE_FLAG_BOOST, &rq->fence.flags);
 
 		if (!atomic_fetch_inc(&rps->num_waiters) &&
 		    READ_ONCE(rps->cur_freq) < rps->boost_freq)
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index d96604baab94..15cda024e3e4 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -1153,7 +1153,7 @@ static int live_nopreempt(void *arg)
 		}
 
 		/* Low priority client, but unpreemptable! */
-		rq_a->flags |= I915_REQUEST_NOPREEMPT;
+		__set_bit(I915_FENCE_FLAG_NOPREEMPT, &rq_a->fence.flags);
 
 		i915_request_add(rq_a);
 		if (!igt_wait_for_spinner(&a.spin, rq_a)) {
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 44a0d1a950c5..be185886e4fc 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -658,7 +658,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->engine = ce->engine;
 	rq->ring = ce->ring;
 	rq->execution_mask = ce->engine->mask;
-	rq->flags = 0;
 
 	RCU_INIT_POINTER(rq->timeline, tl);
 	RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 9784421a3b4d..031433691a06 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -77,6 +77,38 @@ enum {
 	 * a request is on the various signal_list.
 	 */
 	I915_FENCE_FLAG_SIGNAL,
+
+	/*
+	 * I915_FENCE_FLAG_NOPREEMPT - this request should not be preempted
+	 *
+	 * The execution of some requests should not be interrupted. This is
+	 * a sensitive operation as it makes the request super important,
+	 * blocking other higher priority work. Abuse of this flag will
+	 * lead to quality of service issues.
+	 */
+	I915_FENCE_FLAG_NOPREEMPT,
+
+	/*
+	 * I915_FENCE_FLAG_SENTINEL - this request should be last in the queue
+	 *
+	 * A high priority sentinel request may be submitted to clear the
+	 * submission queue. As it will be the only request in-flight, upon
+	 * execution all other active requests will have been preempted and
+	 * unsubmitted. This preemptive pulse is used to re-evaluate the
+	 * in-flight requests, particularly in cases where an active context
+	 * is banned and those active requests need to be cancelled.
+	 */
+	I915_FENCE_FLAG_SENTINEL,
+
+	/*
+	 * I915_FENCE_FLAG_BOOST - upclock the gpu for this request
+	 *
+	 * Some requests are more important than others! In particular, a
+	 * request that the user is waiting on is typically required for
+	 * interactive latency, which we want to minimise by upclocking
+	 * the GPU. Here we track such boost requests on a per-request basis.
+	 */
+	I915_FENCE_FLAG_BOOST,
 };
 
 /**
@@ -225,11 +257,6 @@ struct i915_request {
 	/** Time at which this request was emitted, in jiffies. */
 	unsigned long emitted_jiffies;
 
-	unsigned long flags;
-#define I915_REQUEST_WAITBOOST	BIT(0)
-#define I915_REQUEST_NOPREEMPT	BIT(1)
-#define I915_REQUEST_SENTINEL	BIT(2)
-
 	/** timeline->request entry for this request */
 	struct list_head link;
 
@@ -442,18 +469,18 @@ static inline void i915_request_mark_complete(struct i915_request *rq)
 
 static inline bool i915_request_has_waitboost(const struct i915_request *rq)
 {
-	return rq->flags & I915_REQUEST_WAITBOOST;
+	return test_bit(I915_FENCE_FLAG_BOOST, &rq->fence.flags);
 }
 
 static inline bool i915_request_has_nopreempt(const struct i915_request *rq)
 {
 	/* Preemption should only be disabled very rarely */
-	return unlikely(rq->flags & I915_REQUEST_NOPREEMPT);
+	return unlikely(test_bit(I915_FENCE_FLAG_NOPREEMPT, &rq->fence.flags));
 }
 
 static inline bool i915_request_has_sentinel(const struct i915_request *rq)
 {
-	return unlikely(rq->flags & I915_REQUEST_SENTINEL);
+	return unlikely(test_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags));
 }
 
 static inline struct intel_timeline *
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 5/8] drm/i915: Replace vma parking with a clock aging algorithm
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (2 preceding siblings ...)
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 4/8] drm/i915: Merge i915_request.flags with i915_request.fence.flags Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 6/8] drm/i915: Only retire requests when eviction is allowed to block Chris Wilson
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

We cache the user's vma for a brief period of time after they close them
so that if they are immediately reopened we avoid having to unbind and
rebind them. This happens quite frequently for display servers, which
only keep a client's frame open for as long as they are copying from it,
and so open/close every vma at about 30 Hz (every other frame for
double buffering).

Our current strategy is to keep the vma alive until the next global idle
point. However, this cache should be purely temporal, so switch over from
using the parked notifier to a clock-based aging algorithm of its own:
if a closed vma is not reused within 2 clock ticks, it is destroyed.
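
In outline, the two-bucket clock described above (implemented for real
in i915_vma_clock() below) works like this generic sketch (illustrative
types, not the i915 structures):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/timer.h>

struct clock_cache {
	spinlock_t lock;
	struct list_head age[2];	/* age[0]: < 1 tick old, age[1]: 1-2 ticks */
	struct delayed_work work;
};

struct cached_entry {
	struct list_head link;		/* empty while the entry is in use */
};

static void clock_tick(struct work_struct *w)
{
	struct clock_cache *cc = container_of(w, typeof(*cc), work.work);
	struct cached_entry *e, *next;

	spin_lock(&cc->lock);

	/* Reap entries that survived a whole tick without being reused. */
	list_for_each_entry_safe(e, next, &cc->age[1], link)
		list_del_init(&e->link);	/* the real code unbinds/frees here */

	/* Promote the younger bucket and rearm while work remains. */
	list_splice_tail_init(&cc->age[0], &cc->age[1]);
	if (!list_empty(&cc->age[1]))
		schedule_delayed_work(&cc->work, round_jiffies_up_relative(HZ));

	spin_unlock(&cc->lock);
}

Reopening a cached entry simply takes it off whichever bucket it sits on
under the same lock, which is what __i915_vma_remove_closed() does in
the patch.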

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gt.c       |  3 --
 drivers/gpu/drm/i915/gt/intel_gt_pm.c    |  1 -
 drivers/gpu/drm/i915/gt/intel_gt_types.h |  3 --
 drivers/gpu/drm/i915/i915_debugfs.c      |  3 ++
 drivers/gpu/drm/i915/i915_drv.c          |  4 +-
 drivers/gpu/drm/i915/i915_drv.h          |  1 +
 drivers/gpu/drm/i915/i915_vma.c          | 68 ++++++++++++++++++------
 drivers/gpu/drm/i915/i915_vma.h          | 11 +++-
 8 files changed, 69 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 8a17abfbb19f..d0879b5fc313 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -23,9 +23,6 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
 
 	spin_lock_init(&gt->irq_lock);
 
-	INIT_LIST_HEAD(&gt->closed_vma);
-	spin_lock_init(&gt->closed_lock);
-
 	intel_gt_init_reset(gt);
 	intel_gt_init_requests(gt);
 	intel_gt_init_timelines(gt);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index d1c2f034296a..3302f676d12b 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -80,7 +80,6 @@ static int __gt_park(struct intel_wakeref *wf)
 
 	intel_gt_park_requests(gt);
 
-	i915_vma_parked(gt);
 	i915_pmu_gt_parked(i915);
 	intel_rps_park(&gt->rps);
 	intel_rc6_park(&gt->rc6);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index 96890dd12b5f..4589dea67b8f 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -58,9 +58,6 @@ struct intel_gt {
 	struct intel_wakeref wakeref;
 	atomic_t user_wakeref;
 
-	struct list_head closed_vma;
-	spinlock_t closed_lock; /* guards the list of closed_vma */
-
 	struct intel_reset reset;
 
 	/**
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 0ac98e39eb75..00fb03d772ab 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -3589,6 +3589,9 @@ i915_drop_caches_set(void *data, u64 val)
 	if (ret)
 		return ret;
 
+	if (val & DROP_IDLE)
+		i915_vma_clock_flush(&i915->vma_clock);
+
 	fs_reclaim_acquire(GFP_KERNEL);
 	if (val & DROP_BOUND)
 		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index f7385abdd74b..9fde3918094f 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -523,8 +523,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
 
 	intel_wopcm_init_early(&dev_priv->wopcm);
 
+	i915_vma_clock_init_early(&dev_priv->vma_clock);
 	intel_gt_init_early(&dev_priv->gt, dev_priv);
-
 	i915_gem_init_early(dev_priv);
 
 	/* This must be called before any calls to HAS_PCH_* */
@@ -561,6 +561,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
  */
 static void i915_driver_late_release(struct drm_i915_private *dev_priv)
 {
+	i915_vma_clock_flush(&dev_priv->vma_clock);
+
 	intel_irq_fini(dev_priv);
 	intel_power_domains_cleanup(dev_priv);
 	i915_gem_cleanup_early(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 50181113dd2b..d61d73c680b1 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1240,6 +1240,7 @@ struct drm_i915_private {
 	struct intel_runtime_pm runtime_pm;
 
 	struct i915_perf perf;
+	struct i915_vma_clock vma_clock;
 
 	/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
 	struct intel_gt gt;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index cbd783c31adb..925100c0690e 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -985,8 +985,7 @@ int i915_ggtt_pin(struct i915_vma *vma, u32 align, unsigned int flags)
 
 void i915_vma_close(struct i915_vma *vma)
 {
-	struct intel_gt *gt = vma->vm->gt;
-	unsigned long flags;
+	struct i915_vma_clock *clock = &vma->vm->i915->vma_clock;
 
 	GEM_BUG_ON(i915_vma_is_closed(vma));
 
@@ -1002,18 +1001,20 @@ void i915_vma_close(struct i915_vma *vma)
 	 * causing us to rebind the VMA once more. This ends up being a lot
 	 * of wasted work for the steady state.
 	 */
-	spin_lock_irqsave(&gt->closed_lock, flags);
-	list_add(&vma->closed_link, &gt->closed_vma);
-	spin_unlock_irqrestore(&gt->closed_lock, flags);
+	spin_lock(&clock->lock);
+	list_add(&vma->closed_link, &clock->age[0]);
+	spin_unlock(&clock->lock);
+
+	schedule_delayed_work(&clock->work, round_jiffies_up_relative(HZ));
 }
 
 static void __i915_vma_remove_closed(struct i915_vma *vma)
 {
-	struct intel_gt *gt = vma->vm->gt;
+	struct i915_vma_clock *clock = &vma->vm->i915->vma_clock;
 
-	spin_lock_irq(&gt->closed_lock);
+	spin_lock(&clock->lock);
 	list_del_init(&vma->closed_link);
-	spin_unlock_irq(&gt->closed_lock);
+	spin_unlock(&clock->lock);
 }
 
 void i915_vma_reopen(struct i915_vma *vma)
@@ -1051,12 +1052,28 @@ void i915_vma_release(struct kref *ref)
 	i915_vma_free(vma);
 }
 
-void i915_vma_parked(struct intel_gt *gt)
+static void i915_vma_clock(struct work_struct *w)
 {
+	struct i915_vma_clock *clock =
+		container_of(w, typeof(*clock), work.work);
 	struct i915_vma *vma, *next;
 
-	spin_lock_irq(&gt->closed_lock);
-	list_for_each_entry_safe(vma, next, &gt->closed_vma, closed_link) {
+	/*
+	 * A very simple clock aging algorithm: we keep the user's closed
+	 * vma alive for a couple of timer ticks before destroying them.
+	 * This serves as a short-lived cache so that frequently reused VMA
+	 * are kept alive between frames and we skip having to rebind them.
+	 *
+	 * When closed, we insert the vma into age[0]. Upon completion of
+	 * a timer tick, it is moved to age[1]. At the start of each timer
+	 * tick, we destroy all the old vma that were accumulated into age[1]
+	 * and have not been reused. All destroyed vma have therefore been
+	 * unused for more than 1 tick (at least a second), and at most 2
+	 * ticks (we expect the average to be 1.5 ticks).
+	 */
+
+	spin_lock(&clock->lock);
+	list_for_each_entry_safe(vma, next, &clock->age[1], closed_link) {
 		struct drm_i915_gem_object *obj = vma->obj;
 		struct i915_address_space *vm = vma->vm;
 
@@ -1072,7 +1089,7 @@ void i915_vma_parked(struct intel_gt *gt)
 			obj = NULL;
 		}
 
-		spin_unlock_irq(&gt->closed_lock);
+		spin_unlock(&clock->lock);
 
 		if (obj) {
 			__i915_vma_put(vma);
@@ -1082,11 +1099,15 @@ void i915_vma_parked(struct intel_gt *gt)
 		i915_vm_close(vm);
 
 		/* Restart after dropping lock */
-		spin_lock_irq(&gt->closed_lock);
-		next = list_first_entry(&gt->closed_vma,
+		spin_lock(&clock->lock);
+		next = list_first_entry(&clock->age[1],
 					typeof(*next), closed_link);
 	}
-	spin_unlock_irq(&gt->closed_lock);
+	list_splice_tail_init(&clock->age[0], &clock->age[1]);
+	if (!list_empty(&clock->age[1]))
+		schedule_delayed_work(&clock->work,
+				      round_jiffies_up_relative(HZ));
+	spin_unlock(&clock->lock);
 }
 
 static void __i915_vma_iounmap(struct i915_vma *vma)
@@ -1277,6 +1298,23 @@ void i915_vma_make_purgeable(struct i915_vma *vma)
 	i915_gem_object_make_purgeable(vma->obj);
 }
 
+void i915_vma_clock_init_early(struct i915_vma_clock *clock)
+{
+	spin_lock_init(&clock->lock);
+	INIT_LIST_HEAD(&clock->age[0]);
+	INIT_LIST_HEAD(&clock->age[1]);
+
+	INIT_DELAYED_WORK(&clock->work, i915_vma_clock);
+}
+
+void i915_vma_clock_flush(struct i915_vma_clock *clock)
+{
+	do {
+		if (cancel_delayed_work_sync(&clock->work))
+			i915_vma_clock(&clock->work.work);
+	} while (delayed_work_pending(&clock->work));
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/i915_vma.c"
 #endif
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 5fffa3c58908..460a50a350d0 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -485,8 +485,6 @@ i915_vma_unpin_fence(struct i915_vma *vma)
 		__i915_vma_unpin_fence(vma);
 }
 
-void i915_vma_parked(struct intel_gt *gt);
-
 #define for_each_until(cond) if (cond) break; else
 
 /**
@@ -515,4 +513,13 @@ static inline int i915_vma_sync(struct i915_vma *vma)
 	return i915_active_wait(&vma->active);
 }
 
+struct i915_vma_clock {
+	spinlock_t lock;
+	struct list_head age[2];
+	struct delayed_work work;
+};
+
+void i915_vma_clock_init_early(struct i915_vma_clock *clock);
+void i915_vma_clock_flush(struct i915_vma_clock *clock);
+
 #endif
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 6/8] drm/i915: Only retire requests when eviction is allowed to block
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (3 preceding siblings ...)
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 5/8] drm/i915: Replace vma parking with a clock aging algorithm Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 7/8] drm/i915/gt: Drop mutex serialisation between context pin/unpin Chris Wilson
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

We want to keep the PIN_NONBLOCK search quick, avoiding evicting
recently active nodes. To that end, skip performing the more laborious
retirement prior to beginning the fast search.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_evict.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 0697bedebeef..5f8b6cc55195 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -124,7 +124,8 @@ i915_gem_evict_something(struct i915_address_space *vm,
 				    min_size, alignment, color,
 				    start, end, mode);
 
-	intel_gt_retire_requests(vm->gt);
+	if (!(flags & PIN_NONBLOCK))
+		intel_gt_retire_requests(vm->gt);
 
 search_again:
 	active = NULL;
@@ -270,7 +271,8 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
 	 * a stray pin (preventing eviction) that can only be resolved by
 	 * retiring.
 	 */
-	intel_gt_retire_requests(vm->gt);
+	if (!(flags & PIN_NONBLOCK))
+		intel_gt_retire_requests(vm->gt);
 
 	if (i915_vm_has_cache_coloring(vm)) {
 		/* Expand search to cover neighbouring guard pages (or lack!) */
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 7/8] drm/i915/gt: Drop mutex serialisation between context pin/unpin
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (4 preceding siblings ...)
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 6/8] drm/i915: Only retire requests when eviction is allowed to block Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 11:22   ` Maarten Lankhorst
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 8/8] drm/i915/gt: Use memset_p to clear the ports Chris Wilson
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

The last remaining reason for serialising the pin/unpin of the
intel_context is to ensure that our preallocated wakerefs are not
consumed too early (i.e. the unpin of the previous phase does not emit
the idle barriers for this phase before we even submit). All of the
other operations within the context pin/unpin are supposed to be
atomic...  Therefore, we can reduce the serialisation to being just on
the i915_active.preallocated_barriers itself and drop the nested
pin_mutex from intel_context_unpin().
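
The llist handling this relies on is worth spelling out: the barriers
are now chained up privately and only published to preallocated_barriers
with a single llist_add_batch() call, so a concurrent observer sees
either an empty list or the complete batch. A hedged sketch of that
pattern in isolation (names are illustrative, not the i915 code):

#include <linux/llist.h>

/* Build a private chain, then publish it atomically in one shot. */
static void publish_batch(struct llist_node **nodes, int count,
			  struct llist_head *head)
{
	struct llist_node *first = NULL, *last = NULL;
	int i;

	for (i = 0; i < count; i++) {
		nodes[i]->next = first;		/* prepend to the private chain */
		first = nodes[i];
		if (!last)
			last = nodes[i];	/* remember the tail of the batch */
	}

	if (first)
		llist_add_batch(first, last, head);
}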

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c | 18 +++++-------------
 drivers/gpu/drm/i915/i915_active.c      | 19 +++++++++++++++----
 2 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 4d0bc1478ccd..34ec958d400e 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -86,22 +86,14 @@ int __intel_context_do_pin(struct intel_context *ce)
 
 void intel_context_unpin(struct intel_context *ce)
 {
-	if (likely(atomic_add_unless(&ce->pin_count, -1, 1)))
+	if (!atomic_dec_and_test(&ce->pin_count))
 		return;
 
-	/* We may be called from inside intel_context_pin() to evict another */
-	intel_context_get(ce);
-	mutex_lock_nested(&ce->pin_mutex, SINGLE_DEPTH_NESTING);
-
-	if (likely(atomic_dec_and_test(&ce->pin_count))) {
-		CE_TRACE(ce, "retire\n");
+	CE_TRACE(ce, "unpin\n");
+	ce->ops->unpin(ce);
 
-		ce->ops->unpin(ce);
-
-		intel_context_active_release(ce);
-	}
-
-	mutex_unlock(&ce->pin_mutex);
+	intel_context_get(ce);
+	intel_context_active_release(ce);
 	intel_context_put(ce);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index cfe09964622b..f3da5c06f331 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -605,12 +605,15 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 					    struct intel_engine_cs *engine)
 {
 	intel_engine_mask_t tmp, mask = engine->mask;
+	struct llist_node *pos = NULL, *next;
 	struct intel_gt *gt = engine->gt;
-	struct llist_node *pos, *next;
 	int err;
 
 	GEM_BUG_ON(i915_active_is_idle(ref));
-	GEM_BUG_ON(!llist_empty(&ref->preallocated_barriers));
+
+	/* Wait until the previous preallocation is completed */
+	while (!llist_empty(&ref->preallocated_barriers))
+		cond_resched();
 
 	/*
 	 * Preallocate a node for each physical engine supporting the target
@@ -653,16 +656,24 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 		GEM_BUG_ON(rcu_access_pointer(node->base.fence) != ERR_PTR(-EAGAIN));
 
 		GEM_BUG_ON(barrier_to_engine(node) != engine);
-		llist_add(barrier_to_ll(node), &ref->preallocated_barriers);
+		next = barrier_to_ll(node);
+		next->next = pos;
+		if (!pos)
+			pos = next;
 		intel_engine_pm_get(engine);
 	}
 
+	GEM_BUG_ON(!llist_empty(&ref->preallocated_barriers));
+	llist_add_batch(next, pos, &ref->preallocated_barriers);
+
 	return 0;
 
 unwind:
-	llist_for_each_safe(pos, next, take_preallocated_barriers(ref)) {
+	while (pos) {
 		struct active_node *node = barrier_from_ll(pos);
 
+		pos = pos->next;
+
 		atomic_dec(&ref->count);
 		intel_engine_pm_put(barrier_to_engine(node));
 
-- 
2.25.0.rc1


* [Intel-gfx] [PATCH 8/8] drm/i915/gt: Use memset_p to clear the ports
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (5 preceding siblings ...)
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 7/8] drm/i915/gt: Drop mutex serialisation between context pin/unpin Chris Wilson
@ 2020-01-06 10:22 ` Chris Wilson
  2020-01-06 10:31 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Patchwork
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-01-06 10:22 UTC (permalink / raw)
  To: intel-gfx

Put memset_p to use to clear the arrays of pointers used for tracking
the ELSP.
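
For reference, memset_p() (declared in linux/string.h) stores a given
pointer value into each slot of a pointer array, so the call sites count
in pointers rather than bytes. A hedged sketch of the difference, using
an illustrative helper rather than the one added by the patch:

#include <linux/string.h>

/* Illustrative only: two equivalent ways to clear an array of pointers. */
static void clear_pointer_array(void **slots, int count)
{
	memset(slots, 0, count * sizeof(*slots));	/* size given in bytes */
	memset_p(slots, NULL, count);			/* size given in pointers */
}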

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 28c05e7a1510..29b82fc24b2a 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1744,6 +1744,11 @@ static void set_preempt_timeout(struct intel_engine_cs *engine)
 		     active_preempt_timeout(engine));
 }
 
+static inline void clear_ports(struct i915_request **ports, int count)
+{
+	memset_p((void **)ports, NULL, count);
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -2105,7 +2110,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			goto skip_submit;
 		}
 
-		memset(port + 1, 0, (last_port - port) * sizeof(*port));
+		clear_ports(port + 1, last_port - port);
 		execlists_submit_ports(engine);
 
 		set_preempt_timeout(engine);
@@ -2122,13 +2127,14 @@ cancel_port_requests(struct intel_engine_execlists * const execlists)
 
 	for (port = execlists->pending; *port; port++)
 		execlists_schedule_out(*port);
-	memset(execlists->pending, 0, sizeof(execlists->pending));
+	clear_ports(execlists->pending, ARRAY_SIZE(execlists->pending));
 
 	/* Mark the end of active before we overwrite *active */
 	for (port = xchg(&execlists->active, execlists->pending); *port; port++)
 		execlists_schedule_out(*port);
-	WRITE_ONCE(execlists->active,
-		   memset(execlists->inflight, 0, sizeof(execlists->inflight)));
+	clear_ports(execlists->inflight, ARRAY_SIZE(execlists->inflight));
+
+	WRITE_ONCE(execlists->active, execlists->inflight);
 }
 
 static inline void
-- 
2.25.0.rc1


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (6 preceding siblings ...)
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 8/8] drm/i915/gt: Use memset_p to clear the ports Chris Wilson
@ 2020-01-06 10:31 ` Patchwork
  2020-01-06 10:34 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Patchwork @ 2020-01-06 10:31 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
URL   : https://patchwork.freedesktop.org/series/71648/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
2b51023a3737 drm/i915/selftests: Fixup sparse __user annotation on local var
-:4: WARNING:EMAIL_SUBJECT: A patch subject line should describe the change not the tool that found it
#4: 
Subject: [PATCH] drm/i915/selftests: Fixup sparse __user annotation on local

total: 0 errors, 1 warnings, 0 checks, 10 lines checked
dc3de2f2809d drm/i915/selftests: Impose a timeout for request submission
6fc7325bd127 drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co
-:9: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 639f2f24895f ("drm/i915: Introduce new macros for tracing")'
#9: 
References: 639f2f24895f ("drm/i915: Introduce new macros for tracing")

total: 1 errors, 0 warnings, 0 checks, 77 lines checked
4b68dfe31714 drm/i915: Merge i915_request.flags with i915_request.fence.flags
66b1f7932954 drm/i915: Replace vma parking with a clock aging algorithm
-:253: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#253: FILE: drivers/gpu/drm/i915/i915_vma.h:517:
+	spinlock_t lock;

total: 0 errors, 0 warnings, 1 checks, 194 lines checked
a5dd173981d4 drm/i915: Only retire requests when eviction is allowed to blocked
7ef130338af2 drm/i915/gt: Drop mutex serialisation between context pin/unpin
ce3a2924e90c drm/i915/gt: Use memset_p to clear the ports


* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (7 preceding siblings ...)
  2020-01-06 10:31 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Patchwork
@ 2020-01-06 10:34 ` Patchwork
  2020-01-06 10:55 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2020-01-06 13:00 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  10 siblings, 0 replies; 13+ messages in thread
From: Patchwork @ 2020-01-06 10:34 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
URL   : https://patchwork.freedesktop.org/series/71648/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.6.0
Commit: drm/i915/selftests: Fixup sparse __user annotation on local var
-drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c:990:9: warning: dereference of noderef expression
-drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c:991:13: warning: dereference of noderef expression
+

Commit: drm/i915/selftests: Impose a timeout for request submission
Okay!

Commit: drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co
Okay!

Commit: drm/i915: Merge i915_request.flags with i915_request.fence.flags
Okay!

Commit: drm/i915: Replace vma parking with a clock aging algorithm
Okay!

Commit: drm/i915: Only retire requests when eviction is allowed to blocked
Okay!

Commit: drm/i915/gt: Drop mutex serialisation between context pin/unpin
Okay!

Commit: drm/i915/gt: Use memset_p to clear the ports
Okay!


* [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (8 preceding siblings ...)
  2020-01-06 10:34 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2020-01-06 10:55 ` Patchwork
  2020-01-06 13:00 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  10 siblings, 0 replies; 13+ messages in thread
From: Patchwork @ 2020-01-06 10:55 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
URL   : https://patchwork.freedesktop.org/series/71648/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_7680 -> Patchwork_15999
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/index.html

Known issues
------------

  Here are the changes found in Patchwork_15999 that come from known issues:

### IGT changes ###

#### Possible fixes ####

  * igt@gem_close_race@basic-threads:
    - fi-byt-j1900:       [TIMEOUT][1] ([i915#816]) -> [PASS][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/fi-byt-j1900/igt@gem_close_race@basic-threads.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/fi-byt-j1900/igt@gem_close_race@basic-threads.html

  * igt@i915_selftest@live_active:
    - fi-icl-y:           [DMESG-FAIL][3] ([i915#765]) -> [PASS][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/fi-icl-y/igt@i915_selftest@live_active.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/fi-icl-y/igt@i915_selftest@live_active.html

  * igt@i915_selftest@live_blt:
    - fi-hsw-4770r:       [DMESG-FAIL][5] ([i915#553] / [i915#725]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/fi-hsw-4770r/igt@i915_selftest@live_blt.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/fi-hsw-4770r/igt@i915_selftest@live_blt.html
    - fi-ivb-3770:        [DMESG-FAIL][7] ([i915#725]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/fi-ivb-3770/igt@i915_selftest@live_blt.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/fi-ivb-3770/igt@i915_selftest@live_blt.html

  
#### Warnings ####

  * igt@i915_selftest@live_blt:
    - fi-hsw-4770:        [DMESG-FAIL][9] ([i915#553] / [i915#725]) -> [DMESG-FAIL][10] ([i915#770])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/fi-hsw-4770/igt@i915_selftest@live_blt.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/fi-hsw-4770/igt@i915_selftest@live_blt.html

  
  [i915#553]: https://gitlab.freedesktop.org/drm/intel/issues/553
  [i915#725]: https://gitlab.freedesktop.org/drm/intel/issues/725
  [i915#765]: https://gitlab.freedesktop.org/drm/intel/issues/765
  [i915#770]: https://gitlab.freedesktop.org/drm/intel/issues/770
  [i915#816]: https://gitlab.freedesktop.org/drm/intel/issues/816


Participating hosts (43 -> 40)
------------------------------

  Additional (6): fi-bdw-gvtdvm fi-glk-dsi fi-ilk-650 fi-cfl-8109u fi-kbl-8809g fi-skl-6600u 
  Missing    (9): fi-byt-squawks fi-bsw-cyan fi-kbl-7500u fi-ctg-p8600 fi-whl-u fi-bsw-kefka fi-skl-lmem fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_7680 -> Patchwork_15999

  CI-20190529: 20190529
  CI_DRM_7680: b70a5ffaee3192a3d21296a6d68f4a1b4f4cecd5 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5357: a555a4b98f90dab655d24bb3d07e9291a8b8dac8 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_15999: ce3a2924e90c45cf9cca6d2b80e29f4bdbf4ed98 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

ce3a2924e90c drm/i915/gt: Use memset_p to clear the ports
7ef130338af2 drm/i915/gt: Drop mutex serialisation between context pin/unpin
a5dd173981d4 drm/i915: Only retire requests when eviction is allowed to blocked
66b1f7932954 drm/i915: Replace vma parking with a clock aging algorithm
4b68dfe31714 drm/i915: Merge i915_request.flags with i915_request.fence.flags
6fc7325bd127 drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co
dc3de2f2809d drm/i915/selftests: Impose a timeout for request submission
2b51023a3737 drm/i915/selftests: Fixup sparse __user annotation on local var

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/index.html

* Re: [Intel-gfx] [PATCH 7/8] drm/i915/gt: Drop mutex serialisation between context pin/unpin
  2020-01-06 10:22 ` [Intel-gfx] [PATCH 7/8] drm/i915/gt: Drop mutex serialisation between context pin/unpin Chris Wilson
@ 2020-01-06 11:22   ` Maarten Lankhorst
  0 siblings, 0 replies; 13+ messages in thread
From: Maarten Lankhorst @ 2020-01-06 11:22 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Op 06-01-2020 om 11:22 schreef Chris Wilson:
> The last remaining reason for serialising the pin/unpin of the
> intel_context is to ensure that our preallocated wakerefs are not
> consumed too early (i.e. the unpin of the previous phase does not emit
> the idle barriers for this phase before we even submit). All of the
> other operations within the context pin/unpin are supposed to be
> atomic...  Therefore, we can reduce the serialisation to being just on
> the i915_active.preallocated_barriers itself and drop the nested
> pin_mutex from intel_context_unpin().
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_context.c | 18 +++++-------------
>  drivers/gpu/drm/i915/i915_active.c      | 19 +++++++++++++++----
>  2 files changed, 20 insertions(+), 17 deletions(-)

For whole series, except 5 and 6:

Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

For 5 and 6, I think they look sane but I'm not the right person to review. :)

> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> index 4d0bc1478ccd..34ec958d400e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> @@ -86,22 +86,14 @@ int __intel_context_do_pin(struct intel_context *ce)
>  
>  void intel_context_unpin(struct intel_context *ce)
>  {
> -	if (likely(atomic_add_unless(&ce->pin_count, -1, 1)))
> +	if (!atomic_dec_and_test(&ce->pin_count))
>  		return;
>  
> -	/* We may be called from inside intel_context_pin() to evict another */
> -	intel_context_get(ce);
> -	mutex_lock_nested(&ce->pin_mutex, SINGLE_DEPTH_NESTING);
> -
> -	if (likely(atomic_dec_and_test(&ce->pin_count))) {
> -		CE_TRACE(ce, "retire\n");
> +	CE_TRACE(ce, "unpin\n");
> +	ce->ops->unpin(ce);
>  
> -		ce->ops->unpin(ce);
> -
> -		intel_context_active_release(ce);
> -	}
> -
> -	mutex_unlock(&ce->pin_mutex);
> +	intel_context_get(ce);
> +	intel_context_active_release(ce);
>  	intel_context_put(ce);
>  }
>  
Might want to put a comment here why intel_context_get is needed?
> diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
> index cfe09964622b..f3da5c06f331 100644
> --- a/drivers/gpu/drm/i915/i915_active.c
> +++ b/drivers/gpu/drm/i915/i915_active.c
> @@ -605,12 +605,15 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
>  					    struct intel_engine_cs *engine)
>  {
>  	intel_engine_mask_t tmp, mask = engine->mask;
> +	struct llist_node *pos = NULL, *next;
>  	struct intel_gt *gt = engine->gt;
> -	struct llist_node *pos, *next;
>  	int err;
>  
>  	GEM_BUG_ON(i915_active_is_idle(ref));
> -	GEM_BUG_ON(!llist_empty(&ref->preallocated_barriers));
> +
> +	/* Wait until the previous preallocation is completed */
> +	while (!llist_empty(&ref->preallocated_barriers))
> +		cond_resched();
>  
>  	/*
>  	 * Preallocate a node for each physical engine supporting the target
> @@ -653,16 +656,24 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
>  		GEM_BUG_ON(rcu_access_pointer(node->base.fence) != ERR_PTR(-EAGAIN));
>  
>  		GEM_BUG_ON(barrier_to_engine(node) != engine);
> -		llist_add(barrier_to_ll(node), &ref->preallocated_barriers);
> +		next = barrier_to_ll(node);
> +		next->next = pos;
> +		if (!pos)
> +			pos = next;
>  		intel_engine_pm_get(engine);
>  	}
>  
> +	GEM_BUG_ON(!llist_empty(&ref->preallocated_barriers));
> +	llist_add_batch(next, pos, &ref->preallocated_barriers);
> +
>  	return 0;
>  
>  unwind:
> -	llist_for_each_safe(pos, next, take_preallocated_barriers(ref)) {
> +	while (pos) {
>  		struct active_node *node = barrier_from_ll(pos);
>  
> +		pos = pos->next;
> +
>  		atomic_dec(&ref->count);
>  		intel_engine_pm_put(barrier_to_engine(node));
>  



* [Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
  2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
                   ` (9 preceding siblings ...)
  2020-01-06 10:55 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2020-01-06 13:00 ` Patchwork
  10 siblings, 0 replies; 13+ messages in thread
From: Patchwork @ 2020-01-06 13:00 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var
URL   : https://patchwork.freedesktop.org/series/71648/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_7680_full -> Patchwork_15999_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_15999_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_15999_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_15999_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_exec_schedule@wide-bsd:
    - shard-skl:          [PASS][1] -> [FAIL][2] +3 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl9/igt@gem_exec_schedule@wide-bsd.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl5/igt@gem_exec_schedule@wide-bsd.html
    - shard-glk:          [PASS][3] -> [FAIL][4] +3 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-glk5/igt@gem_exec_schedule@wide-bsd.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-glk5/igt@gem_exec_schedule@wide-bsd.html
    - shard-iclb:         [PASS][5] -> [FAIL][6] +5 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb7/igt@gem_exec_schedule@wide-bsd.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb5/igt@gem_exec_schedule@wide-bsd.html
    - shard-apl:          [PASS][7] -> [FAIL][8] +3 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl7/igt@gem_exec_schedule@wide-bsd.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl1/igt@gem_exec_schedule@wide-bsd.html

  * igt@gem_exec_schedule@wide-render:
    - shard-kbl:          [PASS][9] -> [FAIL][10] +4 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl2/igt@gem_exec_schedule@wide-render.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl4/igt@gem_exec_schedule@wide-render.html

  * igt@gem_exec_schedule@wide-vebox:
    - shard-tglb:         [PASS][11] -> [FAIL][12] +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb8/igt@gem_exec_schedule@wide-vebox.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb2/igt@gem_exec_schedule@wide-vebox.html

  * igt@i915_selftest@mock_gtt:
    - shard-skl:          [PASS][13] -> [INCOMPLETE][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl8/igt@i915_selftest@mock_gtt.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl2/igt@i915_selftest@mock_gtt.html
    - shard-tglb:         [PASS][15] -> [INCOMPLETE][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb1/igt@i915_selftest@mock_gtt.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb2/igt@i915_selftest@mock_gtt.html

  * igt@i915_selftest@mock_timelines:
    - shard-glk:          [PASS][17] -> [DMESG-WARN][18]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-glk6/igt@i915_selftest@mock_timelines.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-glk7/igt@i915_selftest@mock_timelines.html
    - shard-hsw:          [PASS][19] -> [DMESG-WARN][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-hsw5/igt@i915_selftest@mock_timelines.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-hsw5/igt@i915_selftest@mock_timelines.html
    - shard-kbl:          [PASS][21] -> [DMESG-WARN][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl7/igt@i915_selftest@mock_timelines.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl2/igt@i915_selftest@mock_timelines.html
    - shard-iclb:         [PASS][23] -> [DMESG-WARN][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb8/igt@i915_selftest@mock_timelines.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb4/igt@i915_selftest@mock_timelines.html
    - shard-snb:          [PASS][25] -> [DMESG-WARN][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-snb6/igt@i915_selftest@mock_timelines.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-snb7/igt@i915_selftest@mock_timelines.html
    - shard-skl:          [PASS][27] -> [DMESG-WARN][28]
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl8/igt@i915_selftest@mock_timelines.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl2/igt@i915_selftest@mock_timelines.html
    - shard-tglb:         [PASS][29] -> [DMESG-WARN][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb1/igt@i915_selftest@mock_timelines.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb2/igt@i915_selftest@mock_timelines.html
    - shard-apl:          [PASS][31] -> [DMESG-WARN][32]
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl3/igt@i915_selftest@mock_timelines.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl4/igt@i915_selftest@mock_timelines.html

  * igt@runner@aborted:
    - shard-hsw:          NOTRUN -> [FAIL][33]
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-hsw5/igt@runner@aborted.html
    - shard-kbl:          NOTRUN -> [FAIL][34]
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl2/igt@runner@aborted.html
    - shard-apl:          NOTRUN -> [FAIL][35]
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl4/igt@runner@aborted.html
    - shard-snb:          NOTRUN -> [FAIL][36]
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-snb7/igt@runner@aborted.html

  
#### Warnings ####

  * igt@gem_exec_schedule@deep-bsd1:
    - shard-iclb:         [SKIP][37] ([fdo#109276]) -> [FAIL][38]
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb3/igt@gem_exec_schedule@deep-bsd1.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb4/igt@gem_exec_schedule@deep-bsd1.html

  
New tests
---------

  New tests have been introduced between CI_DRM_7680_full and Patchwork_15999_full:

### New Piglit tests (3) ###

  * spec@arb_vertex_attrib_64bit@execution@vs_in@vs-input-double_dmat4x2-position-float_vec3:
    - Statuses : 1 fail(s)
    - Exec time: [0.14] s

  * spec@arb_vertex_attrib_64bit@execution@vs_in@vs-input-int_ivec3_array3-double_dvec4-position:
    - Statuses : 1 fail(s)
    - Exec time: [0.16] s

  * spec@arb_vertex_attrib_64bit@execution@vs_in@vs-input-position-double_dmat2x3-float_mat2_array3:
    - Statuses : 1 fail(s)
    - Exec time: [0.19] s

  

Known issues
------------

  Here are the changes found in Patchwork_15999_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_persistence@bcs0-mixed-process:
    - shard-skl:          [PASS][39] -> [FAIL][40] ([i915#679])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl4/igt@gem_ctx_persistence@bcs0-mixed-process.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl8/igt@gem_ctx_persistence@bcs0-mixed-process.html

  * igt@gem_ctx_persistence@vcs1-queued:
    - shard-iclb:         [PASS][41] -> [SKIP][42] ([fdo#109276] / [fdo#112080]) +4 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb4/igt@gem_ctx_persistence@vcs1-queued.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb6/igt@gem_ctx_persistence@vcs1-queued.html

  * igt@gem_ctx_shared@exec-single-timeline-bsd:
    - shard-iclb:         [PASS][43] -> [SKIP][44] ([fdo#110841])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb5/igt@gem_ctx_shared@exec-single-timeline-bsd.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb1/igt@gem_ctx_shared@exec-single-timeline-bsd.html

  * igt@gem_ctx_shared@q-smoketest-bsd1:
    - shard-tglb:         [PASS][45] -> [INCOMPLETE][46] ([fdo#111735])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb8/igt@gem_ctx_shared@q-smoketest-bsd1.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb9/igt@gem_ctx_shared@q-smoketest-bsd1.html

  * igt@gem_exec_balancer@smoke:
    - shard-iclb:         [PASS][47] -> [SKIP][48] ([fdo#110854])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb4/igt@gem_exec_balancer@smoke.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb8/igt@gem_exec_balancer@smoke.html

  * igt@gem_exec_schedule@deep-blt:
    - shard-apl:          [PASS][49] -> [FAIL][50] ([i915#412]) +2 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl1/igt@gem_exec_schedule@deep-blt.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl4/igt@gem_exec_schedule@deep-blt.html

  * igt@gem_exec_schedule@deep-bsd:
    - shard-skl:          [PASS][51] -> [FAIL][52] ([i915#412]) +1 similar issue
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl5/igt@gem_exec_schedule@deep-bsd.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl9/igt@gem_exec_schedule@deep-bsd.html

  * igt@gem_exec_schedule@deep-bsd1:
    - shard-tglb:         [PASS][53] -> [FAIL][54] ([i915#412]) +1 similar issue
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb2/igt@gem_exec_schedule@deep-bsd1.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb3/igt@gem_exec_schedule@deep-bsd1.html

  * igt@gem_exec_schedule@deep-render:
    - shard-kbl:          [PASS][55] -> [FAIL][56] ([i915#412]) +2 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl1/igt@gem_exec_schedule@deep-render.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl2/igt@gem_exec_schedule@deep-render.html

  * igt@gem_exec_schedule@deep-vebox:
    - shard-glk:          [PASS][57] -> [FAIL][58] ([i915#412]) +1 similar issue
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-glk1/igt@gem_exec_schedule@deep-vebox.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-glk2/igt@gem_exec_schedule@deep-vebox.html

  * igt@gem_exec_schedule@in-order-bsd:
    - shard-iclb:         [PASS][59] -> [SKIP][60] ([fdo#112146]) +3 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb7/igt@gem_exec_schedule@in-order-bsd.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb4/igt@gem_exec_schedule@in-order-bsd.html

  * igt@gem_exec_schedule@preempt-contexts-bsd2:
    - shard-iclb:         [PASS][61] -> [SKIP][62] ([fdo#109276]) +15 similar issues
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb2/igt@gem_exec_schedule@preempt-contexts-bsd2.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb7/igt@gem_exec_schedule@preempt-contexts-bsd2.html

  * igt@gem_exec_schedule@preempt-queue-chain-bsd1:
    - shard-tglb:         [PASS][63] -> [INCOMPLETE][64] ([fdo#111606] / [fdo#111677])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb9/igt@gem_exec_schedule@preempt-queue-chain-bsd1.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb5/igt@gem_exec_schedule@preempt-queue-chain-bsd1.html

  * igt@gem_persistent_relocs@forked-interruptible-thrashing:
    - shard-tglb:         [PASS][65] -> [TIMEOUT][66] ([fdo#112126] / [i915#530])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb8/igt@gem_persistent_relocs@forked-interruptible-thrashing.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb5/igt@gem_persistent_relocs@forked-interruptible-thrashing.html

  * igt@gem_wait@write-busy-rcs0:
    - shard-skl:          [PASS][67] -> [DMESG-WARN][68] ([i915#109])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl6/igt@gem_wait@write-busy-rcs0.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl7/igt@gem_wait@write-busy-rcs0.html

  * igt@i915_pm_dc@dc5-dpms:
    - shard-iclb:         [PASS][69] -> [FAIL][70] ([i915#447])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb6/igt@i915_pm_dc@dc5-dpms.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb3/igt@i915_pm_dc@dc5-dpms.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [PASS][71] -> [FAIL][72] ([i915#454])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb7/igt@i915_pm_dc@dc6-psr.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb4/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_selftest@mock_gtt:
    - shard-apl:          [PASS][73] -> [INCOMPLETE][74] ([fdo#103927])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl3/igt@i915_selftest@mock_gtt.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl4/igt@i915_selftest@mock_gtt.html
    - shard-glk:          [PASS][75] -> [INCOMPLETE][76] ([i915#58] / [k.org#198133])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-glk6/igt@i915_selftest@mock_gtt.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-glk7/igt@i915_selftest@mock_gtt.html
    - shard-iclb:         [PASS][77] -> [INCOMPLETE][78] ([i915#140])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb8/igt@i915_selftest@mock_gtt.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb4/igt@i915_selftest@mock_gtt.html
    - shard-hsw:          [PASS][79] -> [INCOMPLETE][80] ([i915#61])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-hsw5/igt@i915_selftest@mock_gtt.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-hsw5/igt@i915_selftest@mock_gtt.html
    - shard-kbl:          [PASS][81] -> [INCOMPLETE][82] ([fdo#103665])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl7/igt@i915_selftest@mock_gtt.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl2/igt@i915_selftest@mock_gtt.html
    - shard-snb:          [PASS][83] -> [INCOMPLETE][84] ([i915#82])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-snb6/igt@i915_selftest@mock_gtt.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-snb7/igt@i915_selftest@mock_gtt.html

  * igt@kms_color@pipe-a-ctm-green-to-red:
    - shard-skl:          [PASS][85] -> [FAIL][86] ([i915#129])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl3/igt@kms_color@pipe-a-ctm-green-to-red.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl4/igt@kms_color@pipe-a-ctm-green-to-red.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-render-ytiled:
    - shard-skl:          [PASS][87] -> [FAIL][88] ([i915#52] / [i915#54])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl3/igt@kms_draw_crc@draw-method-xrgb2101010-render-ytiled.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl4/igt@kms_draw_crc@draw-method-xrgb2101010-render-ytiled.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible:
    - shard-glk:          [PASS][89] -> [FAIL][90] ([i915#46])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-glk6/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-glk1/igt@kms_flip@flip-vs-expired-vblank-interruptible.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-mmap-wc:
    - shard-skl:          [PASS][91] -> [FAIL][92] ([i915#49])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl3/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-mmap-wc.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl4/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes:
    - shard-apl:          [PASS][93] -> [DMESG-WARN][94] ([i915#180]) +2 similar issues
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl3/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl4/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html

  * igt@kms_plane@plane-position-covered-pipe-a-planes:
    - shard-skl:          [PASS][95] -> [FAIL][96] ([i915#247])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl3/igt@kms_plane@plane-position-covered-pipe-a-planes.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl4/igt@kms_plane@plane-position-covered-pipe-a-planes.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
    - shard-skl:          [PASS][97] -> [FAIL][98] ([fdo#108145]) +2 similar issues
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl2/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl3/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [PASS][99] -> [FAIL][100] ([fdo#108145] / [i915#265])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl4/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_psr@psr2_cursor_blt:
    - shard-iclb:         [PASS][101] -> [SKIP][102] ([fdo#109441])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb2/igt@kms_psr@psr2_cursor_blt.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb7/igt@kms_psr@psr2_cursor_blt.html

  * igt@kms_setmode@basic:
    - shard-skl:          [PASS][103] -> [FAIL][104] ([i915#31])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl5/igt@kms_setmode@basic.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl9/igt@kms_setmode@basic.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - shard-kbl:          [PASS][105] -> [DMESG-WARN][106] ([i915#180]) +6 similar issues
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl4/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl7/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  * igt@perf_pmu@busy-no-semaphores-vcs1:
    - shard-iclb:         [PASS][107] -> [SKIP][108] ([fdo#112080]) +13 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb1/igt@perf_pmu@busy-no-semaphores-vcs1.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb8/igt@perf_pmu@busy-no-semaphores-vcs1.html

  * igt@perf_pmu@enable-race-vcs1:
    - shard-tglb:         [PASS][109] -> [INCOMPLETE][110] ([i915#435] / [i915#923])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb1/igt@perf_pmu@enable-race-vcs1.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb1/igt@perf_pmu@enable-race-vcs1.html

  * igt@prime_mmap_coherency@ioctl-errors:
    - shard-hsw:          [PASS][111] -> [FAIL][112] ([i915#831])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-hsw6/igt@prime_mmap_coherency@ioctl-errors.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-hsw7/igt@prime_mmap_coherency@ioctl-errors.html

  
#### Possible fixes ####

  * igt@gem_ctx_isolation@rcs0-s3:
    - shard-kbl:          [DMESG-WARN][113] ([i915#180]) -> [PASS][114] +7 similar issues
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl6/igt@gem_ctx_isolation@rcs0-s3.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl6/igt@gem_ctx_isolation@rcs0-s3.html

  * igt@gem_ctx_persistence@rcs0-mixed-process:
    - shard-apl:          [FAIL][115] ([i915#679]) -> [PASS][116]
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl4/igt@gem_ctx_persistence@rcs0-mixed-process.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl6/igt@gem_ctx_persistence@rcs0-mixed-process.html

  * igt@gem_ctx_persistence@vcs1-cleanup:
    - shard-iclb:         [SKIP][117] ([fdo#109276] / [fdo#112080]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb3/igt@gem_ctx_persistence@vcs1-cleanup.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb2/igt@gem_ctx_persistence@vcs1-cleanup.html

  * igt@gem_ctx_shared@q-smoketest-vebox:
    - shard-tglb:         [INCOMPLETE][119] ([fdo#111735]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb9/igt@gem_ctx_shared@q-smoketest-vebox.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb1/igt@gem_ctx_shared@q-smoketest-vebox.html

  * igt@gem_eio@reset-stress:
    - shard-snb:          [FAIL][121] ([i915#232]) -> [PASS][122]
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-snb6/igt@gem_eio@reset-stress.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-snb7/igt@gem_eio@reset-stress.html

  * igt@gem_exec_basic@basic-vcs1:
    - shard-iclb:         [SKIP][123] ([fdo#112080]) -> [PASS][124] +7 similar issues
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb3/igt@gem_exec_basic@basic-vcs1.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb2/igt@gem_exec_basic@basic-vcs1.html

  * igt@gem_exec_parallel@fds:
    - shard-tglb:         [INCOMPLETE][125] ([i915#470]) -> [PASS][126]
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb3/igt@gem_exec_parallel@fds.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb5/igt@gem_exec_parallel@fds.html

  * igt@gem_exec_schedule@independent-bsd2:
    - shard-iclb:         [SKIP][127] ([fdo#109276]) -> [PASS][128] +20 similar issues
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb5/igt@gem_exec_schedule@independent-bsd2.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb1/igt@gem_exec_schedule@independent-bsd2.html

  * {igt@gem_exec_schedule@pi-common-bsd}:
    - shard-iclb:         [SKIP][129] ([i915#677]) -> [PASS][130]
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb2/igt@gem_exec_schedule@pi-common-bsd.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb7/igt@gem_exec_schedule@pi-common-bsd.html

  * igt@gem_exec_schedule@preempt-other-chain-bsd:
    - shard-iclb:         [SKIP][131] ([fdo#112146]) -> [PASS][132] +7 similar issues
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb4/igt@gem_exec_schedule@preempt-other-chain-bsd.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb6/igt@gem_exec_schedule@preempt-other-chain-bsd.html

  * igt@gem_exec_schedule@preempt-queue-bsd2:
    - shard-tglb:         [INCOMPLETE][133] ([fdo#111606] / [fdo#111677]) -> [PASS][134]
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb3/igt@gem_exec_schedule@preempt-queue-bsd2.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb7/igt@gem_exec_schedule@preempt-queue-bsd2.html

  * igt@gem_exec_schedule@preempt-queue-chain-vebox:
    - shard-tglb:         [INCOMPLETE][135] ([fdo#111677]) -> [PASS][136]
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb6/igt@gem_exec_schedule@preempt-queue-chain-vebox.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb7/igt@gem_exec_schedule@preempt-queue-chain-vebox.html

  * igt@gem_softpin@noreloc-s3:
    - shard-apl:          [DMESG-WARN][137] ([i915#180]) -> [PASS][138]
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl1/igt@gem_softpin@noreloc-s3.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl4/igt@gem_softpin@noreloc-s3.html

  * igt@kms_color@pipe-a-ctm-0-75:
    - shard-skl:          [DMESG-WARN][139] ([i915#109]) -> [PASS][140] +1 similar issue
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl1/igt@kms_color@pipe-a-ctm-0-75.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl5/igt@kms_color@pipe-a-ctm-0-75.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic:
    - shard-skl:          [FAIL][141] ([IGT#5]) -> [PASS][142]
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-skl2/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-skl3/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt:
    - shard-tglb:         [FAIL][143] ([i915#49]) -> [PASS][144] +3 similar issues
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb1/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_psr@psr2_sprite_blt:
    - shard-iclb:         [SKIP][145] ([fdo#109441]) -> [PASS][146]
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb3/igt@kms_psr@psr2_sprite_blt.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html

  * igt@kms_setmode@basic:
    - shard-apl:          [FAIL][147] ([i915#31]) -> [PASS][148]
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-apl2/igt@kms_setmode@basic.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-apl1/igt@kms_setmode@basic.html

  
#### Warnings ####

  * igt@gem_ctx_isolation@vcs1-nonpriv:
    - shard-iclb:         [SKIP][149] ([fdo#109276] / [fdo#112080]) -> [FAIL][150] ([IGT#28])
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-iclb5/igt@gem_ctx_isolation@vcs1-nonpriv.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-iclb1/igt@gem_ctx_isolation@vcs1-nonpriv.html

  * igt@gem_ctx_isolation@vcs2-dirty-create:
    - shard-tglb:         [SKIP][151] ([fdo#112080]) -> [SKIP][152] ([fdo#111912] / [fdo#112080])
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb9/igt@gem_ctx_isolation@vcs2-dirty-create.html
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb3/igt@gem_ctx_isolation@vcs2-dirty-create.html

  * igt@gem_ctx_isolation@vcs2-reset:
    - shard-tglb:         [SKIP][153] ([fdo#111912] / [fdo#112080]) -> [SKIP][154] ([fdo#112080])
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb3/igt@gem_ctx_isolation@vcs2-reset.html
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb9/igt@gem_ctx_isolation@vcs2-reset.html

  * igt@gem_tiled_blits@normal:
    - shard-hsw:          [FAIL][155] ([i915#832]) -> [FAIL][156] ([i915#818])
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-hsw5/igt@gem_tiled_blits@normal.html
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-hsw7/igt@gem_tiled_blits@normal.html

  * igt@kms_atomic_transition@6x-modeset-transitions:
    - shard-tglb:         [SKIP][157] ([fdo#112021]) -> [SKIP][158] ([fdo#112016] / [fdo#112021])
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-tglb9/igt@kms_atomic_transition@6x-modeset-transitions.html
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-tglb1/igt@kms_atomic_transition@6x-modeset-transitions.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-kbl:          [DMESG-FAIL][159] ([i915#180] / [i915#54]) -> [FAIL][160] ([i915#54])
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7680/shard-kbl6/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/shard-kbl2/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).
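
  For illustration only, here is a minimal C sketch of how such a roll-up
  might skip suppressed entries when deciding the overall status. It is not
  the actual CI tooling: the enum, struct, field names and the sample data
  are assumptions made purely for this example, and the precedence
  FAILURE > WARNING > SUCCESS is only inferred from the wording of the note
  above.

      #include <stdio.h>
      #include <stdbool.h>

      enum status { SUCCESS, WARNING, FAILURE };

      struct delta {
              const char *test;       /* changed test, e.g. "igt@kms_setmode@basic" */
              enum status status;     /* status of this changed result */
              bool suppressed;        /* rendered as {name} in the report */
      };

      /* Worst non-suppressed status wins; suppressed entries are skipped. */
      static enum status overall_status(const struct delta *d, int n)
      {
              enum status overall = SUCCESS;

              for (int i = 0; i < n; i++) {
                      if (d[i].suppressed)
                              continue;
                      if (d[i].status > overall)
                              overall = d[i].status;
              }
              return overall;
      }

      int main(void)
      {
              static const char * const names[] = { "SUCCESS", "WARNING", "FAILURE" };
              /* Statuses here are invented; the first name is the one {...} entry above. */
              const struct delta deltas[] = {
                      { "igt@gem_exec_schedule@pi-common-bsd", FAILURE, true  },
                      { "igt@gem_tiled_blits@normal",          WARNING, false },
              };

              /* Prints WARNING: the suppressed failure does not escalate it. */
              printf("%s\n", names[overall_status(deltas, 2)]);
              return 0;
      }

  In the tables above, {igt@gem_exec_schedule@pi-common-bsd} is the only
  suppressed entry, so under this reading its change would not be counted
  against the overall result.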

  [IGT#28]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/28
  [IGT#5]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/5
  [fdo#103665]: https://bugs.freedesktop.org/show_bug.cgi?id=103665
  [fdo#103927]: https://bugs.freedesktop.org/show_bug.cgi?id=103927
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110841]: https://bugs.freedesktop.org/show_bug.cgi?id=110841
  [fdo#110854]: https://bugs.freedesktop.org/show_bug.cgi?id=110854
  [fdo#111606]: https://bugs.freedesktop.org/show_bug.cgi?id=111606
  [fdo#111677]: https://bugs.freedesktop.org/show_bug.cgi?id=111677
  [fdo#111735]: https://bugs.freedesktop.org/show_bug.cgi?id=111735
  [fdo#111912]: https://bugs.freedesktop.org/show_bug.cgi?id=111912
  [fdo#112016]: https://bugs.freedesktop.org/show_bug.cgi?id=112016
  [fdo#112021]: https://bugs.freedesktop.org/show_bug.cgi?id=112021
  [fdo#11

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15999/index.html

end of thread, other threads:[~2020-01-06 13:00 UTC | newest]

Thread overview: 13+ messages
2020-01-06 10:22 [Intel-gfx] [PATCH 1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Chris Wilson
2020-01-06 10:22 ` [Intel-gfx] [PATCH 2/8] drm/i915/selftests: Impose a timeout for request submission Chris Wilson
2020-01-06 10:22 ` [Intel-gfx] [PATCH 3/8] drm/i915/gt: Convert the final GEM_TRACE to GT_TRACE and co Chris Wilson
2020-01-06 10:22 ` [Intel-gfx] [PATCH 4/8] drm/i915: Merge i915_request.flags with i915_request.fence.flags Chris Wilson
2020-01-06 10:22 ` [Intel-gfx] [PATCH 5/8] drm/i915: Replace vma parking with a clock aging algorithm Chris Wilson
2020-01-06 10:22 ` [Intel-gfx] [PATCH 6/8] drm/i915: Only retire requests when eviction is allowed to blocked Chris Wilson
2020-01-06 10:22 ` [Intel-gfx] [PATCH 7/8] drm/i915/gt: Drop mutex serialisation between context pin/unpin Chris Wilson
2020-01-06 11:22   ` Maarten Lankhorst
2020-01-06 10:22 ` [Intel-gfx] [PATCH 8/8] drm/i915/gt: Use memset_p to clear the ports Chris Wilson
2020-01-06 10:31 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/8] drm/i915/selftests: Fixup sparse __user annotation on local var Patchwork
2020-01-06 10:34 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-01-06 10:55 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-01-06 13:00 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
