intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule
@ 2020-05-19  6:31 Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection Chris Wilson
                   ` (15 more replies)
  0 siblings, 16 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

We recorded the execlists->queue_priority_hint update for the inflight
request without kicking the tasklet. The next submitted request then
failed to be scheduled, as it had a lower priority than the hint, leaving
the HW running with only the inflight request.
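
For illustration, a condensed sketch of the resulting kick_submission()
flow (simplified from the diff below; only the hint/kick logic is shown):

  static void kick_submission(struct intel_engine_cs *engine,
                              const struct i915_request *rq, int prio)
  {
          /* ... */
          if (!inflight)  /* nothing to reschedule against */
                  goto unlock;

          /*
           * If we are already the executing context, leave the hint
           * untouched: bumping it without kicking the tasklet is what
           * starved the next, lower-priority submission.
           */
          if (inflight->context == rq->context)
                  goto unlock;

          engine->execlists.queue_priority_hint = prio;
          if (need_preempt(prio, rq_prio(inflight)))
                  tasklet_hi_schedule(&engine->execlists.tasklet);
  unlock:
          rcu_read_unlock();
  }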

Fixes: 6cebcf746f3f ("drm/i915: Tweak scheduler's kick_submission()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_scheduler.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index f4ea318781f0..cbb880b10c65 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -209,14 +209,6 @@ static void kick_submission(struct intel_engine_cs *engine,
 	if (!inflight)
 		goto unlock;
 
-	ENGINE_TRACE(engine,
-		     "bumping queue-priority-hint:%d for rq:%llx:%lld, inflight:%llx:%lld prio %d\n",
-		     prio,
-		     rq->fence.context, rq->fence.seqno,
-		     inflight->fence.context, inflight->fence.seqno,
-		     inflight->sched.attr.priority);
-	engine->execlists.queue_priority_hint = prio;
-
 	/*
 	 * If we are already the currently executing context, don't
 	 * bother evaluating if we should preempt ourselves.
@@ -224,6 +216,14 @@ static void kick_submission(struct intel_engine_cs *engine,
 	if (inflight->context == rq->context)
 		goto unlock;
 
+	ENGINE_TRACE(engine,
+		     "bumping queue-priority-hint:%d for rq:%llx:%lld, inflight:%llx:%lld prio %d\n",
+		     prio,
+		     rq->fence.context, rq->fence.seqno,
+		     inflight->fence.context, inflight->fence.seqno,
+		     inflight->sched.attr.priority);
+
+	engine->execlists.queue_priority_hint = prio;
 	if (need_preempt(prio, rq_prio(inflight)))
 		tasklet_hi_schedule(&engine->execlists.tasklet);
 
-- 
2.20.1


* [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19 13:58   ` Mika Kuoppala
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat Chris Wilson
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Check for integer overflow in the priority chain itself, rather than
testing against a type-constricted max-priority limit.
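
A minimal standalone illustration of the idea, assuming wrapping
two's-complement arithmetic (which the kernel guarantees via
-fno-strict-overflow; this is not the selftest code itself):

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          int prio = INT_MAX - 1;
          /* bump via unsigned so the wraparound is well defined here too */
          int bumped = (int)((unsigned int)prio + 2);

          /* a priority that is only ever bumped upwards, yet now compares
           * lower than its predecessor, has overflowed the chain */
          if (bumped < prio)
                  printf("priority chain overflowed\n");
          return 0;
  }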

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 94854a467e66..3e042fa4b94b 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -2735,12 +2735,12 @@ static int live_preempt_gang(void *arg)
 			/* Submit each spinner at increasing priority */
 			engine->schedule(rq, &attr);
 
+			if (prio < attr.priority)
+				break;
+
 			if (prio <= I915_PRIORITY_MAX)
 				continue;
 
-			if (prio > (INT_MAX >> I915_USER_PRIORITY_SHIFT))
-				break;
-
 			if (__igt_timeout(end_time, NULL))
 				break;
 		} while (1);
-- 
2.20.1


* [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19 13:41   ` Mika Kuoppala
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit() Chris Wilson
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Since we temporarily disable the heartbeat and then restore it to the
default value, we can use the defaults stored on the engine and avoid
passing a saved value around as a local.
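
The helper pair then carries no state through its callers; condensed
from the diff below:

  static void engine_heartbeat_disable(struct intel_engine_cs *engine)
  {
          engine->props.heartbeat_interval_ms = 0;

          intel_engine_pm_get(engine);
          intel_engine_park_heartbeat(engine);
  }

  static void engine_heartbeat_enable(struct intel_engine_cs *engine)
  {
          intel_engine_pm_put(engine);

          /* restore the stored default, not a caller-saved copy */
          engine->props.heartbeat_interval_ms =
                  engine->defaults.heartbeat_interval_ms;
  }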

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 25 +++----
 drivers/gpu/drm/i915/gt/selftest_lrc.c       | 67 +++++++------------
 drivers/gpu/drm/i915/gt/selftest_rps.c       | 69 ++++++++------------
 drivers/gpu/drm/i915/gt/selftest_timeline.c  | 15 ++---
 4 files changed, 67 insertions(+), 109 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 2b2efff6e19d..4aa4cc917d8b 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -310,22 +310,20 @@ static bool wait_until_running(struct hang *h, struct i915_request *rq)
 			  1000));
 }
 
-static void engine_heartbeat_disable(struct intel_engine_cs *engine,
-				     unsigned long *saved)
+static void engine_heartbeat_disable(struct intel_engine_cs *engine)
 {
-	*saved = engine->props.heartbeat_interval_ms;
 	engine->props.heartbeat_interval_ms = 0;
 
 	intel_engine_pm_get(engine);
 	intel_engine_park_heartbeat(engine);
 }
 
-static void engine_heartbeat_enable(struct intel_engine_cs *engine,
-				    unsigned long saved)
+static void engine_heartbeat_enable(struct intel_engine_cs *engine)
 {
 	intel_engine_pm_put(engine);
 
-	engine->props.heartbeat_interval_ms = saved;
+	engine->props.heartbeat_interval_ms =
+		engine->defaults.heartbeat_interval_ms;
 }
 
 static int igt_hang_sanitycheck(void *arg)
@@ -473,7 +471,6 @@ static int igt_reset_nop_engine(void *arg)
 	for_each_engine(engine, gt, id) {
 		unsigned int reset_count, reset_engine_count, count;
 		struct intel_context *ce;
-		unsigned long heartbeat;
 		IGT_TIMEOUT(end_time);
 		int err;
 
@@ -485,7 +482,7 @@ static int igt_reset_nop_engine(void *arg)
 		reset_engine_count = i915_reset_engine_count(global, engine);
 		count = 0;
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
 		do {
 			int i;
@@ -529,7 +526,7 @@ static int igt_reset_nop_engine(void *arg)
 			}
 		} while (time_before(jiffies, end_time));
 		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 
 		pr_info("%s(%s): %d resets\n", __func__, engine->name, count);
 
@@ -564,7 +561,6 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 
 	for_each_engine(engine, gt, id) {
 		unsigned int reset_count, reset_engine_count;
-		unsigned long heartbeat;
 		IGT_TIMEOUT(end_time);
 
 		if (active && !intel_engine_can_store_dword(engine))
@@ -580,7 +576,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 		reset_count = i915_reset_count(global);
 		reset_engine_count = i915_reset_engine_count(global, engine);
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
 		do {
 			if (active) {
@@ -632,7 +628,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 			}
 		} while (time_before(jiffies, end_time));
 		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 
 		if (err)
 			break;
@@ -789,7 +785,6 @@ static int __igt_reset_engines(struct intel_gt *gt,
 		struct active_engine threads[I915_NUM_ENGINES] = {};
 		unsigned long device = i915_reset_count(global);
 		unsigned long count = 0, reported;
-		unsigned long heartbeat;
 		IGT_TIMEOUT(end_time);
 
 		if (flags & TEST_ACTIVE &&
@@ -832,7 +827,7 @@ static int __igt_reset_engines(struct intel_gt *gt,
 
 		yield(); /* start all threads before we begin */
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
 		do {
 			struct i915_request *rq = NULL;
@@ -906,7 +901,7 @@ static int __igt_reset_engines(struct intel_gt *gt,
 			}
 		} while (time_before(jiffies, end_time));
 		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 
 		pr_info("i915_reset_engine(%s:%s): %lu resets\n",
 			engine->name, test_name, count);
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 3e042fa4b94b..b71f04db9c6e 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -51,22 +51,20 @@ static struct i915_vma *create_scratch(struct intel_gt *gt)
 	return vma;
 }
 
-static void engine_heartbeat_disable(struct intel_engine_cs *engine,
-				     unsigned long *saved)
+static void engine_heartbeat_disable(struct intel_engine_cs *engine)
 {
-	*saved = engine->props.heartbeat_interval_ms;
 	engine->props.heartbeat_interval_ms = 0;
 
 	intel_engine_pm_get(engine);
 	intel_engine_park_heartbeat(engine);
 }
 
-static void engine_heartbeat_enable(struct intel_engine_cs *engine,
-				    unsigned long saved)
+static void engine_heartbeat_enable(struct intel_engine_cs *engine)
 {
 	intel_engine_pm_put(engine);
 
-	engine->props.heartbeat_interval_ms = saved;
+	engine->props.heartbeat_interval_ms =
+		engine->defaults.heartbeat_interval_ms;
 }
 
 static bool is_active(struct i915_request *rq)
@@ -224,7 +222,6 @@ static int live_unlite_restore(struct intel_gt *gt, int prio)
 		struct intel_context *ce[2] = {};
 		struct i915_request *rq[2];
 		struct igt_live_test t;
-		unsigned long saved;
 		int n;
 
 		if (prio && !intel_engine_has_preemption(engine))
@@ -237,7 +234,7 @@ static int live_unlite_restore(struct intel_gt *gt, int prio)
 			err = -EIO;
 			break;
 		}
-		engine_heartbeat_disable(engine, &saved);
+		engine_heartbeat_disable(engine);
 
 		for (n = 0; n < ARRAY_SIZE(ce); n++) {
 			struct intel_context *tmp;
@@ -345,7 +342,7 @@ static int live_unlite_restore(struct intel_gt *gt, int prio)
 			intel_context_put(ce[n]);
 		}
 
-		engine_heartbeat_enable(engine, saved);
+		engine_heartbeat_enable(engine);
 		if (igt_live_test_end(&t))
 			err = -EIO;
 		if (err)
@@ -466,7 +463,6 @@ static int live_hold_reset(void *arg)
 
 	for_each_engine(engine, gt, id) {
 		struct intel_context *ce;
-		unsigned long heartbeat;
 		struct i915_request *rq;
 
 		ce = intel_context_create(engine);
@@ -475,7 +471,7 @@ static int live_hold_reset(void *arg)
 			break;
 		}
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 
 		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
 		if (IS_ERR(rq)) {
@@ -535,7 +531,7 @@ static int live_hold_reset(void *arg)
 		i915_request_put(rq);
 
 out:
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 		intel_context_put(ce);
 		if (err)
 			break;
@@ -580,10 +576,9 @@ static int live_error_interrupt(void *arg)
 
 	for_each_engine(engine, gt, id) {
 		const struct error_phase *p;
-		unsigned long heartbeat;
 		int err = 0;
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 
 		for (p = phases; p->error[0] != GOOD; p++) {
 			struct i915_request *client[ARRAY_SIZE(phases->error)];
@@ -682,7 +677,7 @@ static int live_error_interrupt(void *arg)
 			}
 		}
 
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 		if (err) {
 			intel_gt_set_wedged(gt);
 			return err;
@@ -895,16 +890,14 @@ static int live_timeslice_preempt(void *arg)
 		enum intel_engine_id id;
 
 		for_each_engine(engine, gt, id) {
-			unsigned long saved;
-
 			if (!intel_engine_has_preemption(engine))
 				continue;
 
 			memset(vaddr, 0, PAGE_SIZE);
 
-			engine_heartbeat_disable(engine, &saved);
+			engine_heartbeat_disable(engine);
 			err = slice_semaphore_queue(engine, vma, count);
-			engine_heartbeat_enable(engine, saved);
+			engine_heartbeat_enable(engine);
 			if (err)
 				goto err_pin;
 
@@ -1009,7 +1002,6 @@ static int live_timeslice_rewind(void *arg)
 		enum { X = 1, Z, Y };
 		struct i915_request *rq[3] = {};
 		struct intel_context *ce;
-		unsigned long heartbeat;
 		unsigned long timeslice;
 		int i, err = 0;
 		u32 *slot;
@@ -1028,7 +1020,7 @@ static int live_timeslice_rewind(void *arg)
 		 * Expect execution/evaluation order XZY
 		 */
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 		timeslice = xchg(&engine->props.timeslice_duration_ms, 1);
 
 		slot = memset32(engine->status_page.addr + 1000, 0, 4);
@@ -1122,7 +1114,7 @@ static int live_timeslice_rewind(void *arg)
 		wmb();
 
 		engine->props.timeslice_duration_ms = timeslice;
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 		for (i = 0; i < 3; i++)
 			i915_request_put(rq[i]);
 		if (igt_flush_test(gt->i915))
@@ -1202,12 +1194,11 @@ static int live_timeslice_queue(void *arg)
 			.priority = I915_USER_PRIORITY(I915_PRIORITY_MAX),
 		};
 		struct i915_request *rq, *nop;
-		unsigned long saved;
 
 		if (!intel_engine_has_preemption(engine))
 			continue;
 
-		engine_heartbeat_disable(engine, &saved);
+		engine_heartbeat_disable(engine);
 		memset(vaddr, 0, PAGE_SIZE);
 
 		/* ELSP[0]: semaphore wait */
@@ -1284,7 +1275,7 @@ static int live_timeslice_queue(void *arg)
 err_rq:
 		i915_request_put(rq);
 err_heartbeat:
-		engine_heartbeat_enable(engine, saved);
+		engine_heartbeat_enable(engine);
 		if (err)
 			break;
 	}
@@ -4145,7 +4136,6 @@ static int reset_virtual_engine(struct intel_gt *gt,
 {
 	struct intel_engine_cs *engine;
 	struct intel_context *ve;
-	unsigned long *heartbeat;
 	struct igt_spinner spin;
 	struct i915_request *rq;
 	unsigned int n;
@@ -4157,15 +4147,9 @@ static int reset_virtual_engine(struct intel_gt *gt,
 	 * descendents are not executed while the capture is in progress.
 	 */
 
-	heartbeat = kmalloc_array(nsibling, sizeof(*heartbeat), GFP_KERNEL);
-	if (!heartbeat)
+	if (igt_spinner_init(&spin, gt))
 		return -ENOMEM;
 
-	if (igt_spinner_init(&spin, gt)) {
-		err = -ENOMEM;
-		goto out_free;
-	}
-
 	ve = intel_execlists_create_virtual(siblings, nsibling);
 	if (IS_ERR(ve)) {
 		err = PTR_ERR(ve);
@@ -4173,7 +4157,7 @@ static int reset_virtual_engine(struct intel_gt *gt,
 	}
 
 	for (n = 0; n < nsibling; n++)
-		engine_heartbeat_disable(siblings[n], &heartbeat[n]);
+		engine_heartbeat_disable(siblings[n]);
 
 	rq = igt_spinner_create_request(&spin, ve, MI_ARB_CHECK);
 	if (IS_ERR(rq)) {
@@ -4244,13 +4228,11 @@ static int reset_virtual_engine(struct intel_gt *gt,
 	i915_request_put(rq);
 out_heartbeat:
 	for (n = 0; n < nsibling; n++)
-		engine_heartbeat_enable(siblings[n], heartbeat[n]);
+		engine_heartbeat_enable(siblings[n]);
 
 	intel_context_put(ve);
 out_spin:
 	igt_spinner_fini(&spin);
-out_free:
-	kfree(heartbeat);
 	return err;
 }
 
@@ -4918,9 +4900,7 @@ static int live_lrc_gpr(void *arg)
 		return PTR_ERR(scratch);
 
 	for_each_engine(engine, gt, id) {
-		unsigned long heartbeat;
-
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 
 		err = __live_lrc_gpr(engine, scratch, false);
 		if (err)
@@ -4931,7 +4911,7 @@ static int live_lrc_gpr(void *arg)
 			goto err;
 
 err:
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 		if (igt_flush_test(gt->i915))
 			err = -EIO;
 		if (err)
@@ -5078,10 +5058,9 @@ static int live_lrc_timestamp(void *arg)
 	 */
 
 	for_each_engine(data.engine, gt, id) {
-		unsigned long heartbeat;
 		int i, err = 0;
 
-		engine_heartbeat_disable(data.engine, &heartbeat);
+		engine_heartbeat_disable(data.engine);
 
 		for (i = 0; i < ARRAY_SIZE(data.ce); i++) {
 			struct intel_context *tmp;
@@ -5114,7 +5093,7 @@ static int live_lrc_timestamp(void *arg)
 		}
 
 err:
-		engine_heartbeat_enable(data.engine, heartbeat);
+		engine_heartbeat_enable(data.engine);
 		for (i = 0; i < ARRAY_SIZE(data.ce); i++) {
 			if (!data.ce[i])
 				break;
diff --git a/drivers/gpu/drm/i915/gt/selftest_rps.c b/drivers/gpu/drm/i915/gt/selftest_rps.c
index 6275d69aa9cc..5049c3dd08a6 100644
--- a/drivers/gpu/drm/i915/gt/selftest_rps.c
+++ b/drivers/gpu/drm/i915/gt/selftest_rps.c
@@ -20,24 +20,20 @@
 /* Try to isolate the impact of cstates from determing frequency response */
 #define CPU_LATENCY 0 /* -1 to disable pm_qos, 0 to disable cstates */
 
-static unsigned long engine_heartbeat_disable(struct intel_engine_cs *engine)
+static void engine_heartbeat_disable(struct intel_engine_cs *engine)
 {
-	unsigned long old;
-
-	old = fetch_and_zero(&engine->props.heartbeat_interval_ms);
+	engine->props.heartbeat_interval_ms = 0;
 
 	intel_engine_pm_get(engine);
 	intel_engine_park_heartbeat(engine);
-
-	return old;
 }
 
-static void engine_heartbeat_enable(struct intel_engine_cs *engine,
-				    unsigned long saved)
+static void engine_heartbeat_enable(struct intel_engine_cs *engine)
 {
 	intel_engine_pm_put(engine);
 
-	engine->props.heartbeat_interval_ms = saved;
+	engine->props.heartbeat_interval_ms =
+		engine->defaults.heartbeat_interval_ms;
 }
 
 static void dummy_rps_work(struct work_struct *wrk)
@@ -246,7 +242,6 @@ int live_rps_clock_interval(void *arg)
 	intel_gt_check_clock_frequency(gt);
 
 	for_each_engine(engine, gt, id) {
-		unsigned long saved_heartbeat;
 		struct i915_request *rq;
 		u32 cycles;
 		u64 dt;
@@ -254,13 +249,13 @@ int live_rps_clock_interval(void *arg)
 		if (!intel_engine_can_store_dword(engine))
 			continue;
 
-		saved_heartbeat = engine_heartbeat_disable(engine);
+		engine_heartbeat_disable(engine);
 
 		rq = igt_spinner_create_request(&spin,
 						engine->kernel_context,
 						MI_NOOP);
 		if (IS_ERR(rq)) {
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			err = PTR_ERR(rq);
 			break;
 		}
@@ -271,7 +266,7 @@ int live_rps_clock_interval(void *arg)
 			pr_err("%s: RPS spinner did not start\n",
 			       engine->name);
 			igt_spinner_end(&spin);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			intel_gt_set_wedged(engine->gt);
 			err = -EIO;
 			break;
@@ -327,7 +322,7 @@ int live_rps_clock_interval(void *arg)
 		intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL);
 
 		igt_spinner_end(&spin);
-		engine_heartbeat_enable(engine, saved_heartbeat);
+		engine_heartbeat_enable(engine);
 
 		if (err == 0) {
 			u64 time = intel_gt_pm_interval_to_ns(gt, cycles);
@@ -405,7 +400,6 @@ int live_rps_control(void *arg)
 
 	intel_gt_pm_get(gt);
 	for_each_engine(engine, gt, id) {
-		unsigned long saved_heartbeat;
 		struct i915_request *rq;
 		ktime_t min_dt, max_dt;
 		int f, limit;
@@ -414,7 +408,7 @@ int live_rps_control(void *arg)
 		if (!intel_engine_can_store_dword(engine))
 			continue;
 
-		saved_heartbeat = engine_heartbeat_disable(engine);
+		engine_heartbeat_disable(engine);
 
 		rq = igt_spinner_create_request(&spin,
 						engine->kernel_context,
@@ -430,7 +424,7 @@ int live_rps_control(void *arg)
 			pr_err("%s: RPS spinner did not start\n",
 			       engine->name);
 			igt_spinner_end(&spin);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			intel_gt_set_wedged(engine->gt);
 			err = -EIO;
 			break;
@@ -440,7 +434,7 @@ int live_rps_control(void *arg)
 			pr_err("%s: could not set minimum frequency [%x], only %x!\n",
 			       engine->name, rps->min_freq, read_cagf(rps));
 			igt_spinner_end(&spin);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			show_pstate_limits(rps);
 			err = -EINVAL;
 			break;
@@ -457,7 +451,7 @@ int live_rps_control(void *arg)
 			pr_err("%s: could not restore minimum frequency [%x], only %x!\n",
 			       engine->name, rps->min_freq, read_cagf(rps));
 			igt_spinner_end(&spin);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			show_pstate_limits(rps);
 			err = -EINVAL;
 			break;
@@ -472,7 +466,7 @@ int live_rps_control(void *arg)
 		min_dt = ktime_sub(ktime_get(), min_dt);
 
 		igt_spinner_end(&spin);
-		engine_heartbeat_enable(engine, saved_heartbeat);
+		engine_heartbeat_enable(engine);
 
 		pr_info("%s: range:[%x:%uMHz, %x:%uMHz] limit:[%x:%uMHz], %x:%x response %lluns:%lluns\n",
 			engine->name,
@@ -635,7 +629,6 @@ int live_rps_frequency_cs(void *arg)
 	rps->work.func = dummy_rps_work;
 
 	for_each_engine(engine, gt, id) {
-		unsigned long saved_heartbeat;
 		struct i915_request *rq;
 		struct i915_vma *vma;
 		u32 *cancel, *cntr;
@@ -644,14 +637,14 @@ int live_rps_frequency_cs(void *arg)
 			int freq;
 		} min, max;
 
-		saved_heartbeat = engine_heartbeat_disable(engine);
+		engine_heartbeat_disable(engine);
 
 		vma = create_spin_counter(engine,
 					  engine->kernel_context->vm, false,
 					  &cancel, &cntr);
 		if (IS_ERR(vma)) {
 			err = PTR_ERR(vma);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			break;
 		}
 
@@ -732,7 +725,7 @@ int live_rps_frequency_cs(void *arg)
 		i915_vma_unpin(vma);
 		i915_vma_put(vma);
 
-		engine_heartbeat_enable(engine, saved_heartbeat);
+		engine_heartbeat_enable(engine);
 		if (igt_flush_test(gt->i915))
 			err = -EIO;
 		if (err)
@@ -778,7 +771,6 @@ int live_rps_frequency_srm(void *arg)
 	rps->work.func = dummy_rps_work;
 
 	for_each_engine(engine, gt, id) {
-		unsigned long saved_heartbeat;
 		struct i915_request *rq;
 		struct i915_vma *vma;
 		u32 *cancel, *cntr;
@@ -787,14 +779,14 @@ int live_rps_frequency_srm(void *arg)
 			int freq;
 		} min, max;
 
-		saved_heartbeat = engine_heartbeat_disable(engine);
+		engine_heartbeat_disable(engine);
 
 		vma = create_spin_counter(engine,
 					  engine->kernel_context->vm, true,
 					  &cancel, &cntr);
 		if (IS_ERR(vma)) {
 			err = PTR_ERR(vma);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			break;
 		}
 
@@ -874,7 +866,7 @@ int live_rps_frequency_srm(void *arg)
 		i915_vma_unpin(vma);
 		i915_vma_put(vma);
 
-		engine_heartbeat_enable(engine, saved_heartbeat);
+		engine_heartbeat_enable(engine);
 		if (igt_flush_test(gt->i915))
 			err = -EIO;
 		if (err)
@@ -1066,16 +1058,14 @@ int live_rps_interrupt(void *arg)
 	for_each_engine(engine, gt, id) {
 		/* Keep the engine busy with a spinner; expect an UP! */
 		if (pm_events & GEN6_PM_RP_UP_THRESHOLD) {
-			unsigned long saved_heartbeat;
-
 			intel_gt_pm_wait_for_idle(engine->gt);
 			GEM_BUG_ON(intel_rps_is_active(rps));
 
-			saved_heartbeat = engine_heartbeat_disable(engine);
+			engine_heartbeat_disable(engine);
 
 			err = __rps_up_interrupt(rps, engine, &spin);
 
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			if (err)
 				goto out;
 
@@ -1084,15 +1074,13 @@ int live_rps_interrupt(void *arg)
 
 		/* Keep the engine awake but idle and check for DOWN */
 		if (pm_events & GEN6_PM_RP_DOWN_THRESHOLD) {
-			unsigned long saved_heartbeat;
-
-			saved_heartbeat = engine_heartbeat_disable(engine);
+			engine_heartbeat_disable(engine);
 			intel_rc6_disable(&gt->rc6);
 
 			err = __rps_down_interrupt(rps, engine);
 
 			intel_rc6_enable(&gt->rc6);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			if (err)
 				goto out;
 		}
@@ -1168,7 +1156,6 @@ int live_rps_power(void *arg)
 	rps->work.func = dummy_rps_work;
 
 	for_each_engine(engine, gt, id) {
-		unsigned long saved_heartbeat;
 		struct i915_request *rq;
 		struct {
 			u64 power;
@@ -1178,13 +1165,13 @@ int live_rps_power(void *arg)
 		if (!intel_engine_can_store_dword(engine))
 			continue;
 
-		saved_heartbeat = engine_heartbeat_disable(engine);
+		engine_heartbeat_disable(engine);
 
 		rq = igt_spinner_create_request(&spin,
 						engine->kernel_context,
 						MI_NOOP);
 		if (IS_ERR(rq)) {
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			err = PTR_ERR(rq);
 			break;
 		}
@@ -1195,7 +1182,7 @@ int live_rps_power(void *arg)
 			pr_err("%s: RPS spinner did not start\n",
 			       engine->name);
 			igt_spinner_end(&spin);
-			engine_heartbeat_enable(engine, saved_heartbeat);
+			engine_heartbeat_enable(engine);
 			intel_gt_set_wedged(engine->gt);
 			err = -EIO;
 			break;
@@ -1208,7 +1195,7 @@ int live_rps_power(void *arg)
 		min.power = measure_power_at(rps, &min.freq);
 
 		igt_spinner_end(&spin);
-		engine_heartbeat_enable(engine, saved_heartbeat);
+		engine_heartbeat_enable(engine);
 
 		pr_info("%s: min:%llumW @ %uMHz, max:%llumW @ %uMHz\n",
 			engine->name,
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index c2578a0f2f14..ef1c35073dc0 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -751,22 +751,20 @@ static int live_hwsp_wrap(void *arg)
 	return err;
 }
 
-static void engine_heartbeat_disable(struct intel_engine_cs *engine,
-				     unsigned long *saved)
+static void engine_heartbeat_disable(struct intel_engine_cs *engine)
 {
-	*saved = engine->props.heartbeat_interval_ms;
 	engine->props.heartbeat_interval_ms = 0;
 
 	intel_engine_pm_get(engine);
 	intel_engine_park_heartbeat(engine);
 }
 
-static void engine_heartbeat_enable(struct intel_engine_cs *engine,
-				    unsigned long saved)
+static void engine_heartbeat_enable(struct intel_engine_cs *engine)
 {
 	intel_engine_pm_put(engine);
 
-	engine->props.heartbeat_interval_ms = saved;
+	engine->props.heartbeat_interval_ms =
+		engine->defaults.heartbeat_interval_ms;
 }
 
 static int live_hwsp_rollover_kernel(void *arg)
@@ -785,10 +783,9 @@ static int live_hwsp_rollover_kernel(void *arg)
 		struct intel_context *ce = engine->kernel_context;
 		struct intel_timeline *tl = ce->timeline;
 		struct i915_request *rq[3] = {};
-		unsigned long heartbeat;
 		int i;
 
-		engine_heartbeat_disable(engine, &heartbeat);
+		engine_heartbeat_disable(engine);
 		if (intel_gt_wait_for_idle(gt, HZ / 2)) {
 			err = -EIO;
 			goto out;
@@ -839,7 +836,7 @@ static int live_hwsp_rollover_kernel(void *arg)
 out:
 		for (i = 0; i < ARRAY_SIZE(rq); i++)
 			i915_request_put(rq[i]);
-		engine_heartbeat_enable(engine, heartbeat);
+		engine_heartbeat_enable(engine);
 		if (err)
 			break;
 	}
-- 
2.20.1


* [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19 14:20   ` Mika Kuoppala
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 05/12] drm/i915/execlists: Shortcircuit queue_prio() for no internal levels Chris Wilson
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

When we look at i915_request_started() we must be careful in case we
are using a request that does not have an initial breadcrumb, as the
is-started comparison is then made against the end of the previous
request. This makes wait_for_submit() declare too early that a request
has been submitted.
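
For reference, the underlying started check reads roughly as follows
(paraphrased from i915_request.h of this era; treat it as a sketch):

  static inline bool __i915_request_has_started(const struct i915_request *rq)
  {
          /*
           * Without an initial breadcrumb of its own, the seqno before
           * rq's payload is only written on completion of the previous
           * request on the timeline, so "started" can fire early.
           */
          return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno - 1);
  }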

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index b71f04db9c6e..f6949cd55e92 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -75,7 +75,7 @@ static bool is_active(struct i915_request *rq)
 	if (i915_request_on_hold(rq))
 		return true;
 
-	if (i915_request_started(rq))
+	if (i915_request_has_initial_breadcrumb(rq) && i915_request_started(rq))
 		return true;
 
 	return false;
-- 
2.20.1


* [Intel-gfx] [PATCH 05/12] drm/i915/execlists: Shortcircuit queue_prio() for no internal levels
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
                   ` (2 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit() Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 06/12] drm/i915: Move saturated workload detection back to the context Chris Wilson
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

If there are no internal levels and the user priority-shift is zero, we
can help the compiler eliminate some dead code:

Function                                     old     new   delta
start_timeslice                              169     154     -15
__execlists_submission_tasklet              4696    4659     -37
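
The shortcut relies on I915_USER_PRIORITY_SHIFT being a compile-time
constant; when it is zero the branch folds to an unconditional return
and the ffs() computation becomes dead code (condensed from the diff
below):

  struct i915_priolist *p = to_priolist(rb_first_cached(&execlists->queue));

  if (!I915_USER_PRIORITY_SHIFT)  /* constant-folded */
          return p->priority;     /* ffs(p->used) below is then dead code */

  return ((p->priority + 1) << I915_USER_PRIORITY_SHIFT) - ffs(p->used);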

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index d7ef3f8640d2..06544c56a137 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -446,6 +446,9 @@ static int queue_prio(const struct intel_engine_execlists *execlists)
 	 * we have to flip the index value to become priority.
 	 */
 	p = to_priolist(rb);
+	if (!I915_USER_PRIORITY_SHIFT)
+		return p->priority;
+
 	return ((p->priority + 1) << I915_USER_PRIORITY_SHIFT) - ffs(p->used);
 }
 
-- 
2.20.1


* [Intel-gfx] [PATCH 06/12] drm/i915: Move saturated workload detection back to the context
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
                   ` (3 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 05/12] drm/i915/execlists: Shortcircuit queue_prio() for no internal levels Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines Chris Wilson
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This allows individual contexts
to try to optimise their own usage, but we found that with the local
tracking and the no-semaphore boosting, the first context to disable
semaphores got a massive priority boost and so would starve the rest and
all new contexts (as they started with semaphores enabled and lower
priority). Hence we moved the saturated workload detection to the
engine, and as a consequence had to disable semaphores on virtual
engines.

Now that we do not have semaphore priority boosting, we can move the
tracking back to the context, and virtual engines can now utilise the
faster inter-engine synchronisation.
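
Condensed from the diff below, the tracking now lives on the context:
mark it saturated when a semaphore had already signalled by the time
the request reached hardware, and consult that when deciding whether
to emit another semaphore busywait:

  /* __i915_request_submit(): were we too late for the semaphore? */
  if (request->sched.semaphores &&
      i915_sw_fence_signaled(&request->semaphore))
          request->context->saturated |= request->sched.semaphores;

  /* already_busywaiting(): if so, back off from semaphores next time */
  return rq->sched.semaphores | READ_ONCE(rq->context->saturated);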

References: 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_context.c       |  1 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |  2 --
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  2 --
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 15 ---------------
 drivers/gpu/drm/i915/i915_request.c           |  4 ++--
 6 files changed, 5 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index e4aece20bc80..762a251d553b 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -268,6 +268,7 @@ static int __intel_context_active(struct i915_active *active)
 	if (err)
 		goto err_timeline;
 
+	ce->saturated = 0;
 	return 0;
 
 err_timeline:
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 4954b0df4864..aed26d93c2ca 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -78,6 +78,8 @@ struct intel_context {
 	} lrc;
 	u32 tag; /* cookie passed to HW to track this context on submission */
 
+	intel_engine_mask_t saturated; /* submitting semaphores too late? */
+
 	/* Time on GPU as tracked by the hw. */
 	struct {
 		struct ewma_runtime avg;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index d0a1078ef632..6d7fdba5adef 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -229,8 +229,6 @@ static int __engine_park(struct intel_wakeref *wf)
 	struct intel_engine_cs *engine =
 		container_of(wf, typeof(*engine), wakeref);
 
-	engine->saturated = 0;
-
 	/*
 	 * If one and only one request is completed between pm events,
 	 * we know that we are inside the kernel context and it is
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 2b6cdf47d428..c443b6bb884b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -332,8 +332,6 @@ struct intel_engine_cs {
 
 	struct intel_context *kernel_context; /* pinned */
 
-	intel_engine_mask_t saturated; /* submitting semaphores too late? */
-
 	struct {
 		struct delayed_work work;
 		struct i915_request *systole;
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 06544c56a137..8998fe1f739d 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -5633,21 +5633,6 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
 	ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
 	ve->base.uabi_instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
 
-	/*
-	 * The decision on whether to submit a request using semaphores
-	 * depends on the saturated state of the engine. We only compute
-	 * this during HW submission of the request, and we need for this
-	 * state to be globally applied to all requests being submitted
-	 * to this engine. Virtual engines encompass more than one physical
-	 * engine and so we cannot accurately tell in advance if one of those
-	 * engines is already saturated and so cannot afford to use a semaphore
-	 * and be pessimized in priority for doing so -- if we are the only
-	 * context using semaphores after all other clients have stopped, we
-	 * will be starved on the saturated system. Such a global switch for
-	 * semaphores is less than ideal, but alas is the current compromise.
-	 */
-	ve->base.saturated = ALL_ENGINES;
-
 	snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
 
 	intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 526c1e9acbd5..31ef683d27b4 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -467,7 +467,7 @@ bool __i915_request_submit(struct i915_request *request)
 	 */
 	if (request->sched.semaphores &&
 	    i915_sw_fence_signaled(&request->semaphore))
-		engine->saturated |= request->sched.semaphores;
+		request->context->saturated |= request->sched.semaphores;
 
 	engine->emit_fini_breadcrumb(request,
 				     request->ring->vaddr + request->postfix);
@@ -919,7 +919,7 @@ already_busywaiting(struct i915_request *rq)
 	 *
 	 * See the are-we-too-late? check in __i915_request_submit().
 	 */
-	return rq->sched.semaphores | READ_ONCE(rq->engine->saturated);
+	return rq->sched.semaphores | READ_ONCE(rq->context->saturated);
 }
 
 static int
-- 
2.20.1


* [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
                   ` (4 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 06/12] drm/i915: Move saturated workload detection back to the context Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  9:39   ` Tvrtko Ursulin
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 08/12] drm/i915/gt: Kick virtual siblings on timeslice out Chris Wilson
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
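
In outline, the slice-in half does the following (condensed from
slicein_virtual_engine() in the diff; submit_spinner() stands in for
the igt_spinner boilerplate):

  for (n = 0; n < nsibling; n++)  /* occupy every sibling */
          submit_spinner(siblings[n]);

  ve = intel_execlists_create_virtual(siblings, nsibling);
  rq = intel_context_create_request(ve); /* a plain nop request */
  i915_request_add(rq);

  /* must be timesliced in despite every engine being busy */
  if (i915_request_wait(rq, 0, slice_timeout(siblings[0])) < 0)
          err = -EIO;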

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 200 ++++++++++++++++++++++++-
 1 file changed, 197 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index f6949cd55e92..ef38dd52945c 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -3591,9 +3591,11 @@ static int nop_virtual_engine(struct intel_gt *gt,
 	return err;
 }
 
-static unsigned int select_siblings(struct intel_gt *gt,
-				    unsigned int class,
-				    struct intel_engine_cs **siblings)
+static unsigned int
+__select_siblings(struct intel_gt *gt,
+		  unsigned int class,
+		  struct intel_engine_cs **siblings,
+		  bool (*filter)(const struct intel_engine_cs *))
 {
 	unsigned int n = 0;
 	unsigned int inst;
@@ -3602,12 +3604,23 @@ static unsigned int select_siblings(struct intel_gt *gt,
 		if (!gt->engine_class[class][inst])
 			continue;
 
+		if (filter && !filter(gt->engine_class[class][inst]))
+			continue;
+
 		siblings[n++] = gt->engine_class[class][inst];
 	}
 
 	return n;
 }
 
+static unsigned int
+select_siblings(struct intel_gt *gt,
+		unsigned int class,
+		struct intel_engine_cs **siblings)
+{
+	return __select_siblings(gt, class, siblings, NULL);
+}
+
 static int live_virtual_engine(void *arg)
 {
 	struct intel_gt *gt = arg;
@@ -3762,6 +3775,186 @@ static int live_virtual_mask(void *arg)
 	return 0;
 }
 
+static long slice_timeout(struct intel_engine_cs *engine)
+{
+	long timeout;
+
+	/* Enough time for a timeslice to kick in, and kick out */
+	timeout = 2 * msecs_to_jiffies_timeout(timeslice(engine));
+
+	/* Enough time for the nop request to complete */
+	timeout += HZ / 5;
+
+	return timeout;
+}
+
+static int slicein_virtual_engine(struct intel_gt *gt,
+				  struct intel_engine_cs **siblings,
+				  unsigned int nsibling)
+{
+	const long timeout = slice_timeout(siblings[0]);
+	struct intel_context *ce;
+	struct i915_request *rq;
+	struct igt_spinner spin;
+	unsigned int n;
+	int err = 0;
+
+	/*
+	 * Virtual requests must take part in timeslicing on the target engines.
+	 */
+
+	if (igt_spinner_init(&spin, gt))
+		return -ENOMEM;
+
+	for (n = 0; n < nsibling; n++) {
+		ce = intel_context_create(siblings[n]);
+		if (IS_ERR(ce)) {
+			err = PTR_ERR(ce);
+			goto out;
+		}
+
+		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
+		intel_context_put(ce);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto out;
+		}
+
+		i915_request_add(rq);
+	}
+
+	ce = intel_execlists_create_virtual(siblings, nsibling);
+	if (IS_ERR(ce)) {
+		err = PTR_ERR(ce);
+		goto out;
+	}
+
+	rq = intel_context_create_request(ce);
+	intel_context_put(ce);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto out;
+	}
+
+	i915_request_get(rq);
+	i915_request_add(rq);
+	if (i915_request_wait(rq, 0, timeout) < 0) {
+		GEM_TRACE_ERR("%s(%s) failed to slice in virtual request\n",
+			      __func__, rq->engine->name);
+		GEM_TRACE_DUMP();
+		intel_gt_set_wedged(gt);
+		err = -EIO;
+	}
+	i915_request_put(rq);
+
+out:
+	igt_spinner_end(&spin);
+	if (igt_flush_test(gt->i915))
+		err = -EIO;
+	igt_spinner_fini(&spin);
+	return err;
+}
+
+static int sliceout_virtual_engine(struct intel_gt *gt,
+				   struct intel_engine_cs **siblings,
+				   unsigned int nsibling)
+{
+	const long timeout = slice_timeout(siblings[0]);
+	struct intel_context *ce;
+	struct i915_request *rq;
+	struct igt_spinner spin;
+	unsigned int n;
+	int err = 0;
+
+	/*
+	 * Virtual requests must allow others a fair timeslice.
+	 */
+
+	if (igt_spinner_init(&spin, gt))
+		return -ENOMEM;
+
+	/* XXX We do not handle oversubscription and fairness with normal rq */
+	for (n = 0; n < nsibling; n++) {
+		ce = intel_execlists_create_virtual(siblings, nsibling);
+		if (IS_ERR(ce)) {
+			err = PTR_ERR(ce);
+			goto out;
+		}
+
+		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
+		intel_context_put(ce);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto out;
+		}
+
+		i915_request_add(rq);
+	}
+
+	for (n = 0; !err && n < nsibling; n++) {
+		ce = intel_context_create(siblings[n]);
+		if (IS_ERR(ce)) {
+			err = PTR_ERR(ce);
+			goto out;
+		}
+
+		rq = intel_context_create_request(ce);
+		intel_context_put(ce);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto out;
+		}
+
+		i915_request_get(rq);
+		i915_request_add(rq);
+		if (i915_request_wait(rq, 0, timeout) < 0) {
+			GEM_TRACE_ERR("%s(%s) failed to slice out virtual request\n",
+				      __func__, siblings[n]->name);
+			GEM_TRACE_DUMP();
+			intel_gt_set_wedged(gt);
+			err = -EIO;
+		}
+		i915_request_put(rq);
+	}
+
+out:
+	igt_spinner_end(&spin);
+	if (igt_flush_test(gt->i915))
+		err = -EIO;
+	igt_spinner_fini(&spin);
+	return err;
+}
+
+static int live_virtual_slice(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct intel_engine_cs *siblings[MAX_ENGINE_INSTANCE + 1];
+	unsigned int class;
+	int err;
+
+	if (intel_uc_uses_guc_submission(&gt->uc))
+		return 0;
+
+	for (class = 0; class <= MAX_ENGINE_CLASS; class++) {
+		unsigned int nsibling;
+
+		nsibling = __select_siblings(gt, class, siblings,
+					     intel_engine_has_timeslices);
+		if (nsibling < 2)
+			continue;
+
+		err = slicein_virtual_engine(gt, siblings, nsibling);
+		if (err)
+			return err;
+
+		err = sliceout_virtual_engine(gt, siblings, nsibling);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 static int preserved_virtual_engine(struct intel_gt *gt,
 				    struct intel_engine_cs **siblings,
 				    unsigned int nsibling)
@@ -4297,6 +4490,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_virtual_engine),
 		SUBTEST(live_virtual_mask),
 		SUBTEST(live_virtual_preserved),
+		SUBTEST(live_virtual_slice),
 		SUBTEST(live_virtual_bond),
 		SUBTEST(live_virtual_reset),
 	};
-- 
2.20.1


* [Intel-gfx] [PATCH 08/12] drm/i915/gt: Kick virtual siblings on timeslice out
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
                   ` (5 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing Chris Wilson
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtual request for the preemption,
those other tasklets will see that the virtual request is still inflight
on sibling[0] and leave it be. Therefore, when we finally schedule out
the virtual request, if we see that we have passed it back to the
virtual engine, reschedule the virtual tasklet so that it may be
resubmitted on any of the siblings.
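
The race being closed, as a timeline sketch:

  /*
   * 1. sibling[0] decides to timeslice out the active virtual request
   *    and unsubmits it (ve->context.inflight still == sibling[0]).
   * 2. The virtual tasklet and the other sibling tasklets run, see the
   *    request apparently still inflight on sibling[0], and leave it.
   * 3. sibling[0] completes the schedule-out, handing the request back
   *    to the virtual engine -- with no tasklet left to resubmit it.
   *
   * Hence kick_siblings() now also fires when next == rq.
   */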

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 8998fe1f739d..35e7ae3c049c 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1405,7 +1405,7 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 	struct i915_request *next = READ_ONCE(ve->request);
 
-	if (next && next->execution_mask & ~rq->execution_mask)
+	if (next == rq || (next && next->execution_mask & ~rq->execution_mask))
 		tasklet_hi_schedule(&ve->base.execlists.tasklet);
 }
 
-- 
2.20.1


* [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
                   ` (6 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 08/12] drm/i915/gt: Kick virtual siblings on timeslice out Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  9:40   ` Tvrtko Ursulin
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Use virtual_engine during execlists_dequeue Chris Wilson
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

It was quite an oversight to factor in only the normal queue when
deciding the timeslicing switch priority. By leaving the next virtual
request out of the priority decision, we would not timeslice the
current engine if there was an available virtual request.
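
Condensed from the diff below, the hint now also considers the next
virtual request (ve_rq here stands for the READ_ONCE(ve->request)
lookup, guarded as in the patch):

  hint = engine->execlists.queue_priority_hint;
  if (ve_rq)
          hint = max(hint, rq_prio(ve_rq));
  if (!list_is_last(&rq->sched.link, &engine->active.requests))
          hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));

  return hint >= effective_prio(rq); /* worth a timeslice switch? */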

Testcase: igt/gem_exec_balancer/sliced
Fixes: 3df2deed411e ("drm/i915/execlists: Enable timeslice on partial virtual engine dequeue")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 30 +++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 35e7ae3c049c..42cb0cae2845 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1898,7 +1898,8 @@ static void defer_active(struct intel_engine_cs *engine)
 
 static bool
 need_timeslice(const struct intel_engine_cs *engine,
-	       const struct i915_request *rq)
+	       const struct i915_request *rq,
+	       const struct rb_node *rb)
 {
 	int hint;
 
@@ -1906,6 +1907,24 @@ need_timeslice(const struct intel_engine_cs *engine,
 		return false;
 
 	hint = engine->execlists.queue_priority_hint;
+
+	if (rb) {
+		const struct virtual_engine *ve =
+			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+		const struct intel_engine_cs *inflight =
+			intel_context_inflight(&ve->context);
+
+		if (!inflight || inflight == engine) {
+			struct i915_request *next;
+
+			rcu_read_lock();
+			next = READ_ONCE(ve->request);
+			if (next)
+				hint = max(hint, rq_prio(next));
+			rcu_read_unlock();
+		}
+	}
+
 	if (!list_is_last(&rq->sched.link, &engine->active.requests))
 		hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
 
@@ -1980,10 +1999,9 @@ static void set_timeslice(struct intel_engine_cs *engine)
 	set_timer_ms(&engine->execlists.timer, duration);
 }
 
-static void start_timeslice(struct intel_engine_cs *engine)
+static void start_timeslice(struct intel_engine_cs *engine, int prio)
 {
 	struct intel_engine_execlists *execlists = &engine->execlists;
-	const int prio = queue_prio(execlists);
 	unsigned long duration;
 
 	if (!intel_engine_has_timeslices(engine))
@@ -2143,7 +2161,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			__unwind_incomplete_requests(engine);
 
 			last = NULL;
-		} else if (need_timeslice(engine, last) &&
+		} else if (need_timeslice(engine, last, rb) &&
 			   timeslice_expired(execlists, last)) {
 			if (i915_request_completed(last)) {
 				tasklet_hi_schedule(&execlists->tasklet);
@@ -2191,7 +2209,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				 * Even if ELSP[1] is occupied and not worthy
 				 * of timeslices, our queue might be.
 				 */
-				start_timeslice(engine);
+				start_timeslice(engine, queue_prio(execlists));
 				return;
 			}
 		}
@@ -2226,7 +2244,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 
 			if (last && !can_merge_rq(last, rq)) {
 				spin_unlock(&ve->base.active.lock);
-				start_timeslice(engine);
+				start_timeslice(engine, rq_prio(rq));
 				return; /* leave this for another sibling */
 			}
 
-- 
2.20.1


* [Intel-gfx] [PATCH 10/12] drm/i915/gt: Use virtual_engine during execlists_dequeue
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when suppressing the reschedule Chris Wilson
                   ` (7 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 11/12] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Rather than going back and forth between the rb_node entry and the
virtual_engine type, store ve in a local and reuse it. As the
container_of() conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves off a bit
of code.

v2: Keep a single virtual engine lookup, for typical use.
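
The resulting shape of the dequeue, condensed from the diff below: one
lookup up front, with the same ve pointer threaded through the
decisions:

  struct virtual_engine *ve = first_virtual_engine(engine); /* once */

  if (need_preempt(engine, last, ve)) {
          /* ... preempt ... */
  } else if (need_timeslice(engine, last, ve) &&
             timeslice_expired(execlists, last)) {
          /* ... timeslice ... */
  }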

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 217 +++++++++++++---------------
 1 file changed, 104 insertions(+), 113 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 42cb0cae2845..fe9e31a25a76 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -454,7 +454,7 @@ static int queue_prio(const struct intel_engine_execlists *execlists)
 
 static inline bool need_preempt(const struct intel_engine_cs *engine,
 				const struct i915_request *rq,
-				struct rb_node *rb)
+				struct virtual_engine *ve)
 {
 	int last_prio;
 
@@ -491,9 +491,7 @@ static inline bool need_preempt(const struct intel_engine_cs *engine,
 	    rq_prio(list_next_entry(rq, sched.link)) > last_prio)
 		return true;
 
-	if (rb) {
-		struct virtual_engine *ve =
-			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+	if (ve) {
 		bool preempt = false;
 
 		if (engine == ve->siblings[0]) { /* only preempt one sibling */
@@ -1815,6 +1813,35 @@ static bool virtual_matches(const struct virtual_engine *ve,
 	return true;
 }
 
+static struct virtual_engine *
+first_virtual_engine(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists *el = &engine->execlists;
+	struct rb_node *rb = rb_first_cached(&el->virtual);
+
+	while (rb) {
+		struct virtual_engine *ve =
+			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+		struct i915_request *rq = READ_ONCE(ve->request);
+
+		if (!rq) { /* lazily cleanup after another engine handled rq */
+			rb_erase_cached(rb, &el->virtual);
+			RB_CLEAR_NODE(rb);
+			rb = rb_first_cached(&el->virtual);
+			continue;
+		}
+
+		if (!virtual_matches(ve, rq, engine)) {
+			rb = rb_next(rb);
+			continue;
+		}
+
+		return ve;
+	}
+
+	return NULL;
+}
+
 static void virtual_xfer_breadcrumbs(struct virtual_engine *ve)
 {
 	/*
@@ -1899,7 +1926,7 @@ static void defer_active(struct intel_engine_cs *engine)
 static bool
 need_timeslice(const struct intel_engine_cs *engine,
 	       const struct i915_request *rq,
-	       const struct rb_node *rb)
+	       struct virtual_engine *ve)
 {
 	int hint;
 
@@ -1908,9 +1935,7 @@ need_timeslice(const struct intel_engine_cs *engine,
 
 	hint = engine->execlists.queue_priority_hint;
 
-	if (rb) {
-		const struct virtual_engine *ve =
-			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+	if (ve) {
 		const struct intel_engine_cs *inflight =
 			intel_context_inflight(&ve->context);
 
@@ -2060,7 +2085,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct i915_request **port = execlists->pending;
 	struct i915_request ** const last_port = port + execlists->port_mask;
-	struct i915_request * const *active;
+	struct i915_request * const *active = READ_ONCE(execlists->active);
+	struct virtual_engine *ve = first_virtual_engine(engine);
 	struct i915_request *last;
 	struct rb_node *rb;
 	bool submit = false;
@@ -2087,26 +2113,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	 * and context switches) submission.
 	 */
 
-	for (rb = rb_first_cached(&execlists->virtual); rb; ) {
-		struct virtual_engine *ve =
-			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
-		struct i915_request *rq = READ_ONCE(ve->request);
-
-		if (!rq) { /* lazily cleanup after another engine handled rq */
-			rb_erase_cached(rb, &execlists->virtual);
-			RB_CLEAR_NODE(rb);
-			rb = rb_first_cached(&execlists->virtual);
-			continue;
-		}
-
-		if (!virtual_matches(ve, rq, engine)) {
-			rb = rb_next(rb);
-			continue;
-		}
-
-		break;
-	}
-
 	/*
 	 * If the queue is higher priority than the last
 	 * request in the currently active context, submit afresh.
@@ -2114,10 +2120,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	 * the active context to interject the preemption request,
 	 * i.e. we will retrigger preemption following the ack in case
 	 * of trouble.
-	 */
-	active = READ_ONCE(execlists->active);
-
-	/*
+	 *
 	 * In theory we can skip over completed contexts that have not
 	 * yet been processed by events (as those events are in flight):
 	 *
@@ -2128,9 +2131,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	 * find itself trying to jump back into a context it has just
 	 * completed and barf.
 	 */
-
 	if ((last = *active)) {
-		if (need_preempt(engine, last, rb)) {
+		if (need_preempt(engine, last, ve)) {
 			if (i915_request_completed(last)) {
 				tasklet_hi_schedule(&execlists->tasklet);
 				return;
@@ -2161,7 +2163,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			__unwind_incomplete_requests(engine);
 
 			last = NULL;
-		} else if (need_timeslice(engine, last, rb) &&
+		} else if (need_timeslice(engine, last, ve) &&
 			   timeslice_expired(execlists, last)) {
 			if (i915_request_completed(last)) {
 				tasklet_hi_schedule(&execlists->tasklet);
@@ -2215,110 +2217,99 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		}
 	}
 
-	while (rb) { /* XXX virtual is always taking precedence */
-		struct virtual_engine *ve =
-			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+	while (ve) { /* XXX virtual is always taking precedence */
 		struct i915_request *rq;
 
 		spin_lock(&ve->base.active.lock);
 
 		rq = ve->request;
-		if (unlikely(!rq)) { /* lost the race to a sibling */
-			spin_unlock(&ve->base.active.lock);
-			rb_erase_cached(rb, &execlists->virtual);
-			RB_CLEAR_NODE(rb);
-			rb = rb_first_cached(&execlists->virtual);
-			continue;
-		}
+		if (unlikely(!rq)) /* lost the race to a sibling */
+			goto unlock;
 
 		GEM_BUG_ON(rq != ve->request);
 		GEM_BUG_ON(rq->engine != &ve->base);
 		GEM_BUG_ON(rq->context != &ve->context);
 
-		if (rq_prio(rq) >= queue_prio(execlists)) {
-			if (!virtual_matches(ve, rq, engine)) {
-				spin_unlock(&ve->base.active.lock);
-				rb = rb_next(rb);
-				continue;
-			}
+		if (unlikely(rq_prio(rq) < queue_prio(execlists))) {
+			spin_unlock(&ve->base.active.lock);
+			break;
+		}
 
-			if (last && !can_merge_rq(last, rq)) {
-				spin_unlock(&ve->base.active.lock);
-				start_timeslice(engine, rq_prio(rq));
-				return; /* leave this for another sibling */
-			}
+		GEM_BUG_ON(!virtual_matches(ve, rq, engine));
 
-			ENGINE_TRACE(engine,
-				     "virtual rq=%llx:%lld%s, new engine? %s\n",
-				     rq->fence.context,
-				     rq->fence.seqno,
-				     i915_request_completed(rq) ? "!" :
-				     i915_request_started(rq) ? "*" :
-				     "",
-				     yesno(engine != ve->siblings[0]));
-
-			WRITE_ONCE(ve->request, NULL);
-			WRITE_ONCE(ve->base.execlists.queue_priority_hint,
-				   INT_MIN);
-			rb_erase_cached(rb, &execlists->virtual);
-			RB_CLEAR_NODE(rb);
+		if (last && !can_merge_rq(last, rq)) {
+			spin_unlock(&ve->base.active.lock);
+			start_timeslice(engine, rq_prio(rq));
+			return; /* leave this for another sibling */
+		}
 
-			GEM_BUG_ON(!(rq->execution_mask & engine->mask));
-			WRITE_ONCE(rq->engine, engine);
+		ENGINE_TRACE(engine,
+			     "virtual rq=%llx:%lld%s, new engine? %s\n",
+			     rq->fence.context,
+			     rq->fence.seqno,
+			     i915_request_completed(rq) ? "!" :
+			     i915_request_started(rq) ? "*" :
+			     "",
+			     yesno(engine != ve->siblings[0]));
 
-			if (engine != ve->siblings[0]) {
-				u32 *regs = ve->context.lrc_reg_state;
-				unsigned int n;
+		WRITE_ONCE(ve->request, NULL);
+		WRITE_ONCE(ve->base.execlists.queue_priority_hint,
+			   INT_MIN);
 
-				GEM_BUG_ON(READ_ONCE(ve->context.inflight));
+		rb = &ve->nodes[engine->id].rb;
+		rb_erase_cached(rb, &execlists->virtual);
+		RB_CLEAR_NODE(rb);
 
-				if (!intel_engine_has_relative_mmio(engine))
-					virtual_update_register_offsets(regs,
-									engine);
+		GEM_BUG_ON(!(rq->execution_mask & engine->mask));
+		WRITE_ONCE(rq->engine, engine);
 
-				if (!list_empty(&ve->context.signals))
-					virtual_xfer_breadcrumbs(ve);
+		if (engine != ve->siblings[0]) {
+			u32 *regs = ve->context.lrc_reg_state;
+			unsigned int n;
 
-				/*
-				 * Move the bound engine to the top of the list
-				 * for future execution. We then kick this
-				 * tasklet first before checking others, so that
-				 * we preferentially reuse this set of bound
-				 * registers.
-				 */
-				for (n = 1; n < ve->num_siblings; n++) {
-					if (ve->siblings[n] == engine) {
-						swap(ve->siblings[n],
-						     ve->siblings[0]);
-						break;
-					}
-				}
+			GEM_BUG_ON(READ_ONCE(ve->context.inflight));
 
-				GEM_BUG_ON(ve->siblings[0] != engine);
-			}
+			if (!intel_engine_has_relative_mmio(engine))
+				virtual_update_register_offsets(regs,
+								engine);
 
-			if (__i915_request_submit(rq)) {
-				submit = true;
-				last = rq;
-			}
-			i915_request_put(rq);
+			if (!list_empty(&ve->context.signals))
+				virtual_xfer_breadcrumbs(ve);
 
 			/*
-			 * Hmm, we have a bunch of virtual engine requests,
-			 * but the first one was already completed (thanks
-			 * preempt-to-busy!). Keep looking at the veng queue
-			 * until we have no more relevant requests (i.e.
-			 * the normal submit queue has higher priority).
+			 * Move the bound engine to the top of the list for
+			 * future execution. We then kick this tasklet first
+			 * before checking others, so that we preferentially
+			 * reuse this set of bound registers.
 			 */
-			if (!submit) {
-				spin_unlock(&ve->base.active.lock);
-				rb = rb_first_cached(&execlists->virtual);
-				continue;
+			for (n = 1; n < ve->num_siblings; n++) {
+				if (ve->siblings[n] == engine) {
+					swap(ve->siblings[n],
+					     ve->siblings[0]);
+					break;
+				}
 			}
+
+			GEM_BUG_ON(ve->siblings[0] != engine);
 		}
 
+		if (__i915_request_submit(rq)) {
+			submit = true;
+			last = rq;
+		}
+
+		i915_request_put(rq);
+unlock:
 		spin_unlock(&ve->base.active.lock);
-		break;
+
+		/*
+		 * Hmm, we have a bunch of virtual engine requests,
+		 * but the first one was already completed (thanks
+		 * preempt-to-busy!). Keep looking at the veng queue
+		 * until we have no more relevant requests (i.e.
+		 * the normal submit queue has higher priority).
+		 */
+		ve = submit ? NULL : first_virtual_engine(engine);
 	}
 
 	while ((rb = rb_first_cached(&execlists->queue))) {
-- 
2.20.1


* [Intel-gfx] [PATCH 11/12] drm/i915/gt: Decouple inflight virtual engines
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (8 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Use virtual_engine during execlists_dequeue Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 12/12] drm/i915/gt: Resubmit the virtual engine on schedule-out Chris Wilson
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We cannot rebind
the context to a new sibling while it is inflight, as the context save
would conflict, hence we wait. Since we cannot use any other sibling
while the context is inflight, only kick the bound sibling while it is
inflight, and upon scheduling out kick the rest (so that we can swap
engines on timeslicing if the previously bound engine becomes
oversubscribed).
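
As a rough sketch of that policy (illustrative only: ve_kick_siblings()
and kick_engine() are made-up stand-ins; the real logic is the early
break added to virtual_submission_tasklet() below):

  /*
   * Sketch only: while the context is inflight it stays bound to one
   * sibling, so there is no point poking the other siblings until the
   * last active request has been scheduled out.
   */
  static void ve_kick_siblings(struct virtual_engine *ve)
  {
          unsigned int n;

          for (n = 0; n < ve->num_siblings; n++) {
                  kick_engine(ve->siblings[n]); /* hypothetical helper */

                  if (intel_context_inflight(&ve->context))
                          break; /* bound: stop at the inflight engine */
          }
  }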

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 30 +++++++++++++----------------
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index fe9e31a25a76..7eee35edfb42 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1401,9 +1401,8 @@ execlists_schedule_in(struct i915_request *rq, int idx)
 static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 {
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
-	struct i915_request *next = READ_ONCE(ve->request);
 
-	if (next == rq || (next && next->execution_mask & ~rq->execution_mask))
+	if (READ_ONCE(ve->request))
 		tasklet_hi_schedule(&ve->base.execlists.tasklet);
 }
 
@@ -1824,18 +1823,14 @@ first_virtual_engine(struct intel_engine_cs *engine)
 			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
 		struct i915_request *rq = READ_ONCE(ve->request);
 
-		if (!rq) { /* lazily cleanup after another engine handled rq */
+		/* lazily cleanup after another engine handled rq */
+		if (!rq || !virtual_matches(ve, rq, engine)) {
 			rb_erase_cached(rb, &el->virtual);
 			RB_CLEAR_NODE(rb);
 			rb = rb_first_cached(&el->virtual);
 			continue;
 		}
 
-		if (!virtual_matches(ve, rq, engine)) {
-			rb = rb_next(rb);
-			continue;
-		}
-
 		return ve;
 	}
 
@@ -5471,7 +5466,6 @@ static void virtual_submission_tasklet(unsigned long data)
 	if (unlikely(!mask))
 		return;
 
-	local_irq_disable();
 	for (n = 0; n < ve->num_siblings; n++) {
 		struct intel_engine_cs *sibling = READ_ONCE(ve->siblings[n]);
 		struct ve_node * const node = &ve->nodes[sibling->id];
@@ -5481,20 +5475,19 @@ static void virtual_submission_tasklet(unsigned long data)
 		if (!READ_ONCE(ve->request))
 			break; /* already handled by a sibling's tasklet */
 
+		spin_lock_irq(&sibling->active.lock);
+
 		if (unlikely(!(mask & sibling->mask))) {
 			if (!RB_EMPTY_NODE(&node->rb)) {
-				spin_lock(&sibling->active.lock);
 				rb_erase_cached(&node->rb,
 						&sibling->execlists.virtual);
 				RB_CLEAR_NODE(&node->rb);
-				spin_unlock(&sibling->active.lock);
 			}
-			continue;
-		}
 
-		spin_lock(&sibling->active.lock);
+			goto unlock_engine;
+		}
 
-		if (!RB_EMPTY_NODE(&node->rb)) {
+		if (unlikely(!RB_EMPTY_NODE(&node->rb))) {
 			/*
 			 * Cheat and avoid rebalancing the tree if we can
 			 * reuse this node in situ.
@@ -5534,9 +5527,12 @@ static void virtual_submission_tasklet(unsigned long data)
 		if (first && prio > sibling->execlists.queue_priority_hint)
 			tasklet_hi_schedule(&sibling->execlists.tasklet);
 
-		spin_unlock(&sibling->active.lock);
+unlock_engine:
+		spin_unlock_irq(&sibling->active.lock);
+
+		if (intel_context_inflight(&ve->context))
+			break;
 	}
-	local_irq_enable();
 }
 
 static void virtual_submit_request(struct i915_request *rq)
-- 
2.20.1


* [Intel-gfx] [PATCH 12/12] drm/i915/gt: Resubmit the virtual engine on schedule-out
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (9 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 11/12] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
@ 2020-05-19  6:31 ` Chris Wilson
  2020-05-19  7:15 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Patchwork
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2020-05-19  6:31 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Having recognised that we do not change the sibling until we schedule
out, we can then defer the decision to resubmit the virtual engine from
the unwind of the active queue to scheduling out of the virtual context.

By keeping the unwind order intact on the local engine, we can preserve
data dependency ordering while doing a preempt-to-busy pass until we
have determined the new ELSP. This means that if we try to timeslice
between a virtual engine and a data-dependent ordinary request, the pair
will maintain their relative ordering and we will avoid the
resubmission, cancelling the timeslicing until further change.

The dilemma, though, is that we may then end up in a situation where the
'demotion' of the virtual request to an ordinary request in the engine
queue results in filling the ELSP[] with virtual requests instead of
spreading the load across the engines. To compensate, we mark each
virtual request and refuse to resubmit a virtual request in the
secondary ELSP slots, thus forcing subsequent virtual requests to be
scheduled out after timeslicing. By delaying the decision until we
schedule out, we avoid unnecessary resubmission.
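
A condensed sketch of the resulting ELSP placement rule (the helper name
is invented for this example; in the patch the check is open-coded in
execlists_dequeue() and assert_pending_valid()):

  /*
   * Sketch only: a request whose execution_mask is wider than this
   * engine's mask may still migrate between siblings, so it is only
   * allowed into ELSP[0], from where it can be moved promptly on
   * schedule-out.
   */
  static bool may_take_secondary_slot(const struct i915_request *rq,
                                      const struct intel_engine_cs *engine)
  {
          return rq->execution_mask == engine->mask;
  }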

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c    | 99 ++++++++++++++++----------
 drivers/gpu/drm/i915/gt/selftest_lrc.c |  2 +-
 2 files changed, 62 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 7eee35edfb42..f85a2be92dc6 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1117,46 +1117,17 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 
 		__i915_request_unsubmit(rq);
 
-		/*
-		 * Push the request back into the queue for later resubmission.
-		 * If this request is not native to this physical engine (i.e.
-		 * it came from a virtual source), push it back onto the virtual
-		 * engine so that it can be moved across onto another physical
-		 * engine as load dictates.
-		 */
-		if (likely(rq->execution_mask == engine->mask)) {
-			GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
-			if (rq_prio(rq) != prio) {
-				prio = rq_prio(rq);
-				pl = i915_sched_lookup_priolist(engine, prio);
-			}
-			GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
-
-			list_move(&rq->sched.link, pl);
-			set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+		GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
+		if (rq_prio(rq) != prio) {
+			prio = rq_prio(rq);
+			pl = i915_sched_lookup_priolist(engine, prio);
+		}
+		GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
 
-			active = rq;
-		} else {
-			struct intel_engine_cs *owner = rq->context->engine;
+		list_move(&rq->sched.link, pl);
+		set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
 
-			/*
-			 * Decouple the virtual breadcrumb before moving it
-			 * back to the virtual engine -- we don't want the
-			 * request to complete in the background and try
-			 * and cancel the breadcrumb on the virtual engine
-			 * (instead of the old engine where it is linked)!
-			 */
-			if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-				     &rq->fence.flags)) {
-				spin_lock_nested(&rq->lock,
-						 SINGLE_DEPTH_NESTING);
-				i915_request_cancel_breadcrumb(rq);
-				spin_unlock(&rq->lock);
-			}
-			WRITE_ONCE(rq->engine, owner);
-			owner->submit_request(rq);
-			active = NULL;
-		}
+		active = rq;
 	}
 
 	return active;
@@ -1398,12 +1369,41 @@ execlists_schedule_in(struct i915_request *rq, int idx)
 	return i915_request_get(rq);
 }
 
+static void
+resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
+{
+	/*
+	 * Decouple the virtual breadcrumb before moving it back to the virtual
+	 * engine -- we don't want the request to complete in the background
+	 * and then try and cancel the breadcrumb on the virtual engine
+	 * (instead of the old engine where it is linked)!
+	 */
+	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &rq->fence.flags)) {
+		spin_lock_nested(&rq->lock, SINGLE_DEPTH_NESTING);
+		i915_request_cancel_breadcrumb(rq);
+		spin_unlock(&rq->lock);
+	}
+
+	WRITE_ONCE(rq->engine, &ve->base);
+	ve->base.submit_request(rq);
+}
+
 static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 {
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 
 	if (READ_ONCE(ve->request))
 		tasklet_hi_schedule(&ve->base.execlists.tasklet);
+
+	/*
+	 * This engine is now too busy to run this virtual request, so
+	 * see if we can find an alternative engine for it to execute on.
+	 * Once a request has become bonded to this engine, we treat it the
+	 * same as any other native request.
+	 */
+	if (i915_request_in_priority_queue(rq) &&
+	    rq->execution_mask != rq->engine->mask)
+		resubmit_virtual_request(rq, ve);
 }
 
 static inline void
@@ -1649,6 +1649,20 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
 			return false;
 		}
 
+		/*
+		 * We want virtual requests to only be in the first slot so
+		 * that they are never stuck behind a hog and can be immediately
+		 * transferred onto the next idle engine.
+		 */
+		if (rq->execution_mask != engine->mask &&
+		    port != execlists->pending) {
+			GEM_TRACE_ERR("%s: virtual engine:%llx not in prime position[%zd]\n",
+				      engine->name,
+				      ce->timeline->fence_context,
+				      port - execlists->pending);
+			return false;
+		}
+
 		/* Hold tightly onto the lock to prevent concurrent retires! */
 		if (!spin_trylock_irqsave(&rq->lock, flags))
 			continue;
@@ -2346,6 +2360,15 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				if (i915_request_has_sentinel(last))
 					goto done;
 
+				/*
+				 * We avoid submitting virtual requests into
+				 * the secondary ports so that we can migrate
+				 * the request immediately to another engine
+				 * rather than wait for the primary request.
+				 */
+				if (rq->execution_mask != engine->mask)
+					goto done;
+
 				/*
 				 * If GVT overrides us we only ever submit
 				 * port[0], leaving port[1] empty. Note that we
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index ef38dd52945c..c3d722840e2d 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -4384,7 +4384,7 @@ static int reset_virtual_engine(struct intel_gt *gt,
 	spin_lock_irq(&engine->active.lock);
 	__unwind_incomplete_requests(engine);
 	spin_unlock_irq(&engine->active.lock);
-	GEM_BUG_ON(rq->engine != ve->engine);
+	GEM_BUG_ON(rq->engine != engine);
 
 	/* Reset the engine while keeping our active request on hold */
 	execlists_hold(engine, rq);
-- 
2.20.1


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (10 preceding siblings ...)
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 12/12] drm/i915/gt: Resubmit the virtual engine on schedule-out Chris Wilson
@ 2020-05-19  7:15 ` Patchwork
  2020-05-19  7:16 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Patchwork @ 2020-05-19  7:15 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
364ab8bd9968 drm/i915: Don't set queue-priority hint when supressing the reschedule
-:10: WARNING:TYPO_SPELLING: 'runnning' may be misspelled - perhaps 'running'?
#10: 
the HW runnning with only the inflight request.

total: 0 errors, 1 warnings, 0 checks, 28 lines checked
1d8f35d7152c drm/i915/selftests: Change priority overflow detection
160ec59b6ea7 drm/i915/selftests: Restore to default heartbeat
447b4e777a40 drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()
841b9993f566 drm/i915/execlists: Shortcircuit queue_prio() for no internal levels
1ebfae936ba7 drm/i915: Move saturated workload detection back to the context
-:22: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#22: 
References: 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")

-:22: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")'
#22: 
References: 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")

total: 1 errors, 1 warnings, 0 checks, 68 lines checked
dabfa4752751 drm/i915/selftests: Add tests for timeslicing virtual engines
0bbb9df18350 drm/i915/gt: Kick virtual siblings on timeslice out
be9b4c6a092f drm/i915/gt: Incorporate the virtual engine into timeslicing
05075f82f723 drm/i915/gt: Use virtual_engine during execlists_dequeue
36097b3facaf drm/i915/gt: Decouple inflight virtual engines
2dd47ed24108 drm/i915/gt: Resubmit the virtual engine on schedule-out


* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (11 preceding siblings ...)
  2020-05-19  7:15 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Patchwork
@ 2020-05-19  7:16 ` Patchwork
  2020-05-19  7:40 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 22+ messages in thread
From: Patchwork @ 2020-05-19  7:16 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display.c:1222:22: error: Expected constant expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1225:22: error: Expected constant expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1228:22: error: Expected constant expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1231:22: error: Expected constant expression in case statement
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2274:17: error: bad integer constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2275:17: error: bad integer constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2276:17: error: bad integer constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2277:17: error: bad integer constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2278:17: error: bad integer constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2279:17: error: bad integer constant expression
+drivers/gpu/drm/i915/gt/intel_reset.c:1310:5: warning: context imbalance in 'intel_gt_reset_trylock' - different lock contexts for basic block
+drivers/gpu/drm/i915/gt/sysfs_engines.c:61:10: error: bad integer constant expression
+drivers/gpu/drm/i915/gt/sysfs_engines.c:62:10: error: bad integer constant expression
+drivers/gpu/drm/i915/gt/sysfs_engines.c:66:10: error: bad integer constant expression
+drivers/gpu/drm/i915/gvt/mmio.c:287:23: warning: memcpy with byte count of 279040
+drivers/gpu/drm/i915/i915_perf.c:1425:15: warning: memset with byte count of 16777216
+drivers/gpu/drm/i915/i915_perf.c:1479:15: warning: memset with byte count of 16777216
+./include/linux/compiler.h:199:9: warning: context imbalance in 'engines_sample' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen11_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen12_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen8_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen8_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen8_write8' - different lock contexts for basic block


* [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (12 preceding siblings ...)
  2020-05-19  7:16 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2020-05-19  7:40 ` Patchwork
  2020-05-19  9:21 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  2020-05-19 13:38 ` [Intel-gfx] [PATCH 01/12] " Mika Kuoppala
  15 siblings, 0 replies; 22+ messages in thread
From: Patchwork @ 2020-05-19  7:40 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8498 -> Patchwork_17707
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/index.html

Known issues
------------

  Here are the changes found in Patchwork_17707 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@gt_mocs:
    - fi-bwr-2160:        [PASS][1] -> [INCOMPLETE][2] ([i915#489])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/fi-bwr-2160/igt@i915_selftest@live@gt_mocs.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/fi-bwr-2160/igt@i915_selftest@live@gt_mocs.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1803]: https://gitlab.freedesktop.org/drm/intel/issues/1803
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489


Participating hosts (52 -> 44)
------------------------------

  Missing    (8): fi-kbl-soraka fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_8498 -> Patchwork_17707

  CI-20190529: 20190529
  CI_DRM_8498: 1493c649ae92207a758afa50a639275bd6c80e2e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5659: 66ab5e42811fee3dea8c21ab29e70e323a0650de @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17707: 2dd47ed241085a15e7df5e1f79ac259a94eff274 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

2dd47ed24108 drm/i915/gt: Resubmit the virtual engine on schedule-out
36097b3facaf drm/i915/gt: Decouple inflight virtual engines
05075f82f723 drm/i915/gt: Use virtual_engine during execlists_dequeue
be9b4c6a092f drm/i915/gt: Incorporate the virtual engine into timeslicing
0bbb9df18350 drm/i915/gt: Kick virtual siblings on timeslice out
dabfa4752751 drm/i915/selftests: Add tests for timeslicing virtual engines
1ebfae936ba7 drm/i915: Move saturated workload detection back to the context
841b9993f566 drm/i915/execlists: Shortcircuit queue_prio() for no internal levels
447b4e777a40 drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()
160ec59b6ea7 drm/i915/selftests: Restore to default heartbeat
1d8f35d7152c drm/i915/selftests: Change priority overflow detection
364ab8bd9968 drm/i915: Don't set queue-priority hint when supressing the reschedule

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/index.html

* [Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (13 preceding siblings ...)
  2020-05-19  7:40 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2020-05-19  9:21 ` Patchwork
  2020-05-19 13:38 ` [Intel-gfx] [PATCH 01/12] " Mika Kuoppala
  15 siblings, 0 replies; 22+ messages in thread
From: Patchwork @ 2020-05-19  9:21 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8498_full -> Patchwork_17707_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_17707_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17707_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_17707_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_exec_balancer@semaphore:
    - shard-tglb:         [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-tglb8/igt@gem_exec_balancer@semaphore.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-tglb5/igt@gem_exec_balancer@semaphore.html

  * igt@runner@aborted:
    - shard-tglb:         NOTRUN -> [FAIL][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-tglb5/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_17707_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
    - shard-apl:          [PASS][4] -> [DMESG-WARN][5] ([i915#180]) +2 similar issues
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl1/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl6/igt@kms_cursor_crc@pipe-b-cursor-suspend.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-skl:          [PASS][6] -> [FAIL][7] ([IGT#5])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl3/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor-toggle:
    - shard-skl:          [PASS][8] -> [FAIL][9] ([IGT#5] / [i915#697])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl6/igt@kms_cursor_legacy@flip-vs-cursor-toggle.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl8/igt@kms_cursor_legacy@flip-vs-cursor-toggle.html

  * igt@kms_flip_tiling@flip-changes-tiling:
    - shard-apl:          [PASS][10] -> [FAIL][11] ([i915#95])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl8/igt@kms_flip_tiling@flip-changes-tiling.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl8/igt@kms_flip_tiling@flip-changes-tiling.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [PASS][12] -> [FAIL][13] ([i915#1188])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl5/igt@kms_hdr@bpc-switch-suspend.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl8/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
    - shard-skl:          [PASS][14] -> [FAIL][15] ([fdo#108145] / [i915#265]) +1 similar issue
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl3/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html

  * igt@kms_psr@psr2_sprite_render:
    - shard-iclb:         [PASS][16] -> [SKIP][17] ([fdo#109441]) +2 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-iclb2/igt@kms_psr@psr2_sprite_render.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-iclb6/igt@kms_psr@psr2_sprite_render.html

  
#### Possible fixes ####

  * igt@gen9_exec_parse@allowed-all:
    - shard-apl:          [DMESG-WARN][18] ([i915#1436] / [i915#716]) -> [PASS][19]
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl1/igt@gen9_exec_parse@allowed-all.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl4/igt@gen9_exec_parse@allowed-all.html

  * igt@i915_selftest@live@execlists:
    - shard-skl:          [INCOMPLETE][20] ([i915#1795] / [i915#1874]) -> [PASS][21]
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl2/igt@i915_selftest@live@execlists.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl4/igt@i915_selftest@live@execlists.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-kbl:          [DMESG-WARN][22] ([i915#180]) -> [PASS][23] +3 similar issues
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-kbl2/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-kbl7/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * {igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a1}:
    - shard-glk:          [FAIL][24] ([i915#79]) -> [PASS][25]
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-glk9/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a1.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-glk8/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a1.html

  * {igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1}:
    - shard-skl:          [FAIL][26] ([i915#79]) -> [PASS][27]
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl1/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl6/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html

  * {igt@kms_flip@flip-vs-suspend-interruptible@a-dp1}:
    - shard-apl:          [DMESG-WARN][28] ([i915#180]) -> [PASS][29]
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_psr@no_drrs:
    - shard-iclb:         [FAIL][30] ([i915#173]) -> [PASS][31]
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-iclb1/igt@kms_psr@no_drrs.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-iclb6/igt@kms_psr@no_drrs.html

  * igt@kms_psr@psr2_suspend:
    - shard-iclb:         [SKIP][32] ([fdo#109441]) -> [PASS][33] +1 similar issue
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-iclb7/igt@kms_psr@psr2_suspend.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-iclb2/igt@kms_psr@psr2_suspend.html

  * {igt@perf@polling-parameterized}:
    - shard-tglb:         [FAIL][34] ([i915#1542]) -> [PASS][35]
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-tglb1/igt@perf@polling-parameterized.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-tglb5/igt@perf@polling-parameterized.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc6-dpms:
    - shard-snb:          [SKIP][36] ([fdo#109271]) -> [INCOMPLETE][37] ([i915#82])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-snb2/igt@i915_pm_dc@dc6-dpms.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-snb2/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-tglb:         [SKIP][38] ([i915#468]) -> [FAIL][39] ([i915#454])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-tglb2/igt@i915_pm_dc@dc6-psr.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-tglb3/igt@i915_pm_dc@dc6-psr.html

  * igt@kms_content_protection@atomic:
    - shard-apl:          [DMESG-FAIL][40] ([fdo#110321]) -> [FAIL][41] ([fdo#110321] / [fdo#110336])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl7/igt@kms_content_protection@atomic.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl3/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@legacy:
    - shard-apl:          [FAIL][42] ([fdo#110321] / [fdo#110336]) -> [TIMEOUT][43] ([i915#1319])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl6/igt@kms_content_protection@legacy.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl7/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@lic:
    - shard-kbl:          [FAIL][44] ([fdo#110321]) -> [TIMEOUT][45] ([i915#1319])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-kbl3/igt@kms_content_protection@lic.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-kbl2/igt@kms_content_protection@lic.html

  * igt@kms_content_protection@srm:
    - shard-apl:          [TIMEOUT][46] ([i915#1319]) -> [FAIL][47] ([fdo#110321])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl8/igt@kms_content_protection@srm.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl1/igt@kms_content_protection@srm.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [IGT#5]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/5
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110321]: https://bugs.freedesktop.org/show_bug.cgi?id=110321
  [fdo#110336]: https://bugs.freedesktop.org/show_bug.cgi?id=110336
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#1319]: https://gitlab.freedesktop.org/drm/intel/issues/1319
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#1542]: https://gitlab.freedesktop.org/drm/intel/issues/1542
  [i915#173]: https://gitlab.freedesktop.org/drm/intel/issues/173
  [i915#1732]: https://gitlab.freedesktop.org/drm/intel/issues/1732
  [i915#1795]: https://gitlab.freedesktop.org/drm/intel/issues/1795
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#1883]: https://gitlab.freedesktop.org/drm/intel/issues/1883
  [i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#468]: https://gitlab.freedesktop.org/drm/intel/issues/468
  [i915#697]: https://gitlab.freedesktop.org/drm/intel/issues/697
  [i915#716]: https://gitlab.freedesktop.org/drm/intel/issues/716
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79
  [i915#82]: https://gitlab.freedesktop.org/drm/intel/issues/82
  [i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95


Participating hosts (11 -> 11)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * Linux: CI_DRM_8498 -> Patchwork_17707

  CI-20190529: 20190529
  CI_DRM_8498: 1493c649ae92207a758afa50a639275bd6c80e2e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5659: 66ab5e42811fee3dea8c21ab29e70e323a0650de @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17707: 2dd47ed241085a15e7df5e1f79ac259a94eff274 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/index.html

* Re: [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines Chris Wilson
@ 2020-05-19  9:39   ` Tvrtko Ursulin
  0 siblings, 0 replies; 22+ messages in thread
From: Tvrtko Ursulin @ 2020-05-19  9:39 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/05/2020 07:31, Chris Wilson wrote:
> Make sure that we can execute a virtual request on an already busy
> engine, and conversely that we can execute a normal request if the
> engines are already fully occupied by virtual requests.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/selftest_lrc.c | 200 ++++++++++++++++++++++++-
>   1 file changed, 197 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index f6949cd55e92..ef38dd52945c 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -3591,9 +3591,11 @@ static int nop_virtual_engine(struct intel_gt *gt,
>   	return err;
>   }
>   
> -static unsigned int select_siblings(struct intel_gt *gt,
> -				    unsigned int class,
> -				    struct intel_engine_cs **siblings)
> +static unsigned int
> +__select_siblings(struct intel_gt *gt,
> +		  unsigned int class,
> +		  struct intel_engine_cs **siblings,
> +		  bool (*filter)(const struct intel_engine_cs *))
>   {
>   	unsigned int n = 0;
>   	unsigned int inst;
> @@ -3602,12 +3604,23 @@ static unsigned int select_siblings(struct intel_gt *gt,
>   		if (!gt->engine_class[class][inst])
>   			continue;
>   
> +		if (filter && !filter(gt->engine_class[class][inst]))
> +			continue;
> +
>   		siblings[n++] = gt->engine_class[class][inst];
>   	}
>   
>   	return n;
>   }
>   
> +static unsigned int
> +select_siblings(struct intel_gt *gt,
> +		unsigned int class,
> +		struct intel_engine_cs **siblings)
> +{
> +	return __select_siblings(gt, class, siblings, NULL);
> +}
> +
>   static int live_virtual_engine(void *arg)
>   {
>   	struct intel_gt *gt = arg;
> @@ -3762,6 +3775,186 @@ static int live_virtual_mask(void *arg)
>   	return 0;
>   }
>   
> +static long slice_timeout(struct intel_engine_cs *engine)
> +{
> +	long timeout;
> +
> +	/* Enough time for a timeslice to kick in, and kick out */
> +	timeout = 2 * msecs_to_jiffies_timeout(timeslice(engine));
> +
> +	/* Enough time for the nop request to complete */
> +	timeout += HZ / 5;
> +
> +	return timeout;
> +}
> +
> +static int slicein_virtual_engine(struct intel_gt *gt,
> +				  struct intel_engine_cs **siblings,
> +				  unsigned int nsibling)
> +{
> +	const long timeout = slice_timeout(siblings[0]);
> +	struct intel_context *ce;
> +	struct i915_request *rq;
> +	struct igt_spinner spin;
> +	unsigned int n;
> +	int err = 0;
> +
> +	/*
> +	 * Virtual requests must take part in timeslicing on the target engines.
> +	 */
> +
> +	if (igt_spinner_init(&spin, gt))
> +		return -ENOMEM;
> +
> +	for (n = 0; n < nsibling; n++) {
> +		ce = intel_context_create(siblings[n]);
> +		if (IS_ERR(ce)) {
> +			err = PTR_ERR(ce);
> +			goto out;
> +		}
> +
> +		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
> +		intel_context_put(ce);
> +		if (IS_ERR(rq)) {
> +			err = PTR_ERR(rq);
> +			goto out;
> +		}
> +
> +		i915_request_add(rq);
> +	}
> +
> +	ce = intel_execlists_create_virtual(siblings, nsibling);
> +	if (IS_ERR(ce)) {
> +		err = PTR_ERR(ce);
> +		goto out;
> +	}
> +
> +	rq = intel_context_create_request(ce);
> +	intel_context_put(ce);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto out;
> +	}
> +
> +	i915_request_get(rq);
> +	i915_request_add(rq);
> +	if (i915_request_wait(rq, 0, timeout) < 0) {
> +		GEM_TRACE_ERR("%s(%s) failed to slice in virtual request\n",
> +			      __func__, rq->engine->name);
> +		GEM_TRACE_DUMP();
> +		intel_gt_set_wedged(gt);
> +		err = -EIO;
> +	}
> +	i915_request_put(rq);
> +
> +out:
> +	igt_spinner_end(&spin);
> +	if (igt_flush_test(gt->i915))
> +		err = -EIO;
> +	igt_spinner_fini(&spin);
> +	return err;
> +}
> +
> +static int sliceout_virtual_engine(struct intel_gt *gt,
> +				   struct intel_engine_cs **siblings,
> +				   unsigned int nsibling)
> +{
> +	const long timeout = slice_timeout(siblings[0]);
> +	struct intel_context *ce;
> +	struct i915_request *rq;
> +	struct igt_spinner spin;
> +	unsigned int n;
> +	int err = 0;
> +
> +	/*
> +	 * Virtual requests must allow others a fair timeslice.
> +	 */
> +
> +	if (igt_spinner_init(&spin, gt))
> +		return -ENOMEM;
> +
> +	/* XXX We do not handle oversubscription and fairness with normal rq */
> +	for (n = 0; n < nsibling; n++) {
> +		ce = intel_execlists_create_virtual(siblings, nsibling);
> +		if (IS_ERR(ce)) {
> +			err = PTR_ERR(ce);
> +			goto out;
> +		}
> +
> +		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
> +		intel_context_put(ce);
> +		if (IS_ERR(rq)) {
> +			err = PTR_ERR(rq);
> +			goto out;
> +		}
> +
> +		i915_request_add(rq);
> +	}
> +
> +	for (n = 0; !err && n < nsibling; n++) {
> +		ce = intel_context_create(siblings[n]);
> +		if (IS_ERR(ce)) {
> +			err = PTR_ERR(ce);
> +			goto out;
> +		}
> +
> +		rq = intel_context_create_request(ce);
> +		intel_context_put(ce);
> +		if (IS_ERR(rq)) {
> +			err = PTR_ERR(rq);
> +			goto out;
> +		}
> +
> +		i915_request_get(rq);
> +		i915_request_add(rq);
> +		if (i915_request_wait(rq, 0, timeout) < 0) {
> +			GEM_TRACE_ERR("%s(%s) failed to slice out virtual request\n",
> +				      __func__, siblings[n]->name);
> +			GEM_TRACE_DUMP();
> +			intel_gt_set_wedged(gt);
> +			err = -EIO;
> +		}
> +		i915_request_put(rq);
> +	}
> +
> +out:
> +	igt_spinner_end(&spin);
> +	if (igt_flush_test(gt->i915))
> +		err = -EIO;
> +	igt_spinner_fini(&spin);
> +	return err;
> +}
> +
> +static int live_virtual_slice(void *arg)
> +{
> +	struct intel_gt *gt = arg;
> +	struct intel_engine_cs *siblings[MAX_ENGINE_INSTANCE + 1];
> +	unsigned int class;
> +	int err;
> +
> +	if (intel_uc_uses_guc_submission(&gt->uc))
> +		return 0;
> +
> +	for (class = 0; class <= MAX_ENGINE_CLASS; class++) {
> +		unsigned int nsibling;
> +
> +		nsibling = __select_siblings(gt, class, siblings,
> +					     intel_engine_has_timeslices);
> +		if (nsibling < 2)
> +			continue;
> +
> +		err = slicein_virtual_engine(gt, siblings, nsibling);
> +		if (err)
> +			return err;
> +
> +		err = sliceout_virtual_engine(gt, siblings, nsibling);
> +		if (err)
> +			return err;
> +	}
> +
> +	return 0;
> +}
> +
>   static int preserved_virtual_engine(struct intel_gt *gt,
>   				    struct intel_engine_cs **siblings,
>   				    unsigned int nsibling)
> @@ -4297,6 +4490,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
>   		SUBTEST(live_virtual_engine),
>   		SUBTEST(live_virtual_mask),
>   		SUBTEST(live_virtual_preserved),
> +		SUBTEST(live_virtual_slice),
>   		SUBTEST(live_virtual_bond),
>   		SUBTEST(live_virtual_reset),
>   	};
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing Chris Wilson
@ 2020-05-19  9:40   ` Tvrtko Ursulin
  0 siblings, 0 replies; 22+ messages in thread
From: Tvrtko Ursulin @ 2020-05-19  9:40 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/05/2020 07:31, Chris Wilson wrote:
> It was quite the oversight to only factor in the normal queue to decide
> the timeslicing switch priority. By leaving out the next virtual request
> from the priority decision, we would not timeslice the current engine if
> there was an available virtual request.
> 
> Testcase: igt/gem_exec_balancer/sliced
> Fixes: 3df2deed411e ("drm/i915/execlists: Enable timeslice on partial virtual engine dequeue")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_lrc.c | 30 +++++++++++++++++++++++------
>   1 file changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index 35e7ae3c049c..42cb0cae2845 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -1898,7 +1898,8 @@ static void defer_active(struct intel_engine_cs *engine)
>   
>   static bool
>   need_timeslice(const struct intel_engine_cs *engine,
> -	       const struct i915_request *rq)
> +	       const struct i915_request *rq,
> +	       const struct rb_node *rb)
>   {
>   	int hint;
>   
> @@ -1906,6 +1907,24 @@ need_timeslice(const struct intel_engine_cs *engine,
>   		return false;
>   
>   	hint = engine->execlists.queue_priority_hint;
> +
> +	if (rb) {
> +		const struct virtual_engine *ve =
> +			rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
> +		const struct intel_engine_cs *inflight =
> +			intel_context_inflight(&ve->context);
> +
> +		if (!inflight || inflight == engine) {
> +			struct i915_request *next;
> +
> +			rcu_read_lock();
> +			next = READ_ONCE(ve->request);
> +			if (next)
> +				hint = max(hint, rq_prio(next));
> +			rcu_read_unlock();
> +		}
> +	}
> +
>   	if (!list_is_last(&rq->sched.link, &engine->active.requests))
>   		hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
>   
> @@ -1980,10 +1999,9 @@ static void set_timeslice(struct intel_engine_cs *engine)
>   	set_timer_ms(&engine->execlists.timer, duration);
>   }
>   
> -static void start_timeslice(struct intel_engine_cs *engine)
> +static void start_timeslice(struct intel_engine_cs *engine, int prio)
>   {
>   	struct intel_engine_execlists *execlists = &engine->execlists;
> -	const int prio = queue_prio(execlists);
>   	unsigned long duration;
>   
>   	if (!intel_engine_has_timeslices(engine))
> @@ -2143,7 +2161,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   			__unwind_incomplete_requests(engine);
>   
>   			last = NULL;
> -		} else if (need_timeslice(engine, last) &&
> +		} else if (need_timeslice(engine, last, rb) &&
>   			   timeslice_expired(execlists, last)) {
>   			if (i915_request_completed(last)) {
>   				tasklet_hi_schedule(&execlists->tasklet);
> @@ -2191,7 +2209,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   				 * Even if ELSP[1] is occupied and not worthy
>   				 * of timeslices, our queue might be.
>   				 */
> -				start_timeslice(engine);
> +				start_timeslice(engine, queue_prio(execlists));
>   				return;
>   			}
>   		}
> @@ -2226,7 +2244,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   
>   			if (last && !can_merge_rq(last, rq)) {
>   				spin_unlock(&ve->base.active.lock);
> -				start_timeslice(engine);
> +				start_timeslice(engine, rq_prio(rq));
>   				return; /* leave this for another sibling */
>   			}
>   
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule
  2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
                   ` (14 preceding siblings ...)
  2020-05-19  9:21 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
@ 2020-05-19 13:38 ` Mika Kuoppala
  15 siblings, 0 replies; 22+ messages in thread
From: Mika Kuoppala @ 2020-05-19 13:38 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx; +Cc: Chris Wilson

Chris Wilson <chris@chris-wilson.co.uk> writes:

s/supressing/suppressing

> We recorded the execlists->queue_priority_hint update for the inflight
> request without kicking the tasklet. The next submitted request then
> failed to be scheduled as it had a lower priority than the hint, leaving
> the HW runnning with only the inflight request.

s/nnn/nn

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
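
To make the failure mode concrete, here is a tiny standalone model of the
old ordering (illustrative only; the names are stand-ins, not the driver's):

    #include <stdbool.h>
    #include <stdio.h>

    static int hint = -1000;    /* stands in for queue_priority_hint */
    static bool kicked;         /* has the submission tasklet been kicked? */

    /* Old ordering: record the hint even when suppressing the kick. */
    static void reschedule_inflight(int prio, bool same_context)
    {
        hint = prio;            /* updated before the early exit... */
        if (same_context)
            return;             /* ...so the kick is suppressed anyway */
        kicked = true;
    }

    /* New submissions only bother kicking if they beat the stale hint. */
    static void submit(int prio)
    {
        if (prio > hint) {
            hint = prio;
            kicked = true;
        }
    }

    int main(void)
    {
        reschedule_inflight(4, true);  /* priority bump of the inflight rq */
        submit(2);                     /* 2 < 4: never scheduled */
        printf("hint=%d kicked=%d\n", hint, kicked); /* hint=4 kicked=0 */
        return 0;
    }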

>
> Fixes: 6cebcf746f3f ("drm/i915: Tweak scheduler's kick_submission()")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_scheduler.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
> index f4ea318781f0..cbb880b10c65 100644
> --- a/drivers/gpu/drm/i915/i915_scheduler.c
> +++ b/drivers/gpu/drm/i915/i915_scheduler.c
> @@ -209,14 +209,6 @@ static void kick_submission(struct intel_engine_cs *engine,
>  	if (!inflight)
>  		goto unlock;
>  
> -	ENGINE_TRACE(engine,
> -		     "bumping queue-priority-hint:%d for rq:%llx:%lld, inflight:%llx:%lld prio %d\n",
> -		     prio,
> -		     rq->fence.context, rq->fence.seqno,
> -		     inflight->fence.context, inflight->fence.seqno,
> -		     inflight->sched.attr.priority);
> -	engine->execlists.queue_priority_hint = prio;
> -
>  	/*
>  	 * If we are already the currently executing context, don't
>  	 * bother evaluating if we should preempt ourselves.
> @@ -224,6 +216,14 @@ static void kick_submission(struct intel_engine_cs *engine,
>  	if (inflight->context == rq->context)
>  		goto unlock;
>  
> +	ENGINE_TRACE(engine,
> +		     "bumping queue-priority-hint:%d for rq:%llx:%lld, inflight:%llx:%lld prio %d\n",
> +		     prio,
> +		     rq->fence.context, rq->fence.seqno,
> +		     inflight->fence.context, inflight->fence.seqno,
> +		     inflight->sched.attr.priority);
> +
> +	engine->execlists.queue_priority_hint = prio;
>  	if (need_preempt(prio, rq_prio(inflight)))
>  		tasklet_hi_schedule(&engine->execlists.tasklet);
>  
> -- 
> 2.20.1
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat Chris Wilson
@ 2020-05-19 13:41   ` Mika Kuoppala
  0 siblings, 0 replies; 22+ messages in thread
From: Mika Kuoppala @ 2020-05-19 13:41 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx; +Cc: Chris Wilson

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Since we temporarily disable the heartbeat and then restore it to the
> default value, we can use the defaults stored on the engine and avoid
> passing a local copy around.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
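
The shape of the change in miniature, with stand-in types (the real fields
live on intel_engine_cs; this is only a sketch):

    #include <stdio.h>

    struct props { unsigned long heartbeat_interval_ms; };

    struct engine {
        struct props props;     /* live, tweakable values */
        struct props defaults;  /* recorded once at engine init */
    };

    /* Before: every caller had to carry the saved interval around. */
    static void heartbeat_disable_old(struct engine *e, unsigned long *saved)
    {
        *saved = e->props.heartbeat_interval_ms;
        e->props.heartbeat_interval_ms = 0;
    }

    /* After: restore straight from the stored defaults, no local needed. */
    static void heartbeat_enable_new(struct engine *e)
    {
        e->props.heartbeat_interval_ms = e->defaults.heartbeat_interval_ms;
    }

    int main(void)
    {
        struct engine e = {
            .props    = { .heartbeat_interval_ms = 2500 },
            .defaults = { .heartbeat_interval_ms = 2500 },
        };
        unsigned long saved;

        heartbeat_disable_old(&e, &saved);
        heartbeat_enable_new(&e);  /* 'saved' is now dead weight */
        printf("restored=%lu\n", e.props.heartbeat_interval_ms);
        return 0;
    }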

>  drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 25 +++----
>  drivers/gpu/drm/i915/gt/selftest_lrc.c       | 67 +++++++------------
>  drivers/gpu/drm/i915/gt/selftest_rps.c       | 69 ++++++++------------
>  drivers/gpu/drm/i915/gt/selftest_timeline.c  | 15 ++---
>  4 files changed, 67 insertions(+), 109 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> index 2b2efff6e19d..4aa4cc917d8b 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> @@ -310,22 +310,20 @@ static bool wait_until_running(struct hang *h, struct i915_request *rq)
>  			  1000));
>  }
>  
> -static void engine_heartbeat_disable(struct intel_engine_cs *engine,
> -				     unsigned long *saved)
> +static void engine_heartbeat_disable(struct intel_engine_cs *engine)
>  {
> -	*saved = engine->props.heartbeat_interval_ms;
>  	engine->props.heartbeat_interval_ms = 0;
>  
>  	intel_engine_pm_get(engine);
>  	intel_engine_park_heartbeat(engine);
>  }
>  
> -static void engine_heartbeat_enable(struct intel_engine_cs *engine,
> -				    unsigned long saved)
> +static void engine_heartbeat_enable(struct intel_engine_cs *engine)
>  {
>  	intel_engine_pm_put(engine);
>  
> -	engine->props.heartbeat_interval_ms = saved;
> +	engine->props.heartbeat_interval_ms =
> +		engine->defaults.heartbeat_interval_ms;
>  }
>  
>  static int igt_hang_sanitycheck(void *arg)
> @@ -473,7 +471,6 @@ static int igt_reset_nop_engine(void *arg)
>  	for_each_engine(engine, gt, id) {
>  		unsigned int reset_count, reset_engine_count, count;
>  		struct intel_context *ce;
> -		unsigned long heartbeat;
>  		IGT_TIMEOUT(end_time);
>  		int err;
>  
> @@ -485,7 +482,7 @@ static int igt_reset_nop_engine(void *arg)
>  		reset_engine_count = i915_reset_engine_count(global, engine);
>  		count = 0;
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
>  		do {
>  			int i;
> @@ -529,7 +526,7 @@ static int igt_reset_nop_engine(void *arg)
>  			}
>  		} while (time_before(jiffies, end_time));
>  		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  
>  		pr_info("%s(%s): %d resets\n", __func__, engine->name, count);
>  
> @@ -564,7 +561,6 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
>  
>  	for_each_engine(engine, gt, id) {
>  		unsigned int reset_count, reset_engine_count;
> -		unsigned long heartbeat;
>  		IGT_TIMEOUT(end_time);
>  
>  		if (active && !intel_engine_can_store_dword(engine))
> @@ -580,7 +576,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
>  		reset_count = i915_reset_count(global);
>  		reset_engine_count = i915_reset_engine_count(global, engine);
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
>  		do {
>  			if (active) {
> @@ -632,7 +628,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
>  			}
>  		} while (time_before(jiffies, end_time));
>  		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  
>  		if (err)
>  			break;
> @@ -789,7 +785,6 @@ static int __igt_reset_engines(struct intel_gt *gt,
>  		struct active_engine threads[I915_NUM_ENGINES] = {};
>  		unsigned long device = i915_reset_count(global);
>  		unsigned long count = 0, reported;
> -		unsigned long heartbeat;
>  		IGT_TIMEOUT(end_time);
>  
>  		if (flags & TEST_ACTIVE &&
> @@ -832,7 +827,7 @@ static int __igt_reset_engines(struct intel_gt *gt,
>  
>  		yield(); /* start all threads before we begin */
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
>  		do {
>  			struct i915_request *rq = NULL;
> @@ -906,7 +901,7 @@ static int __igt_reset_engines(struct intel_gt *gt,
>  			}
>  		} while (time_before(jiffies, end_time));
>  		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  
>  		pr_info("i915_reset_engine(%s:%s): %lu resets\n",
>  			engine->name, test_name, count);
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index 3e042fa4b94b..b71f04db9c6e 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -51,22 +51,20 @@ static struct i915_vma *create_scratch(struct intel_gt *gt)
>  	return vma;
>  }
>  
> -static void engine_heartbeat_disable(struct intel_engine_cs *engine,
> -				     unsigned long *saved)
> +static void engine_heartbeat_disable(struct intel_engine_cs *engine)
>  {
> -	*saved = engine->props.heartbeat_interval_ms;
>  	engine->props.heartbeat_interval_ms = 0;
>  
>  	intel_engine_pm_get(engine);
>  	intel_engine_park_heartbeat(engine);
>  }
>  
> -static void engine_heartbeat_enable(struct intel_engine_cs *engine,
> -				    unsigned long saved)
> +static void engine_heartbeat_enable(struct intel_engine_cs *engine)
>  {
>  	intel_engine_pm_put(engine);
>  
> -	engine->props.heartbeat_interval_ms = saved;
> +	engine->props.heartbeat_interval_ms =
> +		engine->defaults.heartbeat_interval_ms;
>  }
>  
>  static bool is_active(struct i915_request *rq)
> @@ -224,7 +222,6 @@ static int live_unlite_restore(struct intel_gt *gt, int prio)
>  		struct intel_context *ce[2] = {};
>  		struct i915_request *rq[2];
>  		struct igt_live_test t;
> -		unsigned long saved;
>  		int n;
>  
>  		if (prio && !intel_engine_has_preemption(engine))
> @@ -237,7 +234,7 @@ static int live_unlite_restore(struct intel_gt *gt, int prio)
>  			err = -EIO;
>  			break;
>  		}
> -		engine_heartbeat_disable(engine, &saved);
> +		engine_heartbeat_disable(engine);
>  
>  		for (n = 0; n < ARRAY_SIZE(ce); n++) {
>  			struct intel_context *tmp;
> @@ -345,7 +342,7 @@ static int live_unlite_restore(struct intel_gt *gt, int prio)
>  			intel_context_put(ce[n]);
>  		}
>  
> -		engine_heartbeat_enable(engine, saved);
> +		engine_heartbeat_enable(engine);
>  		if (igt_live_test_end(&t))
>  			err = -EIO;
>  		if (err)
> @@ -466,7 +463,6 @@ static int live_hold_reset(void *arg)
>  
>  	for_each_engine(engine, gt, id) {
>  		struct intel_context *ce;
> -		unsigned long heartbeat;
>  		struct i915_request *rq;
>  
>  		ce = intel_context_create(engine);
> @@ -475,7 +471,7 @@ static int live_hold_reset(void *arg)
>  			break;
>  		}
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  
>  		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
>  		if (IS_ERR(rq)) {
> @@ -535,7 +531,7 @@ static int live_hold_reset(void *arg)
>  		i915_request_put(rq);
>  
>  out:
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  		intel_context_put(ce);
>  		if (err)
>  			break;
> @@ -580,10 +576,9 @@ static int live_error_interrupt(void *arg)
>  
>  	for_each_engine(engine, gt, id) {
>  		const struct error_phase *p;
> -		unsigned long heartbeat;
>  		int err = 0;
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  
>  		for (p = phases; p->error[0] != GOOD; p++) {
>  			struct i915_request *client[ARRAY_SIZE(phases->error)];
> @@ -682,7 +677,7 @@ static int live_error_interrupt(void *arg)
>  			}
>  		}
>  
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  		if (err) {
>  			intel_gt_set_wedged(gt);
>  			return err;
> @@ -895,16 +890,14 @@ static int live_timeslice_preempt(void *arg)
>  		enum intel_engine_id id;
>  
>  		for_each_engine(engine, gt, id) {
> -			unsigned long saved;
> -
>  			if (!intel_engine_has_preemption(engine))
>  				continue;
>  
>  			memset(vaddr, 0, PAGE_SIZE);
>  
> -			engine_heartbeat_disable(engine, &saved);
> +			engine_heartbeat_disable(engine);
>  			err = slice_semaphore_queue(engine, vma, count);
> -			engine_heartbeat_enable(engine, saved);
> +			engine_heartbeat_enable(engine);
>  			if (err)
>  				goto err_pin;
>  
> @@ -1009,7 +1002,6 @@ static int live_timeslice_rewind(void *arg)
>  		enum { X = 1, Z, Y };
>  		struct i915_request *rq[3] = {};
>  		struct intel_context *ce;
> -		unsigned long heartbeat;
>  		unsigned long timeslice;
>  		int i, err = 0;
>  		u32 *slot;
> @@ -1028,7 +1020,7 @@ static int live_timeslice_rewind(void *arg)
>  		 * Expect execution/evaluation order XZY
>  		 */
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  		timeslice = xchg(&engine->props.timeslice_duration_ms, 1);
>  
>  		slot = memset32(engine->status_page.addr + 1000, 0, 4);
> @@ -1122,7 +1114,7 @@ static int live_timeslice_rewind(void *arg)
>  		wmb();
>  
>  		engine->props.timeslice_duration_ms = timeslice;
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  		for (i = 0; i < 3; i++)
>  			i915_request_put(rq[i]);
>  		if (igt_flush_test(gt->i915))
> @@ -1202,12 +1194,11 @@ static int live_timeslice_queue(void *arg)
>  			.priority = I915_USER_PRIORITY(I915_PRIORITY_MAX),
>  		};
>  		struct i915_request *rq, *nop;
> -		unsigned long saved;
>  
>  		if (!intel_engine_has_preemption(engine))
>  			continue;
>  
> -		engine_heartbeat_disable(engine, &saved);
> +		engine_heartbeat_disable(engine);
>  		memset(vaddr, 0, PAGE_SIZE);
>  
>  		/* ELSP[0]: semaphore wait */
> @@ -1284,7 +1275,7 @@ static int live_timeslice_queue(void *arg)
>  err_rq:
>  		i915_request_put(rq);
>  err_heartbeat:
> -		engine_heartbeat_enable(engine, saved);
> +		engine_heartbeat_enable(engine);
>  		if (err)
>  			break;
>  	}
> @@ -4145,7 +4136,6 @@ static int reset_virtual_engine(struct intel_gt *gt,
>  {
>  	struct intel_engine_cs *engine;
>  	struct intel_context *ve;
> -	unsigned long *heartbeat;
>  	struct igt_spinner spin;
>  	struct i915_request *rq;
>  	unsigned int n;
> @@ -4157,15 +4147,9 @@ static int reset_virtual_engine(struct intel_gt *gt,
>  	 * descendents are not executed while the capture is in progress.
>  	 */
>  
> -	heartbeat = kmalloc_array(nsibling, sizeof(*heartbeat), GFP_KERNEL);
> -	if (!heartbeat)
> +	if (igt_spinner_init(&spin, gt))
>  		return -ENOMEM;
>  
> -	if (igt_spinner_init(&spin, gt)) {
> -		err = -ENOMEM;
> -		goto out_free;
> -	}
> -
>  	ve = intel_execlists_create_virtual(siblings, nsibling);
>  	if (IS_ERR(ve)) {
>  		err = PTR_ERR(ve);
> @@ -4173,7 +4157,7 @@ static int reset_virtual_engine(struct intel_gt *gt,
>  	}
>  
>  	for (n = 0; n < nsibling; n++)
> -		engine_heartbeat_disable(siblings[n], &heartbeat[n]);
> +		engine_heartbeat_disable(siblings[n]);
>  
>  	rq = igt_spinner_create_request(&spin, ve, MI_ARB_CHECK);
>  	if (IS_ERR(rq)) {
> @@ -4244,13 +4228,11 @@ static int reset_virtual_engine(struct intel_gt *gt,
>  	i915_request_put(rq);
>  out_heartbeat:
>  	for (n = 0; n < nsibling; n++)
> -		engine_heartbeat_enable(siblings[n], heartbeat[n]);
> +		engine_heartbeat_enable(siblings[n]);
>  
>  	intel_context_put(ve);
>  out_spin:
>  	igt_spinner_fini(&spin);
> -out_free:
> -	kfree(heartbeat);
>  	return err;
>  }
>  
> @@ -4918,9 +4900,7 @@ static int live_lrc_gpr(void *arg)
>  		return PTR_ERR(scratch);
>  
>  	for_each_engine(engine, gt, id) {
> -		unsigned long heartbeat;
> -
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  
>  		err = __live_lrc_gpr(engine, scratch, false);
>  		if (err)
> @@ -4931,7 +4911,7 @@ static int live_lrc_gpr(void *arg)
>  			goto err;
>  
>  err:
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  		if (igt_flush_test(gt->i915))
>  			err = -EIO;
>  		if (err)
> @@ -5078,10 +5058,9 @@ static int live_lrc_timestamp(void *arg)
>  	 */
>  
>  	for_each_engine(data.engine, gt, id) {
> -		unsigned long heartbeat;
>  		int i, err = 0;
>  
> -		engine_heartbeat_disable(data.engine, &heartbeat);
> +		engine_heartbeat_disable(data.engine);
>  
>  		for (i = 0; i < ARRAY_SIZE(data.ce); i++) {
>  			struct intel_context *tmp;
> @@ -5114,7 +5093,7 @@ static int live_lrc_timestamp(void *arg)
>  		}
>  
>  err:
> -		engine_heartbeat_enable(data.engine, heartbeat);
> +		engine_heartbeat_enable(data.engine);
>  		for (i = 0; i < ARRAY_SIZE(data.ce); i++) {
>  			if (!data.ce[i])
>  				break;
> diff --git a/drivers/gpu/drm/i915/gt/selftest_rps.c b/drivers/gpu/drm/i915/gt/selftest_rps.c
> index 6275d69aa9cc..5049c3dd08a6 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_rps.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_rps.c
> @@ -20,24 +20,20 @@
>  /* Try to isolate the impact of cstates from determing frequency response */
>  #define CPU_LATENCY 0 /* -1 to disable pm_qos, 0 to disable cstates */
>  
> -static unsigned long engine_heartbeat_disable(struct intel_engine_cs *engine)
> +static void engine_heartbeat_disable(struct intel_engine_cs *engine)
>  {
> -	unsigned long old;
> -
> -	old = fetch_and_zero(&engine->props.heartbeat_interval_ms);
> +	engine->props.heartbeat_interval_ms = 0;
>  
>  	intel_engine_pm_get(engine);
>  	intel_engine_park_heartbeat(engine);
> -
> -	return old;
>  }
>  
> -static void engine_heartbeat_enable(struct intel_engine_cs *engine,
> -				    unsigned long saved)
> +static void engine_heartbeat_enable(struct intel_engine_cs *engine)
>  {
>  	intel_engine_pm_put(engine);
>  
> -	engine->props.heartbeat_interval_ms = saved;
> +	engine->props.heartbeat_interval_ms =
> +		engine->defaults.heartbeat_interval_ms;
>  }
>  
>  static void dummy_rps_work(struct work_struct *wrk)
> @@ -246,7 +242,6 @@ int live_rps_clock_interval(void *arg)
>  	intel_gt_check_clock_frequency(gt);
>  
>  	for_each_engine(engine, gt, id) {
> -		unsigned long saved_heartbeat;
>  		struct i915_request *rq;
>  		u32 cycles;
>  		u64 dt;
> @@ -254,13 +249,13 @@ int live_rps_clock_interval(void *arg)
>  		if (!intel_engine_can_store_dword(engine))
>  			continue;
>  
> -		saved_heartbeat = engine_heartbeat_disable(engine);
> +		engine_heartbeat_disable(engine);
>  
>  		rq = igt_spinner_create_request(&spin,
>  						engine->kernel_context,
>  						MI_NOOP);
>  		if (IS_ERR(rq)) {
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			err = PTR_ERR(rq);
>  			break;
>  		}
> @@ -271,7 +266,7 @@ int live_rps_clock_interval(void *arg)
>  			pr_err("%s: RPS spinner did not start\n",
>  			       engine->name);
>  			igt_spinner_end(&spin);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			intel_gt_set_wedged(engine->gt);
>  			err = -EIO;
>  			break;
> @@ -327,7 +322,7 @@ int live_rps_clock_interval(void *arg)
>  		intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL);
>  
>  		igt_spinner_end(&spin);
> -		engine_heartbeat_enable(engine, saved_heartbeat);
> +		engine_heartbeat_enable(engine);
>  
>  		if (err == 0) {
>  			u64 time = intel_gt_pm_interval_to_ns(gt, cycles);
> @@ -405,7 +400,6 @@ int live_rps_control(void *arg)
>  
>  	intel_gt_pm_get(gt);
>  	for_each_engine(engine, gt, id) {
> -		unsigned long saved_heartbeat;
>  		struct i915_request *rq;
>  		ktime_t min_dt, max_dt;
>  		int f, limit;
> @@ -414,7 +408,7 @@ int live_rps_control(void *arg)
>  		if (!intel_engine_can_store_dword(engine))
>  			continue;
>  
> -		saved_heartbeat = engine_heartbeat_disable(engine);
> +		engine_heartbeat_disable(engine);
>  
>  		rq = igt_spinner_create_request(&spin,
>  						engine->kernel_context,
> @@ -430,7 +424,7 @@ int live_rps_control(void *arg)
>  			pr_err("%s: RPS spinner did not start\n",
>  			       engine->name);
>  			igt_spinner_end(&spin);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			intel_gt_set_wedged(engine->gt);
>  			err = -EIO;
>  			break;
> @@ -440,7 +434,7 @@ int live_rps_control(void *arg)
>  			pr_err("%s: could not set minimum frequency [%x], only %x!\n",
>  			       engine->name, rps->min_freq, read_cagf(rps));
>  			igt_spinner_end(&spin);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			show_pstate_limits(rps);
>  			err = -EINVAL;
>  			break;
> @@ -457,7 +451,7 @@ int live_rps_control(void *arg)
>  			pr_err("%s: could not restore minimum frequency [%x], only %x!\n",
>  			       engine->name, rps->min_freq, read_cagf(rps));
>  			igt_spinner_end(&spin);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			show_pstate_limits(rps);
>  			err = -EINVAL;
>  			break;
> @@ -472,7 +466,7 @@ int live_rps_control(void *arg)
>  		min_dt = ktime_sub(ktime_get(), min_dt);
>  
>  		igt_spinner_end(&spin);
> -		engine_heartbeat_enable(engine, saved_heartbeat);
> +		engine_heartbeat_enable(engine);
>  
>  		pr_info("%s: range:[%x:%uMHz, %x:%uMHz] limit:[%x:%uMHz], %x:%x response %lluns:%lluns\n",
>  			engine->name,
> @@ -635,7 +629,6 @@ int live_rps_frequency_cs(void *arg)
>  	rps->work.func = dummy_rps_work;
>  
>  	for_each_engine(engine, gt, id) {
> -		unsigned long saved_heartbeat;
>  		struct i915_request *rq;
>  		struct i915_vma *vma;
>  		u32 *cancel, *cntr;
> @@ -644,14 +637,14 @@ int live_rps_frequency_cs(void *arg)
>  			int freq;
>  		} min, max;
>  
> -		saved_heartbeat = engine_heartbeat_disable(engine);
> +		engine_heartbeat_disable(engine);
>  
>  		vma = create_spin_counter(engine,
>  					  engine->kernel_context->vm, false,
>  					  &cancel, &cntr);
>  		if (IS_ERR(vma)) {
>  			err = PTR_ERR(vma);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			break;
>  		}
>  
> @@ -732,7 +725,7 @@ int live_rps_frequency_cs(void *arg)
>  		i915_vma_unpin(vma);
>  		i915_vma_put(vma);
>  
> -		engine_heartbeat_enable(engine, saved_heartbeat);
> +		engine_heartbeat_enable(engine);
>  		if (igt_flush_test(gt->i915))
>  			err = -EIO;
>  		if (err)
> @@ -778,7 +771,6 @@ int live_rps_frequency_srm(void *arg)
>  	rps->work.func = dummy_rps_work;
>  
>  	for_each_engine(engine, gt, id) {
> -		unsigned long saved_heartbeat;
>  		struct i915_request *rq;
>  		struct i915_vma *vma;
>  		u32 *cancel, *cntr;
> @@ -787,14 +779,14 @@ int live_rps_frequency_srm(void *arg)
>  			int freq;
>  		} min, max;
>  
> -		saved_heartbeat = engine_heartbeat_disable(engine);
> +		engine_heartbeat_disable(engine);
>  
>  		vma = create_spin_counter(engine,
>  					  engine->kernel_context->vm, true,
>  					  &cancel, &cntr);
>  		if (IS_ERR(vma)) {
>  			err = PTR_ERR(vma);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			break;
>  		}
>  
> @@ -874,7 +866,7 @@ int live_rps_frequency_srm(void *arg)
>  		i915_vma_unpin(vma);
>  		i915_vma_put(vma);
>  
> -		engine_heartbeat_enable(engine, saved_heartbeat);
> +		engine_heartbeat_enable(engine);
>  		if (igt_flush_test(gt->i915))
>  			err = -EIO;
>  		if (err)
> @@ -1066,16 +1058,14 @@ int live_rps_interrupt(void *arg)
>  	for_each_engine(engine, gt, id) {
>  		/* Keep the engine busy with a spinner; expect an UP! */
>  		if (pm_events & GEN6_PM_RP_UP_THRESHOLD) {
> -			unsigned long saved_heartbeat;
> -
>  			intel_gt_pm_wait_for_idle(engine->gt);
>  			GEM_BUG_ON(intel_rps_is_active(rps));
>  
> -			saved_heartbeat = engine_heartbeat_disable(engine);
> +			engine_heartbeat_disable(engine);
>  
>  			err = __rps_up_interrupt(rps, engine, &spin);
>  
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			if (err)
>  				goto out;
>  
> @@ -1084,15 +1074,13 @@ int live_rps_interrupt(void *arg)
>  
>  		/* Keep the engine awake but idle and check for DOWN */
>  		if (pm_events & GEN6_PM_RP_DOWN_THRESHOLD) {
> -			unsigned long saved_heartbeat;
> -
> -			saved_heartbeat = engine_heartbeat_disable(engine);
> +			engine_heartbeat_disable(engine);
>  			intel_rc6_disable(&gt->rc6);
>  
>  			err = __rps_down_interrupt(rps, engine);
>  
>  			intel_rc6_enable(&gt->rc6);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			if (err)
>  				goto out;
>  		}
> @@ -1168,7 +1156,6 @@ int live_rps_power(void *arg)
>  	rps->work.func = dummy_rps_work;
>  
>  	for_each_engine(engine, gt, id) {
> -		unsigned long saved_heartbeat;
>  		struct i915_request *rq;
>  		struct {
>  			u64 power;
> @@ -1178,13 +1165,13 @@ int live_rps_power(void *arg)
>  		if (!intel_engine_can_store_dword(engine))
>  			continue;
>  
> -		saved_heartbeat = engine_heartbeat_disable(engine);
> +		engine_heartbeat_disable(engine);
>  
>  		rq = igt_spinner_create_request(&spin,
>  						engine->kernel_context,
>  						MI_NOOP);
>  		if (IS_ERR(rq)) {
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			err = PTR_ERR(rq);
>  			break;
>  		}
> @@ -1195,7 +1182,7 @@ int live_rps_power(void *arg)
>  			pr_err("%s: RPS spinner did not start\n",
>  			       engine->name);
>  			igt_spinner_end(&spin);
> -			engine_heartbeat_enable(engine, saved_heartbeat);
> +			engine_heartbeat_enable(engine);
>  			intel_gt_set_wedged(engine->gt);
>  			err = -EIO;
>  			break;
> @@ -1208,7 +1195,7 @@ int live_rps_power(void *arg)
>  		min.power = measure_power_at(rps, &min.freq);
>  
>  		igt_spinner_end(&spin);
> -		engine_heartbeat_enable(engine, saved_heartbeat);
> +		engine_heartbeat_enable(engine);
>  
>  		pr_info("%s: min:%llumW @ %uMHz, max:%llumW @ %uMHz\n",
>  			engine->name,
> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> index c2578a0f2f14..ef1c35073dc0 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> @@ -751,22 +751,20 @@ static int live_hwsp_wrap(void *arg)
>  	return err;
>  }
>  
> -static void engine_heartbeat_disable(struct intel_engine_cs *engine,
> -				     unsigned long *saved)
> +static void engine_heartbeat_disable(struct intel_engine_cs *engine)
>  {
> -	*saved = engine->props.heartbeat_interval_ms;
>  	engine->props.heartbeat_interval_ms = 0;
>  
>  	intel_engine_pm_get(engine);
>  	intel_engine_park_heartbeat(engine);
>  }
>  
> -static void engine_heartbeat_enable(struct intel_engine_cs *engine,
> -				    unsigned long saved)
> +static void engine_heartbeat_enable(struct intel_engine_cs *engine)
>  {
>  	intel_engine_pm_put(engine);
>  
> -	engine->props.heartbeat_interval_ms = saved;
> +	engine->props.heartbeat_interval_ms =
> +		engine->defaults.heartbeat_interval_ms;
>  }
>  
>  static int live_hwsp_rollover_kernel(void *arg)
> @@ -785,10 +783,9 @@ static int live_hwsp_rollover_kernel(void *arg)
>  		struct intel_context *ce = engine->kernel_context;
>  		struct intel_timeline *tl = ce->timeline;
>  		struct i915_request *rq[3] = {};
> -		unsigned long heartbeat;
>  		int i;
>  
> -		engine_heartbeat_disable(engine, &heartbeat);
> +		engine_heartbeat_disable(engine);
>  		if (intel_gt_wait_for_idle(gt, HZ / 2)) {
>  			err = -EIO;
>  			goto out;
> @@ -839,7 +836,7 @@ static int live_hwsp_rollover_kernel(void *arg)
>  out:
>  		for (i = 0; i < ARRAY_SIZE(rq); i++)
>  			i915_request_put(rq[i]);
> -		engine_heartbeat_enable(engine, heartbeat);
> +		engine_heartbeat_enable(engine);
>  		if (err)
>  			break;
>  	}
> -- 
> 2.20.1
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection Chris Wilson
@ 2020-05-19 13:58   ` Mika Kuoppala
  0 siblings, 0 replies; 22+ messages in thread
From: Mika Kuoppala @ 2020-05-19 13:58 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx; +Cc: Chris Wilson

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Check for integer overflow in the priority chain itself, rather than
> testing against a type-constricted max-priority bound.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
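
For the record, the new stop condition amounts to detecting wraparound
after the bump. A standalone illustration -- note the kernel builds with
-fno-strict-overflow, so the selftest's plain signed addition is well
defined there; this sketch takes an unsigned detour instead, with the
narrowing conversion being implementation-defined:

    #include <stdio.h>

    int main(void)
    {
        int prio = 0;
        const int step = 1 << 20;   /* arbitrary large bump per pass */

        for (;;) {
            int prev = prio;

            /* stands in for engine->schedule() raising the priority */
            prio = (int)((unsigned int)prev + (unsigned int)step);

            if (prio < prev) {      /* wrapped: overflow in the chain */
                printf("overflow detected after %d\n", prev);
                break;
            }
        }
        return 0;
    }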

> ---
>  drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index 94854a467e66..3e042fa4b94b 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -2735,12 +2735,12 @@ static int live_preempt_gang(void *arg)
>  			/* Submit each spinner at increasing priority */
>  			engine->schedule(rq, &attr);
>  
> +			if (prio < attr.priority)
> +				break;
> +
>  			if (prio <= I915_PRIORITY_MAX)
>  				continue;
>  
> -			if (prio > (INT_MAX >> I915_USER_PRIORITY_SHIFT))
> -				break;
> -
>  			if (__igt_timeout(end_time, NULL))
>  				break;
>  		} while (1);
> -- 
> 2.20.1
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()
  2020-05-19  6:31 ` [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit() Chris Wilson
@ 2020-05-19 14:20   ` Mika Kuoppala
  0 siblings, 0 replies; 22+ messages in thread
From: Mika Kuoppala @ 2020-05-19 14:20 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx; +Cc: Chris Wilson

Chris Wilson <chris@chris-wilson.co.uk> writes:

> When we look at i915_request_started() we must be careful in case we
> are using a request that does not have the initial breadcrumb, and
> instead the started check is being compared against the end of the
> previous request. This will make wait_for_submit() declare that a
> request has already been submitted too early.


submitted ... started ... handled by the GPU.

I guess wait_for_submit() is generic enough to cater for all cases,
but we do not actually wait for the submit itself; we wait for the
GPU to reach the request.
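
Roughly, the predicate being tightened (a sketch with stand-in flags, not
the driver's structs):

    #include <stdbool.h>
    #include <stdio.h>

    struct request {
        bool on_hold;
        bool has_initial_breadcrumb;
        bool started;   /* HWSP seqno passed the breadcrumb position */
    };

    /*
     * Without an initial breadcrumb, "started" is inferred from the
     * previous request's end, so it may read true before this request
     * has actually reached the GPU -- hence the extra guard.
     */
    static bool is_active(const struct request *rq)
    {
        if (rq->on_hold)
            return true;

        return rq->has_initial_breadcrumb && rq->started;
    }

    int main(void)
    {
        struct request rq = { .started = true };  /* no breadcrumb */

        printf("active=%d\n", is_active(&rq));    /* 0: not proven active */
        return 0;
    }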

>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

> ---
>  drivers/gpu/drm/i915/gt/selftest_lrc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index b71f04db9c6e..f6949cd55e92 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -75,7 +75,7 @@ static bool is_active(struct i915_request *rq)
>  	if (i915_request_on_hold(rq))
>  		return true;
>  
> -	if (i915_request_started(rq))
> +	if (i915_request_has_initial_breadcrumb(rq) && i915_request_started(rq))
>  		return true;
>  
>  	return false;
> -- 
> 2.20.1
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2020-05-19 14:23 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-19  6:31 [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Chris Wilson
2020-05-19  6:31 ` [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection Chris Wilson
2020-05-19 13:58   ` Mika Kuoppala
2020-05-19  6:31 ` [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat Chris Wilson
2020-05-19 13:41   ` Mika Kuoppala
2020-05-19  6:31 ` [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit() Chris Wilson
2020-05-19 14:20   ` Mika Kuoppala
2020-05-19  6:31 ` [Intel-gfx] [PATCH 05/12] drm/i915/execlists: Shortcircuit queue_prio() for no internal levels Chris Wilson
2020-05-19  6:31 ` [Intel-gfx] [PATCH 06/12] drm/i915: Move saturated workload detection back to the context Chris Wilson
2020-05-19  6:31 ` [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines Chris Wilson
2020-05-19  9:39   ` Tvrtko Ursulin
2020-05-19  6:31 ` [Intel-gfx] [PATCH 08/12] drm/i915/gt: Kick virtual siblings on timeslice out Chris Wilson
2020-05-19  6:31 ` [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing Chris Wilson
2020-05-19  9:40   ` Tvrtko Ursulin
2020-05-19  6:31 ` [Intel-gfx] [PATCH 10/12] drm/i915/gt: Use virtual_engine during execlists_dequeue Chris Wilson
2020-05-19  6:31 ` [Intel-gfx] [PATCH 11/12] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
2020-05-19  6:31 ` [Intel-gfx] [PATCH 12/12] drm/i915/gt: Resubmit the virtual engine on schedule-out Chris Wilson
2020-05-19  7:15 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule Patchwork
2020-05-19  7:16 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-05-19  7:40 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-05-19  9:21 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2020-05-19 13:38 ` [Intel-gfx] [PATCH 01/12] " Mika Kuoppala

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).