* [Intel-gfx] [PATCH i-g-t 01/10] i915/gem_userptr_blits: Tighten has_userptr()
@ 2020-10-14 10:40 ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

We use has_userptr() to determine whether the different flags are
supported, so it helps not to override the flags inside the helper
itself.
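
A rough sketch of the intended flow (assuming the fixture below): the
caller now picks a mode itself before probing, instead of has_userptr()
silently switching to unsynchronized behind its back:

	/* Either mode will do for parameter checking; try the
	 * synchronized path first and fall back if it is unsupported.
	 * has_userptr() now probes whatever flags are currently set.
	 */
	gem_userptr_test_synchronized();
	if (!has_userptr(fd))
		gem_userptr_test_unsynchronized();
	igt_require(has_userptr(fd));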

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_userptr_blits.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tests/i915/gem_userptr_blits.c b/tests/i915/gem_userptr_blits.c
index 268423dcd..01498edad 100644
--- a/tests/i915/gem_userptr_blits.c
+++ b/tests/i915/gem_userptr_blits.c
@@ -71,8 +71,7 @@
 #define PAGE_SIZE 4096
 #endif
 
-static uint32_t userptr_flags = I915_USERPTR_UNSYNCHRONIZED;
-
+static uint32_t userptr_flags;
 static bool *can_mmap;
 
 #define WIDTH 512
@@ -504,14 +503,11 @@ static int has_userptr(int fd)
 {
 	uint32_t handle = 0;
 	void *ptr;
-	uint32_t oldflags;
 	int ret;
 
 	igt_assert(posix_memalign(&ptr, PAGE_SIZE, PAGE_SIZE) == 0);
-	oldflags = userptr_flags;
-	gem_userptr_test_unsynchronized();
 	ret = __gem_userptr(fd, ptr, PAGE_SIZE, 0, userptr_flags, &handle);
-	userptr_flags = oldflags;
+	errno = 0;
 	if (ret != 0) {
 		free(ptr);
 		return 0;
@@ -2112,6 +2108,10 @@ igt_main_args("c:", NULL, help_str, opt_handler, NULL)
 
 	igt_subtest_group {
 		igt_fixture {
+			/* Either mode will do for parameter checking */
+			gem_userptr_test_synchronized();
+			if (!has_userptr(fd))
+				gem_userptr_test_unsynchronized();
 			igt_require(has_userptr(fd));
 		}
 
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 02/10] i915/gem_exec_balancer: Check balancer submission latency
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

While CI is unreliable at detecting small performance deltas, it should
still be able to detect when we are orders of magnitude off
expectations. In this case, latency/throughput when submitting to a
load balancer should be on par with a native engine.
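
A self-contained sketch of the threshold logic (the helper name and the
numbers are illustrative only, not taken from the test itself):

	#include <stdio.h>

	/* Warn when a measured per-nop latency exceeds the slowest
	 * native engine by more than the given factor. */
	static void check_latency(double t_us, double max_us, double factor)
	{
		if (t_us > factor * max_us)
			printf("Balancer submission %.1fx worse than normal!\n",
			       t_us / max_us);
	}

	int main(void)
	{
		check_latency(42.0, 3.5, 10);	/* ~12x slower: warns */
		check_latency(4.0, 3.5, 10);	/* on par: stays silent */
		return 0;
	}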

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_exec_balancer.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_exec_balancer.c b/tests/i915/gem_exec_balancer.c
index 35a032ccb..0c334b91b 100644
--- a/tests/i915/gem_exec_balancer.c
+++ b/tests/i915/gem_exec_balancer.c
@@ -2380,6 +2380,7 @@ static void nop(int i915)
 	for (int class = 0; class < 32; class++) {
 		struct i915_engine_class_instance *ci;
 		unsigned int count;
+		double max = 0;
 		uint32_t ctx;
 
 		ci = list_engines(i915, 1u << class, &count);
@@ -2410,6 +2411,8 @@ static void nop(int i915)
 
 			t = igt_nsec_elapsed(&tv) * 1e-3 / nops;
 			igt_info("%s:%d %.3fus\n", class_to_str(class), n, t);
+			if (t > max)
+				max = t;
 		}
 
 		{
@@ -2433,9 +2436,10 @@ static void nop(int i915)
 
 			t = igt_nsec_elapsed(&tv) * 1e-3 / nops;
 			igt_info("%s:* %.3fus\n", class_to_str(class), t);
+			if (t > 10 * max)
+				igt_warn("Balancer submission %.1fx worse than normal!\n", t / max);
 		}
 
-
 		igt_fork(child, count) {
 			struct drm_i915_gem_execbuffer2 execbuf = {
 				.buffers_ptr = to_user_pointer(&batch),
@@ -2476,6 +2480,8 @@ static void nop(int i915)
 			t = igt_nsec_elapsed(&tv) * 1e-3 / nops;
 			igt_info("[%d] %s:* %.3fus\n",
 				 child, class_to_str(class), t);
+			if (t > 20 * max)
+				igt_warn("[%d] Balancer submission %.1fx worse than normal!\n", child, t / max);
 
 			gem_context_destroy(i915, execbuf.rsvd1);
 		}
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 03/10] i915/gen9_exec_parse: Check oversized batch with length==0
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Matthew Auld, Chris Wilson

Include the implicit eb.batch_len=0 in the mix of various offsets and
lengths.
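
A short sketch of the new case (field usage assumed from the hunks
below): leaving batch_len at 0 asks the kernel to derive the batch
length itself, which must still succeed for the oversized object:

	struct drm_i915_gem_execbuffer2 execbuf = {
		.buffers_ptr = to_user_pointer(&obj),
		.buffer_count = 1,
		.batch_len = 0,	/* implicit length */
	};
	igt_assert_eq(__checked_execbuf(i915, &execbuf), 0);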

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
---
 tests/i915/gen9_exec_parse.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tests/i915/gen9_exec_parse.c b/tests/i915/gen9_exec_parse.c
index 7ddb5bf2b..087d6f35f 100644
--- a/tests/i915/gen9_exec_parse.c
+++ b/tests/i915/gen9_exec_parse.c
@@ -628,6 +628,8 @@ static void test_bb_oversize(int i915)
 	gem_write(i915, obj.handle, (4ull << 30) - sizeof(bbe),
 		  &bbe, sizeof(bbe));
 
+	igt_assert_eq(__checked_execbuf(i915, &execbuf), 0);
+
 	for (int i = 13; i <= 32; i++) {
 		igt_debug("Checking length %#llx\n", 1ull << i);
 
@@ -638,6 +640,9 @@ static void test_bb_oversize(int i915)
 		igt_assert_eq(__checked_execbuf(i915, &execbuf), 0);
 	}
 
+	execbuf.batch_len = 0;
+	igt_assert_eq(__checked_execbuf(i915, &execbuf), 0);
+
 	gem_close(i915, obj.handle);
 }
 
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 04/10] i915/gem_exec_schedule: Include userptr scheduling tests
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40 ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

In practice, it turns out that compute likes to use userptr for
everything, and so, in turn, must we.
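
The mechanical pattern is the same throughout (a sketch matching the
hunks below): each helper gains a flags argument that is folded into
the spinner, so every subtest can run against either regular BO or
userptr backing:

	spin = igt_spin_new(fd,
			    .engine = engine,
			    .flags = IGT_SPIN_POLL_RUN |
				     (flags & USERPTR ? IGT_SPIN_USERPTR : 0));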

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_exec_schedule.c | 79 ++++++++++++++++++++++++----------
 1 file changed, 57 insertions(+), 22 deletions(-)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index e316cf4d7..53462c425 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -308,7 +308,7 @@ static void implicit_rw(int i915, unsigned ring, enum implicit_dir dir)
 		igt_assert_eq_u32(result, ring);
 }
 
-static void independent(int fd, unsigned int engine)
+static void independent(int fd, unsigned int engine, unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	IGT_CORK_FENCE(cork);
@@ -332,7 +332,9 @@ static void independent(int fd, unsigned int engine)
 			continue;
 
 		if (spin == NULL) {
-			spin = __igt_spin_new(fd, .engine = e->flags);
+			spin = __igt_spin_new(fd,
+					      .engine = e->flags,
+					      .flags = flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
 				.buffer_count = 1,
@@ -628,7 +630,7 @@ static void timesliceN(int i915, unsigned int engine, int count)
 	munmap(result, sz);
 }
 
-static void lateslice(int i915, unsigned int engine)
+static void lateslice(int i915, unsigned int engine, unsigned long flags)
 {
 	igt_spin_t *spin[3];
 	uint32_t ctx;
@@ -640,7 +642,8 @@ static void lateslice(int i915, unsigned int engine)
 	ctx = gem_context_create(i915);
 	spin[0] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
 			       .flags = (IGT_SPIN_POLL_RUN |
-					 IGT_SPIN_FENCE_OUT));
+					 IGT_SPIN_FENCE_OUT |
+					 flags));
 	gem_context_destroy(i915, ctx);
 
 	igt_spin_busywait_until_started(spin[0]);
@@ -649,7 +652,8 @@ static void lateslice(int i915, unsigned int engine)
 	spin[1] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
 			       .fence = spin[0]->out_fence,
 			       .flags = (IGT_SPIN_POLL_RUN |
-					 IGT_SPIN_FENCE_IN));
+					 IGT_SPIN_FENCE_IN |
+					 flags));
 	gem_context_destroy(i915, ctx);
 
 	usleep(5000); /* give some time for the new spinner to be scheduled */
@@ -663,7 +667,7 @@ static void lateslice(int i915, unsigned int engine)
 
 	ctx = gem_context_create(i915);
 	spin[2] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
-			       .flags = IGT_SPIN_POLL_RUN);
+			       .flags = IGT_SPIN_POLL_RUN | flags);
 	gem_context_destroy(i915, ctx);
 
 	igt_spin_busywait_until_started(spin[2]);
@@ -722,6 +726,7 @@ static void submit_slice(int i915,
 			 unsigned int flags)
 #define EARLY_SUBMIT 0x1
 #define LATE_SUBMIT 0x2
+#define USERPTR 0x4
 {
 	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines , 1) = {};
 	const struct intel_execution_engine2 *cancel;
@@ -766,6 +771,7 @@ static void submit_slice(int i915,
 				    .flags =
 				    IGT_SPIN_POLL_RUN |
 				    (flags & LATE_SUBMIT ? IGT_SPIN_FENCE_IN : 0) |
+				    (flags & USERPTR ? IGT_SPIN_USERPTR : 0) |
 				    IGT_SPIN_FENCE_OUT);
 		if (fence != -1)
 			close(fence);
@@ -805,7 +811,7 @@ static uint32_t batch_create(int i915)
 	return __batch_create(i915, 0);
 }
 
-static void semaphore_userlock(int i915)
+static void semaphore_userlock(int i915, unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_exec_object2 obj = {
@@ -828,7 +834,8 @@ static void semaphore_userlock(int i915)
 		if (!spin) {
 			spin = igt_spin_new(i915,
 					    .dependency = scratch,
-					    .engine = e->flags);
+					    .engine = e->flags,
+					    .flags = flags);
 		} else {
 			uint64_t saved = spin->execbuf.flags;
 
@@ -869,7 +876,7 @@ static void semaphore_userlock(int i915)
 	igt_spin_free(i915, spin);
 }
 
-static void semaphore_codependency(int i915)
+static void semaphore_codependency(int i915, unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	struct {
@@ -903,7 +910,7 @@ static void semaphore_codependency(int i915)
 			__igt_spin_new(i915,
 				       .ctx = ctx,
 				       .engine = e->flags,
-				       .flags = IGT_SPIN_POLL_RUN);
+				       .flags = IGT_SPIN_POLL_RUN | flags);
 		igt_spin_busywait_until_started(task[i].xcs);
 
 		/* Common rcs tasks will be queued in FIFO */
@@ -925,13 +932,18 @@ static void semaphore_codependency(int i915)
 	igt_spin_end(task[1].rcs);
 	gem_sync(i915, task[1].rcs->handle); /* to hang if task[0] hogs rcs */
 
+	for (i = 0; i < ARRAY_SIZE(task); i++) {
+		igt_spin_end(task[i].xcs);
+		igt_spin_end(task[i].rcs);
+	}
+
 	for (i = 0; i < ARRAY_SIZE(task); i++) {
 		igt_spin_free(i915, task[i].xcs);
 		igt_spin_free(i915, task[i].rcs);
 	}
 }
 
-static void semaphore_resolve(int i915)
+static void semaphore_resolve(int i915, unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	const uint32_t SEMAPHORE_ADDR = 64 << 10;
@@ -966,7 +978,7 @@ static void semaphore_resolve(int i915)
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		spin = __igt_spin_new(i915, .engine = e->flags);
+		spin = __igt_spin_new(i915, .engine = e->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
 		igt_spin_reset(spin);
@@ -1060,7 +1072,7 @@ static void semaphore_resolve(int i915)
 	gem_context_destroy(i915, outer);
 }
 
-static void semaphore_noskip(int i915)
+static void semaphore_noskip(int i915, unsigned long flags)
 {
 	const int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *outer, *inner;
@@ -1081,9 +1093,9 @@ static void semaphore_noskip(int i915)
 		    !gem_class_can_store_dword(i915, inner->class))
 			continue;
 
-		chain = __igt_spin_new(i915, .engine = outer->flags);
+		chain = __igt_spin_new(i915, .engine = outer->flags, .flags = flags);
 
-		spin = __igt_spin_new(i915, .engine = inner->flags);
+		spin = __igt_spin_new(i915, .engine = inner->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
 		igt_spin_reset(spin);
@@ -1274,7 +1286,8 @@ static void preempt(int fd, const struct intel_execution_engine2 *e, unsigned fl
 		}
 		spin[n] = __igt_spin_new(fd,
 					 .ctx = ctx[LO],
-					 .engine = e->flags);
+					 .engine = e->flags,
+					 .flags = flags & USERPTR ? IGT_SPIN_USERPTR : 0);
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
 
 		store_dword(fd, ctx[HI], e->flags, result, 0, n + 1, I915_GEM_DOMAIN_RENDER);
@@ -2561,7 +2574,9 @@ igt_main
 			implicit_rw(fd, e->flags, READ_WRITE | WRITE_READ);
 
 		test_each_engine_store("independent", fd, e)
-			independent(fd, e->flags);
+			independent(fd, e->flags, 0);
+		test_each_engine_store("u-independent", fd, e)
+			independent(fd, e->flags, IGT_SPIN_USERPTR);
 	}
 
 	igt_subtest_group {
@@ -2582,23 +2597,40 @@ igt_main
 			timesliceN(fd, e->flags, 67);
 
 		test_each_engine("lateslice", fd, e)
-			lateslice(fd, e->flags);
+			lateslice(fd, e->flags, 0);
+		test_each_engine("u-lateslice", fd, e)
+			lateslice(fd, e->flags, IGT_SPIN_USERPTR);
 
 		test_each_engine("submit-early-slice", fd, e)
 			submit_slice(fd, e, EARLY_SUBMIT);
+		test_each_engine("u-submit-early-slice", fd, e)
+			submit_slice(fd, e, EARLY_SUBMIT | USERPTR);
 		test_each_engine("submit-golden-slice", fd, e)
 			submit_slice(fd, e, 0);
+		test_each_engine("u-submit-golden-slice", fd, e)
+			submit_slice(fd, e, USERPTR);
 		test_each_engine("submit-late-slice", fd, e)
 			submit_slice(fd, e, LATE_SUBMIT);
+		test_each_engine("u-submit-late-slice", fd, e)
+			submit_slice(fd, e, LATE_SUBMIT | USERPTR);
 
 		igt_subtest("semaphore-user")
-			semaphore_userlock(fd);
+			semaphore_userlock(fd, 0);
 		igt_subtest("semaphore-codependency")
-			semaphore_codependency(fd);
+			semaphore_codependency(fd, 0);
 		igt_subtest("semaphore-resolve")
-			semaphore_resolve(fd);
+			semaphore_resolve(fd, 0);
 		igt_subtest("semaphore-noskip")
-			semaphore_noskip(fd);
+			semaphore_noskip(fd, 0);
+
+		igt_subtest("u-semaphore-user")
+			semaphore_userlock(fd, IGT_SPIN_USERPTR);
+		igt_subtest("u-semaphore-codependency")
+			semaphore_codependency(fd, IGT_SPIN_USERPTR);
+		igt_subtest("u-semaphore-resolve")
+			semaphore_resolve(fd, IGT_SPIN_USERPTR);
+		igt_subtest("u-semaphore-noskip")
+			semaphore_noskip(fd, IGT_SPIN_USERPTR);
 
 		igt_subtest("smoketest-all")
 			smoketest(fd, ALL_ENGINES, 30);
@@ -2623,6 +2655,9 @@ igt_main
 			test_each_engine_store("preempt-contexts", fd, e)
 				preempt(fd, e, NEW_CTX);
 
+			test_each_engine_store("preempt-user", fd, e)
+				preempt(fd, e, USERPTR);
+
 			test_each_engine_store("preempt-self", fd, e)
 				preempt_self(fd, e->flags);
 
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 05/10] i915/gem_exec_balancer: Check interactions between bonds and userptr
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_exec_balancer.c | 46 +++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/tests/i915/gem_exec_balancer.c b/tests/i915/gem_exec_balancer.c
index 0c334b91b..adba776e7 100644
--- a/tests/i915/gem_exec_balancer.c
+++ b/tests/i915/gem_exec_balancer.c
@@ -34,6 +34,10 @@
 
 IGT_TEST_DESCRIPTION("Exercise in-kernel load-balancing");
 
+#define CORK		(1ul << 0)
+#define VIRTUAL_ENGINE	(1ul << 1)
+#define USERPTR		(1ul << 2)
+
 #define MI_SEMAPHORE_WAIT		(0x1c << 23)
 #define   MI_SEMAPHORE_POLL             (1 << 15)
 #define   MI_SEMAPHORE_SAD_GT_SDD       (0 << 12)
@@ -578,7 +582,6 @@ static void individual(int i915)
 }
 
 static void bonded(int i915, unsigned int flags)
-#define CORK 0x1
 {
 	I915_DEFINE_CONTEXT_ENGINES_BOND(bonds[16], 1);
 	struct i915_engine_class_instance *master_engines;
@@ -660,13 +663,15 @@ static void bonded(int i915, unsigned int flags)
 				plug = __igt_spin_new(i915,
 						      .ctx = master,
 						      .engine = bond,
-						      .dependency = igt_cork_plug(&cork, i915));
+						      .dependency = igt_cork_plug(&cork, i915),
+						      .flags = (flags & USERPTR ? IGT_SPIN_USERPTR : 0));
 			}
 
 			spin = __igt_spin_new(i915,
 					      .ctx = master,
 					      .engine = bond,
-					      .flags = IGT_SPIN_FENCE_OUT);
+					      .flags = IGT_SPIN_FENCE_OUT |
+					      (flags & USERPTR ? IGT_SPIN_USERPTR : 0));
 
 			eb = spin->execbuf;
 			eb.rsvd1 = ctx;
@@ -717,8 +722,6 @@ static void bonded(int i915, unsigned int flags)
 	gem_context_destroy(i915, master);
 }
 
-#define VIRTUAL_ENGINE (1u << 0)
-
 static unsigned int offset_in_page(void *addr)
 {
 	return (uintptr_t)addr & 4095;
@@ -1057,7 +1060,8 @@ static void bonded_chain(int i915)
 
 static void __bonded_sema(int i915, uint32_t ctx,
 			  const struct i915_engine_class_instance *siblings,
-			  unsigned int count)
+			  unsigned int count,
+			  unsigned long flags)
 {
 	const int priorities[] = { -1023, 0, 1023 };
 	struct drm_i915_gem_exec_object2 batch = {
@@ -1074,7 +1078,8 @@ static void __bonded_sema(int i915, uint32_t ctx,
 		/* A: spin forever on seperate render engine */
 		spin = igt_spin_new(i915,
 				    .flags = (IGT_SPIN_POLL_RUN |
-					      IGT_SPIN_FENCE_OUT));
+					      IGT_SPIN_FENCE_OUT |
+					      (flags & USERPTR ? IGT_SPIN_USERPTR : 0)));
 		igt_spin_busywait_until_started(spin);
 
 		/*
@@ -1128,7 +1133,7 @@ static void __bonded_sema(int i915, uint32_t ctx,
 	gem_close(i915, batch.handle);
 }
 
-static void bonded_semaphore(int i915)
+static void bonded_semaphore(int i915, unsigned long flags)
 {
 	uint32_t ctx;
 
@@ -1149,7 +1154,7 @@ static void bonded_semaphore(int i915)
 
 		siblings = list_engines(i915, 1u << class, &count);
 		if (count > 1)
-			__bonded_sema(i915, ctx, siblings, count);
+			__bonded_sema(i915, ctx, siblings, count, flags);
 		free(siblings);
 	}
 
@@ -1839,7 +1844,7 @@ static void __bonded_early(int i915, uint32_t ctx,
 	spin = igt_spin_new(i915,
 			    .ctx = ctx,
 			    .engine = (flags & VIRTUAL_ENGINE) ? 0 : 1,
-			    .flags = IGT_SPIN_NO_PREEMPTION);
+			    .flags = IGT_SPIN_NO_PREEMPTION | (flags & USERPTR ? IGT_SPIN_USERPTR : 0));
 
 	/* B: runs after A on engine 1 */
 	execbuf.flags = I915_EXEC_FENCE_OUT;
@@ -1882,7 +1887,7 @@ static void __bonded_early(int i915, uint32_t ctx,
 	igt_spin_free(i915, spin);
 }
 
-static void bonded_early(int i915)
+static void bonded_early(int i915, unsigned long flags)
 {
 	uint32_t ctx;
 
@@ -1909,8 +1914,8 @@ static void bonded_early(int i915)
 
 		siblings = list_engines(i915, 1u << class, &count);
 		if (count > 1) {
-			__bonded_early(i915, ctx, siblings, count, 0);
-			__bonded_early(i915, ctx, siblings, count, VIRTUAL_ENGINE);
+			__bonded_early(i915, ctx, siblings, count, flags);
+			__bonded_early(i915, ctx, siblings, count, flags | VIRTUAL_ENGINE);
 		}
 		free(siblings);
 	}
@@ -2882,7 +2887,16 @@ igt_main
 			bonded(i915, CORK);
 
 		igt_subtest("bonded-early")
-			bonded_early(i915);
+			bonded_early(i915, 0);
+
+		igt_subtest("u-bonded-imm")
+			bonded(i915, USERPTR);
+
+		igt_subtest("u-bonded-cork")
+			bonded(i915, CORK | USERPTR);
+
+		igt_subtest("u-bonded-early")
+			bonded_early(i915, USERPTR);
 	}
 
 	igt_subtest("bonded-slice")
@@ -2892,7 +2906,9 @@ igt_main
 		bonded_chain(i915);
 
 	igt_subtest("bonded-semaphore")
-		bonded_semaphore(i915);
+		bonded_semaphore(i915, 0);
+	igt_subtest("u-bonded-semaphore")
+		bonded_semaphore(i915, USERPTR);
 
 	igt_subtest("bonded-pair")
 		bonded_runner(i915, __bonded_pair);
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 06/10] i915/gem_exec_reloc: Continuing the trend of checking userptr
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_exec_reloc.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tests/i915/gem_exec_reloc.c b/tests/i915/gem_exec_reloc.c
index cb1a04b11..fc2bd0a56 100644
--- a/tests/i915/gem_exec_reloc.c
+++ b/tests/i915/gem_exec_reloc.c
@@ -429,7 +429,7 @@ static unsigned int offset_in_page(void *addr)
 	return (uintptr_t)addr & 4095;
 }
 
-static void active_spin(int fd, unsigned engine)
+static void active_spin(int fd, unsigned engine, unsigned long flags)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_relocation_entry reloc;
@@ -439,7 +439,7 @@ static void active_spin(int fd, unsigned engine)
 
 	spin = igt_spin_new(fd,
 			    .engine = engine,
-			    .flags = IGT_SPIN_NO_PREEMPTION);
+			    .flags = IGT_SPIN_NO_PREEMPTION | flags);
 
 	memset(obj, 0, sizeof(obj));
 	obj[0] = spin->obj[IGT_SPIN_BATCH];
@@ -1266,7 +1266,14 @@ igt_main
 	igt_subtest_with_dynamic("basic-spin") {
 		__for_each_physical_engine(fd, e) {
 			igt_dynamic_f("%s", e->name)
-				active_spin(fd, e->flags);
+				active_spin(fd, e->flags, 0);
+		}
+	}
+
+	igt_subtest_with_dynamic("basic-spin-user") {
+		__for_each_physical_engine(fd, e) {
+			igt_dynamic_f("%s", e->name)
+				active_spin(fd, e->flags, IGT_SPIN_USERPTR);
 		}
 	}
 
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 07/10] i915/gem_userptr_blits: Test execution isolation
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40 ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_userptr_blits.c | 40 ++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tests/i915/gem_userptr_blits.c b/tests/i915/gem_userptr_blits.c
index 01498edad..6f2e89269 100644
--- a/tests/i915/gem_userptr_blits.c
+++ b/tests/i915/gem_userptr_blits.c
@@ -586,6 +586,40 @@ static int test_access_control(int fd)
 	return 0;
 }
 
+static void test_exec_isolation(int fd)
+{
+	igt_fork(child, 1) {
+		igt_spin_t *spin = igt_spin_new(fd, .flags = IGT_SPIN_USERPTR);
+		gem_execbuf(fd, &spin->execbuf);
+		igt_spin_free(fd, spin);
+	}
+
+	igt_set_timeout(10, "blocked!");
+	igt_until_timeout(2)
+		igt_spin_free(fd, igt_spin_new(fd, .flags = IGT_SPIN_USERPTR));
+	igt_reset_timeout();
+
+	igt_waitchildren();
+}
+
+static void test_unmap_isolation(int fd)
+{
+	igt_spin_t *spin = igt_spin_new(fd, .flags = IGT_SPIN_USERPTR);
+
+	igt_fork(child, 1) {
+		gem_execbuf(fd, &spin->execbuf);
+		munmap(spin->batch, 4096);
+	}
+	igt_waitchildren();
+
+	igt_spin_end(spin);
+	mprotect(spin->batch, 4096, PROT_READ);
+	igt_assert_eq(__gem_execbuf(fd, &spin->execbuf), -EFAULT);
+	mprotect(spin->batch, 4096, PROT_WRITE);
+
+	igt_spin_free(fd, spin);
+}
+
 static int test_invalid_null_pointer(int fd)
 {
 	uint32_t handle;
@@ -2388,6 +2422,12 @@ igt_main_args("c:", NULL, help_str, opt_handler, NULL)
 	igt_subtest("access-control")
 		test_access_control(fd);
 
+	igt_subtest("exec-isolation")
+		test_exec_isolation(fd);
+
+	igt_subtest("unmap-isolation")
+		test_unmap_isolation(fd);
+
 	igt_fixture
 		free(can_mmap);
 }
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 08/10] lib: Use unsigned gen for forward compatible tests
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

Unknown, and therefore future, gen are marked as -1, which we want to
treat as -1u so that they always pass >= gen checks.
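
A standalone illustration of the comparison that goes wrong with a
signed gen (values chosen purely for demonstration):

	#include <stdio.h>

	int main(void)
	{
		int signed_gen = -1;		/* unknown/future device */
		unsigned int unsigned_gen = -1;	/* wraps to UINT_MAX */

		printf("signed:   gen >= 8? %d\n", signed_gen >= 8);	/* 0 */
		printf("unsigned: gen >= 8? %d\n", unsigned_gen >= 8);	/* 1 */
		return 0;
	}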

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2298
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 lib/intel_batchbuffer.c | 10 +++++-----
 lib/intel_batchbuffer.h | 10 ++++++----
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 60dbfe261..fc73495c0 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -414,7 +414,7 @@ intel_blt_copy(struct intel_batchbuffer *batch,
 	       drm_intel_bo *dst_bo, int dst_x1, int dst_y1, int dst_pitch,
 	       int width, int height, int bpp)
 {
-	const int gen = batch->gen;
+	const unsigned int gen = batch->gen;
 	uint32_t src_tiling, dst_tiling, swizzle;
 	uint32_t cmd_bits = 0;
 	uint32_t br13_bits;
@@ -553,7 +553,7 @@ unsigned igt_buf_height(const struct igt_buf *buf)
  * Returns:
  * The width of the ccs buffer data.
  */
-unsigned int igt_buf_intel_ccs_width(int gen, const struct igt_buf *buf)
+unsigned int igt_buf_intel_ccs_width(unsigned int gen, const struct igt_buf *buf)
 {
 	/*
 	 * GEN12+: The CCS unit size is 64 bytes mapping 4 main surface
@@ -576,7 +576,7 @@ unsigned int igt_buf_intel_ccs_width(int gen, const struct igt_buf *buf)
  * Returns:
  * The height of the ccs buffer data.
  */
-unsigned int igt_buf_intel_ccs_height(int gen, const struct igt_buf *buf)
+unsigned int igt_buf_intel_ccs_height(unsigned int gen, const struct igt_buf *buf)
 {
 	/*
 	 * GEN12+: The CCS unit size is 64 bytes mapping 4 main surface
@@ -703,7 +703,7 @@ fill_object(struct drm_i915_gem_exec_object2 *obj, uint32_t gem_handle,
 
 static void exec_blit(int fd,
 		      struct drm_i915_gem_exec_object2 *objs, uint32_t count,
-		      int gen)
+		      unsigned int gen)
 {
 	struct drm_i915_gem_execbuffer2 exec = {
 		.buffers_ptr = to_user_pointer(objs),
@@ -2416,7 +2416,7 @@ void intel_bb_emit_blt_copy(struct intel_bb *ibb,
 			    int dst_x1, int dst_y1, int dst_pitch,
 			    int width, int height, int bpp)
 {
-	const int gen = ibb->gen;
+	const unsigned int gen = ibb->gen;
 	uint32_t cmd_bits = 0;
 	uint32_t br13_bits;
 	uint32_t mask;
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index d20b4e66a..ab1b0c286 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -15,7 +15,7 @@
 struct intel_batchbuffer {
 	drm_intel_bufmgr *bufmgr;
 	uint32_t devid;
-	int gen;
+	unsigned int gen;
 
 	drm_intel_context *ctx;
 	drm_intel_bo *bo;
@@ -263,8 +263,10 @@ static inline bool igt_buf_compressed(const struct igt_buf *buf)
 
 unsigned igt_buf_width(const struct igt_buf *buf);
 unsigned igt_buf_height(const struct igt_buf *buf);
-unsigned int igt_buf_intel_ccs_width(int gen, const struct igt_buf *buf);
-unsigned int igt_buf_intel_ccs_height(int gen, const struct igt_buf *buf);
+unsigned int igt_buf_intel_ccs_width(unsigned int gen,
+				     const struct igt_buf *buf);
+unsigned int igt_buf_intel_ccs_height(unsigned int gen,
+				      const struct igt_buf *buf);
 
 void igt_blitter_src_copy(int fd,
 			  /* src */
@@ -434,7 +436,7 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
  */
 struct intel_bb {
 	int i915;
-	int gen;
+	unsigned int gen;
 	bool debug;
 	bool dump_base64;
 	bool enforce_relocs;
-- 
2.28.0

* [Intel-gfx] [PATCH i-g-t 09/10] tests/i915: Treat gen as unsigned for forward compatibility
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

We want to recognise future devices (gen = -1u) and treat them as an
extension of the latest known device, which is typically a safe
assumption.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_bad_reloc.c            |  2 +-
 tests/i915/gem_ctx_create.c           |  2 +-
 tests/i915/gem_ctx_engines.c          |  2 +-
 tests/i915/gem_ctx_isolation.c        |  2 +-
 tests/i915/gem_ctx_shared.c           |  4 ++--
 tests/i915/gem_ctx_thrash.c           |  2 +-
 tests/i915/gem_exec_async.c           |  2 +-
 tests/i915/gem_exec_await.c           |  2 +-
 tests/i915/gem_exec_capture.c         |  4 ++--
 tests/i915/gem_exec_fence.c           | 10 +++++-----
 tests/i915/gem_exec_flush.c           |  4 ++--
 tests/i915/gem_exec_gttfill.c         |  2 +-
 tests/i915/gem_exec_latency.c         |  4 ++--
 tests/i915/gem_exec_nop.c             |  4 ++--
 tests/i915/gem_exec_parallel.c        |  2 +-
 tests/i915/gem_exec_params.c          |  2 +-
 tests/i915/gem_exec_reloc.c           |  8 ++++----
 tests/i915/gem_exec_schedule.c        |  8 ++++----
 tests/i915/gem_exec_store.c           |  6 +++---
 tests/i915/gem_exec_suspend.c         |  2 +-
 tests/i915/gem_exec_whisper.c         |  2 +-
 tests/i915/gem_render_copy.c          |  4 ++--
 tests/i915/gem_ringfill.c             |  2 +-
 tests/i915/gem_softpin.c              |  4 ++--
 tests/i915/gem_sync.c                 |  8 ++++----
 tests/i915/gem_tiled_fence_blits.c    |  2 +-
 tests/i915/gem_userptr_blits.c        |  4 ++--
 tests/i915/gem_vm_create.c            |  2 +-
 tests/i915/i915_module_load.c         |  4 ++--
 tests/i915/i915_pm_rc6_residency.c    |  4 ++--
 tests/i915/sysfs_timeslice_duration.c |  2 +-
 31 files changed, 56 insertions(+), 56 deletions(-)

diff --git a/tests/i915/gem_bad_reloc.c b/tests/i915/gem_bad_reloc.c
index 7eb7fa538..6acc1724f 100644
--- a/tests/i915/gem_bad_reloc.c
+++ b/tests/i915/gem_bad_reloc.c
@@ -113,7 +113,7 @@ static void negative_reloc(int fd, unsigned flags)
 
 static void negative_reloc_blt(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[1024][2];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_ctx_create.c b/tests/i915/gem_ctx_create.c
index 39305f026..c7295f705 100644
--- a/tests/i915/gem_ctx_create.c
+++ b/tests/i915/gem_ctx_create.c
@@ -419,7 +419,7 @@ static void basic_ext_param(int i915)
 static void check_single_timeline(int i915, uint32_t ctx, int num_engines)
 {
 #define RCS_TIMESTAMP (0x2000 + 0x358)
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_exec_object2 results = { .handle = gem_create(i915, 4096) };
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
diff --git a/tests/i915/gem_ctx_engines.c b/tests/i915/gem_ctx_engines.c
index e6def511b..7d4abdb5c 100644
--- a/tests/i915/gem_ctx_engines.c
+++ b/tests/i915/gem_ctx_engines.c
@@ -482,7 +482,7 @@ static uint32_t read_result(int timeline, uint32_t *map, int idx)
 static void independent(int i915)
 {
 #define RCS_TIMESTAMP (0x2000 + 0x358)
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const int has_64bit_reloc = gen >= 8;
 	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines , I915_EXEC_RING_MASK + 1);
 	struct drm_i915_gem_context_param param = {
diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c
index 9fdf78bb8..58a35b487 100644
--- a/tests/i915/gem_ctx_isolation.c
+++ b/tests/i915/gem_ctx_isolation.c
@@ -501,7 +501,7 @@ static void dump_regs(int fd,
 		      const struct intel_execution_engine2 *e,
 		      unsigned int regs)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const unsigned int gen_bit = 1 << gen;
 	const unsigned int engine_bit = ENGINE(e->class, e->instance);
 	const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name);
diff --git a/tests/i915/gem_ctx_shared.c b/tests/i915/gem_ctx_shared.c
index 55678d96f..616462d79 100644
--- a/tests/i915/gem_ctx_shared.c
+++ b/tests/i915/gem_ctx_shared.c
@@ -186,7 +186,7 @@ static void exhaust_shared_gtt(int i915, unsigned int flags)
 
 static void exec_shared_gtt(int i915, unsigned int ring)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj = {};
 	struct drm_i915_gem_execbuffer2 execbuf = {
@@ -436,7 +436,7 @@ static void store_dword(int i915, uint32_t ctx, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value,
 			uint32_t cork, unsigned write_domain)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_ctx_thrash.c b/tests/i915/gem_ctx_thrash.c
index dc7259c18..d32619d5d 100644
--- a/tests/i915/gem_ctx_thrash.c
+++ b/tests/i915/gem_ctx_thrash.c
@@ -46,7 +46,7 @@ static void xchg_int(void *array, unsigned i, unsigned j)
 
 static unsigned context_size(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 
 	switch (gen) {
 	case 0:
diff --git a/tests/i915/gem_exec_async.c b/tests/i915/gem_exec_async.c
index 035e78377..9f2c80f05 100644
--- a/tests/i915/gem_exec_async.c
+++ b/tests/i915/gem_exec_async.c
@@ -29,7 +29,7 @@ IGT_TEST_DESCRIPTION("Check that we can issue concurrent writes across the engin
 static void store_dword(int fd, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_exec_await.c b/tests/i915/gem_exec_await.c
index 6bc624e4a..70fda968e 100644
--- a/tests/i915/gem_exec_await.c
+++ b/tests/i915/gem_exec_await.c
@@ -59,7 +59,7 @@ static void wide(int fd, int ring_size, int timeout, unsigned int flags)
 {
 	const struct intel_execution_engine2 *engine;
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct {
 		struct drm_i915_gem_exec_object2 *obj;
 		struct drm_i915_gem_exec_object2 exec[2];
diff --git a/tests/i915/gem_exec_capture.c b/tests/i915/gem_exec_capture.c
index 85645a267..cb0d3151b 100644
--- a/tests/i915/gem_exec_capture.c
+++ b/tests/i915/gem_exec_capture.c
@@ -61,7 +61,7 @@ static void check_error_state(int dir, struct drm_i915_gem_exec_object2 *obj)
 
 static void __capture1(int fd, int dir, unsigned ring, uint32_t target)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[4];
 #define SCRATCH 0
 #define CAPTURE 1
@@ -197,7 +197,7 @@ static struct offset {
 #define INCREMENTAL 0x1
 #define ASYNC 0x2
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 *obj;
 	struct drm_i915_gem_relocation_entry reloc[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index 0b8ab1400..56469ebab 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -61,7 +61,7 @@ static void store(int fd, const struct intel_execution_engine2 *e,
 {
 	const int SCRATCH = 0;
 	const int BATCH = 1;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -122,7 +122,7 @@ static bool fence_busy(int fence)
 static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
 			    unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -218,7 +218,7 @@ static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
 static void test_fence_busy_all(int fd, unsigned flags)
 {
 	const struct intel_execution_engine2 *e;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -598,7 +598,7 @@ static int __execbuf(int fd, struct drm_i915_gem_execbuffer2 *execbuf)
 static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 {
 	const struct intel_execution_engine2 *e2;
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	uint32_t scratch = gem_create(i915, 4096);
 	uint32_t *out = gem_mmap__wc(i915, scratch, 0, 4096, PROT_READ);
 	uint32_t handle[I915_EXEC_RING_MASK];
@@ -704,7 +704,7 @@ static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 
 static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_relocation_entry reloc = {
 		.target_handle =  gem_create(i915, 4096),
 		.write_domain = I915_GEM_DOMAIN_RENDER,
diff --git a/tests/i915/gem_exec_flush.c b/tests/i915/gem_exec_flush.c
index 7d9fcbfcb..403e498bd 100644
--- a/tests/i915/gem_exec_flush.c
+++ b/tests/i915/gem_exec_flush.c
@@ -78,7 +78,7 @@ static uint32_t movnt(uint32_t *map, int i)
 static void run(int fd, unsigned ring, int nchild, int timeout,
 		unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 
 	/* The crux of this testing is whether writes by the GPU are coherent
 	 * from the CPU.
@@ -355,7 +355,7 @@ enum batch_mode {
 static void batch(int fd, unsigned ring, int nchild, int timeout,
 		  enum batch_mode mode, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 
 	if (mode == BATCH_GTT)
 		gem_require_mappable_ggtt(fd);
diff --git a/tests/i915/gem_exec_gttfill.c b/tests/i915/gem_exec_gttfill.c
index 7a6d7c0fb..8f2336a30 100644
--- a/tests/i915/gem_exec_gttfill.c
+++ b/tests/i915/gem_exec_gttfill.c
@@ -107,7 +107,7 @@ static void submit(int fd, int gen,
 
 static void fillgtt(int fd, unsigned ring, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_relocation_entry reloc[2];
 	volatile uint64_t *shared;
diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c
index 198e54fd2..568d727f2 100644
--- a/tests/i915/gem_exec_latency.c
+++ b/tests/i915/gem_exec_latency.c
@@ -109,7 +109,7 @@ static void latency_on_ring(int fd,
 			    unsigned ring, const char *name,
 			    unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
@@ -258,7 +258,7 @@ static void latency_from_ring(int fd,
 			      unsigned ring, const char *name,
 			      unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_exec_nop.c b/tests/i915/gem_exec_nop.c
index 21a937c83..62554ecb2 100644
--- a/tests/i915/gem_exec_nop.c
+++ b/tests/i915/gem_exec_nop.c
@@ -104,7 +104,7 @@ static double nop_on_ring(int fd, uint32_t handle,
 static void poll_ring(int fd, const struct intel_execution_engine2 *e,
 		      int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t MI_ARB_CHK = 0x5 << 23;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj;
@@ -214,7 +214,7 @@ static void poll_ring(int fd, const struct intel_execution_engine2 *e,
 
 static void poll_sequential(int fd, const char *name, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const struct intel_execution_engine2 *e;
 	const uint32_t MI_ARB_CHK = 0x5 << 23;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_exec_parallel.c b/tests/i915/gem_exec_parallel.c
index 96feb8250..bdb8e3e90 100644
--- a/tests/i915/gem_exec_parallel.c
+++ b/tests/i915/gem_exec_parallel.c
@@ -191,7 +191,7 @@ static void handle_close(int fd, unsigned int flags, uint32_t handle, void *data
 
 static void all(int fd, struct intel_execution_engine2 *engine, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	pthread_mutex_t mutex;
 	pthread_cond_t cond;
 	struct thread *threads;
diff --git a/tests/i915/gem_exec_params.c b/tests/i915/gem_exec_params.c
index f8a940740..e0bbea94b 100644
--- a/tests/i915/gem_exec_params.c
+++ b/tests/i915/gem_exec_params.c
@@ -91,7 +91,7 @@ static bool has_resource_streamer(int fd)
 
 static void test_batch_first(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc[2];
diff --git a/tests/i915/gem_exec_reloc.c b/tests/i915/gem_exec_reloc.c
index fc2bd0a56..299c2c79b 100644
--- a/tests/i915/gem_exec_reloc.c
+++ b/tests/i915/gem_exec_reloc.c
@@ -64,7 +64,7 @@ static void write_dword(int fd,
 			uint64_t target_offset,
 			uint32_t value)
 {
-	int gen = intel_gen(intel_get_drm_devid(fd));
+	unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
@@ -266,7 +266,7 @@ static void check_bo(int fd, uint32_t handle)
 
 static void active(int fd, unsigned engine)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -872,7 +872,7 @@ static void basic_softpin(int fd)
 static uint64_t concurrent_relocs(int i915, int idx, int count)
 {
 	struct drm_i915_gem_relocation_entry *reloc;
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	unsigned long sz;
 	int offset;
 
@@ -972,7 +972,7 @@ static void concurrent_child(int i915,
 
 static uint32_t create_concurrent_batch(int i915, unsigned int count)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	size_t sz = ALIGN(4 * (1 + 4 * count), 4096);
 	uint32_t handle = gem_create(i915, sz);
 	uint32_t *map, *cs;
diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 53462c425..74d77d3e6 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -94,7 +94,7 @@ static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
 			      uint32_t target, uint32_t offset, uint32_t value,
 			      uint32_t cork, int fence, unsigned write_domain)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -1074,7 +1074,7 @@ static void semaphore_resolve(int i915, unsigned long flags)
 
 static void semaphore_noskip(int i915, unsigned long flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *outer, *inner;
 	uint32_t ctx;
 
@@ -1723,7 +1723,7 @@ static void deep(int fd, unsigned ring)
 
 	/* Create a deep dependency chain, with a few branches */
 	for (n = 0; n < nreq && igt_seconds_elapsed(&tv) < 2; n++) {
-		const int gen = intel_gen(intel_get_drm_devid(fd));
+		const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 		struct drm_i915_gem_exec_object2 obj[3];
 		struct drm_i915_gem_relocation_entry reloc;
 		struct drm_i915_gem_execbuffer2 eb = {
@@ -1876,7 +1876,7 @@ static void wide(int fd, unsigned ring)
 static void reorder_wide(int fd, unsigned ring)
 {
 	const unsigned int ring_size = gem_submission_measure(fd, ring);
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int priorities[] = { MIN_PRIO, MAX_PRIO };
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_exec_object2 obj[2];
diff --git a/tests/i915/gem_exec_store.c b/tests/i915/gem_exec_store.c
index 272ab9cd8..771ee1690 100644
--- a/tests/i915/gem_exec_store.c
+++ b/tests/i915/gem_exec_store.c
@@ -38,7 +38,7 @@
 
 static void store_dword(int fd, const struct intel_execution_engine2 *e)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -96,7 +96,7 @@ static void store_dword(int fd, const struct intel_execution_engine2 *e)
 static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
 			     unsigned int flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 *obj;
 	struct drm_i915_gem_relocation_entry *reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -172,7 +172,7 @@ static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
 
 static void store_all(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct intel_execution_engine2 *engine;
 	struct drm_i915_gem_relocation_entry *reloc;
diff --git a/tests/i915/gem_exec_suspend.c b/tests/i915/gem_exec_suspend.c
index d768db911..6886bccd4 100644
--- a/tests/i915/gem_exec_suspend.c
+++ b/tests/i915/gem_exec_suspend.c
@@ -89,7 +89,7 @@ static bool has_semaphores(int fd)
 
 static void run_test(int fd, unsigned engine, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_exec_whisper.c b/tests/i915/gem_exec_whisper.c
index 1fded7618..9acf6c306 100644
--- a/tests/i915/gem_exec_whisper.c
+++ b/tests/i915/gem_exec_whisper.c
@@ -168,7 +168,7 @@ static void ctx_set_random_priority(int fd, uint32_t ctx)
 static void whisper(int fd, unsigned engine, unsigned flags)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 batches[QLEN];
 	struct drm_i915_gem_relocation_entry inter[QLEN];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_render_copy.c b/tests/i915/gem_render_copy.c
index ae6e18334..afc490f1a 100644
--- a/tests/i915/gem_render_copy.c
+++ b/tests/i915/gem_render_copy.c
@@ -101,7 +101,7 @@ copy_from_linear_buf(data_t *data, struct intel_buf *src, struct intel_buf *dst)
 static void *linear_copy_ccs(data_t *data, struct intel_buf *buf)
 {
 	void *ccs_data, *linear;
-	int gen = intel_gen(data->devid);
+	unsigned int gen = intel_gen(data->devid);
 	int ccs_size = intel_buf_ccs_width(gen, buf) *
 		intel_buf_ccs_height(gen, buf);
 	int bo_size = intel_buf_bo_size(buf);
@@ -295,7 +295,7 @@ scratch_buf_check_all(data_t *data,
 static void scratch_buf_ccs_check(data_t *data,
 				  struct intel_buf *buf)
 {
-	int gen = intel_gen(data->devid);
+	unsigned int gen = intel_gen(data->devid);
 	int ccs_size = intel_buf_ccs_width(gen, buf) *
 		intel_buf_ccs_height(gen, buf);
 	uint8_t *linear;
diff --git a/tests/i915/gem_ringfill.c b/tests/i915/gem_ringfill.c
index 3e24ccf18..c499cb0dd 100644
--- a/tests/i915/gem_ringfill.c
+++ b/tests/i915/gem_ringfill.c
@@ -99,7 +99,7 @@ static void setup_execbuf(int fd,
 			  struct drm_i915_gem_relocation_entry *reloc,
 			  unsigned int ring)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	uint32_t *batch, *b;
 	int i;
diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index 202abdd88..fcaf8ef30 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -265,7 +265,7 @@ static void test_reverse(int i915)
 
 static uint64_t busy_batch(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 object[2];
@@ -452,7 +452,7 @@ static void xchg_offset(void *array, unsigned i, unsigned j)
 enum sleep { NOSLEEP, SUSPEND, HIBERNATE };
 static void test_noreloc(int fd, enum sleep sleep, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t size = 4096;
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c
index b317a3927..a82bda924 100644
--- a/tests/i915/gem_sync.c
+++ b/tests/i915/gem_sync.c
@@ -491,7 +491,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 static void
 store_ring(int fd, unsigned ring, int num_children, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	ied = list_store_engines(fd, ring);
@@ -587,7 +587,7 @@ store_ring(int fd, unsigned ring, int num_children, int timeout)
 static void
 switch_ring(int fd, unsigned ring, int num_children, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	gem_require_contexts(fd);
@@ -766,7 +766,7 @@ static void *waiter(void *arg)
 static void
 __store_many(int fd, unsigned ring, int timeout, unsigned long *cycles)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 object[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -971,7 +971,7 @@ sync_all(int fd, int num_children, int timeout)
 static void
 store_all(int fd, int num_children, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	ied = list_store_engines(fd, ALL_ENGINES);
diff --git a/tests/i915/gem_tiled_fence_blits.c b/tests/i915/gem_tiled_fence_blits.c
index 99ec78f9b..0a633d91b 100644
--- a/tests/i915/gem_tiled_fence_blits.c
+++ b/tests/i915/gem_tiled_fence_blits.c
@@ -88,7 +88,7 @@ static void check_bo(int fd, uint32_t handle, uint32_t start_val)
 static uint32_t
 create_batch(int fd, struct drm_i915_gem_relocation_entry *reloc)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const bool has_64b_reloc = gen >= 8;
 	uint32_t *batch;
 	uint32_t handle;
diff --git a/tests/i915/gem_userptr_blits.c b/tests/i915/gem_userptr_blits.c
index 6f2e89269..5f47a5f41 100644
--- a/tests/i915/gem_userptr_blits.c
+++ b/tests/i915/gem_userptr_blits.c
@@ -299,7 +299,7 @@ blit(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo)
 static void store_dword(int fd, uint32_t target,
 			uint32_t offset, uint32_t value)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -1155,7 +1155,7 @@ static void store_dword_rand(int i915, unsigned int engine,
 			     uint32_t target, uint64_t sz,
 			     int count)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_relocation_entry *reloc;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_execbuffer2 exec;
diff --git a/tests/i915/gem_vm_create.c b/tests/i915/gem_vm_create.c
index e8af68f19..8843b1b3b 100644
--- a/tests/i915/gem_vm_create.c
+++ b/tests/i915/gem_vm_create.c
@@ -250,7 +250,7 @@ static void execbuf(int i915)
 static void
 write_to_address(int fd, uint32_t ctx, uint64_t addr, uint32_t value)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 batch = {
 		.handle = gem_create(fd, 4096)
 	};
diff --git a/tests/i915/i915_module_load.c b/tests/i915/i915_module_load.c
index 77aaac5c6..aa998b992 100644
--- a/tests/i915/i915_module_load.c
+++ b/tests/i915/i915_module_load.c
@@ -40,7 +40,7 @@
 
 static void store_dword(int fd, unsigned ring)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -102,7 +102,7 @@ static void store_dword(int fd, unsigned ring)
 
 static void store_all(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc[32];
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/i915_pm_rc6_residency.c b/tests/i915/i915_pm_rc6_residency.c
index 6fdc607e3..d484121e7 100644
--- a/tests/i915/i915_pm_rc6_residency.c
+++ b/tests/i915/i915_pm_rc6_residency.c
@@ -361,7 +361,7 @@ static void rc6_idle(int i915)
 {
 	const int64_t duration_ns = SLEEP_DURATION * (int64_t)NSEC_PER_SEC;
 	const int tolerance = 20; /* Some RC6 is better than none! */
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct {
 		const char *name;
 		unsigned int flags;
@@ -452,7 +452,7 @@ static void rc6_fence(int i915)
 {
 	const int64_t duration_ns = SLEEP_DURATION * (int64_t)NSEC_PER_SEC;
 	const int tolerance = 20; /* Some RC6 is better than none! */
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *e;
 	struct power_sample sample[2];
 	unsigned long slept;
diff --git a/tests/i915/sysfs_timeslice_duration.c b/tests/i915/sysfs_timeslice_duration.c
index 2b1e52c80..b5b6ded78 100644
--- a/tests/i915/sysfs_timeslice_duration.c
+++ b/tests/i915/sysfs_timeslice_duration.c
@@ -186,7 +186,7 @@ static uint64_t __test_duration(int i915, int engine, unsigned int timeout)
 		.buffer_count = ARRAY_SIZE(obj),
 		.buffers_ptr = to_user_pointer(obj),
 	};
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	double duration = clockrate(i915);
 	unsigned int class, inst, mmio;
 	uint32_t *cs, *map;
-- 
2.28.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [igt-dev] [PATCH i-g-t 09/10] tests/i915: Treat gen as unsigned for forward compatibility
@ 2020-10-14 10:40   ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

We want to recognise future devices (gen = -1u) and treat them as an
extension of the latest known device, which is typically true.
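
As a standalone sketch (not part of the patch, and the variable names are
only for demonstration), a gen value of -1u keeps passing ">= N" feature
checks only when it is stored unsigned; stored in a plain int it converts
to -1 and fails them:

  #include <stdio.h>

  int main(void)
  {
  	const int signed_gen = -1u;            /* implementation-defined; -1 on the usual two's-complement targets */
  	const unsigned int unsigned_gen = -1u; /* stays at UINT_MAX */

  	printf("signed:   gen >= 8? %s\n", signed_gen >= 8 ? "yes" : "no");
  	printf("unsigned: gen >= 8? %s\n", unsigned_gen >= 8 ? "yes" : "no");
  	return 0;
  }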

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_bad_reloc.c            |  2 +-
 tests/i915/gem_ctx_create.c           |  2 +-
 tests/i915/gem_ctx_engines.c          |  2 +-
 tests/i915/gem_ctx_isolation.c        |  2 +-
 tests/i915/gem_ctx_shared.c           |  4 ++--
 tests/i915/gem_ctx_thrash.c           |  2 +-
 tests/i915/gem_exec_async.c           |  2 +-
 tests/i915/gem_exec_await.c           |  2 +-
 tests/i915/gem_exec_capture.c         |  4 ++--
 tests/i915/gem_exec_fence.c           | 10 +++++-----
 tests/i915/gem_exec_flush.c           |  4 ++--
 tests/i915/gem_exec_gttfill.c         |  2 +-
 tests/i915/gem_exec_latency.c         |  4 ++--
 tests/i915/gem_exec_nop.c             |  4 ++--
 tests/i915/gem_exec_parallel.c        |  2 +-
 tests/i915/gem_exec_params.c          |  2 +-
 tests/i915/gem_exec_reloc.c           |  8 ++++----
 tests/i915/gem_exec_schedule.c        |  8 ++++----
 tests/i915/gem_exec_store.c           |  6 +++---
 tests/i915/gem_exec_suspend.c         |  2 +-
 tests/i915/gem_exec_whisper.c         |  2 +-
 tests/i915/gem_render_copy.c          |  4 ++--
 tests/i915/gem_ringfill.c             |  2 +-
 tests/i915/gem_softpin.c              |  4 ++--
 tests/i915/gem_sync.c                 |  8 ++++----
 tests/i915/gem_tiled_fence_blits.c    |  2 +-
 tests/i915/gem_userptr_blits.c        |  4 ++--
 tests/i915/gem_vm_create.c            |  2 +-
 tests/i915/i915_module_load.c         |  4 ++--
 tests/i915/i915_pm_rc6_residency.c    |  4 ++--
 tests/i915/sysfs_timeslice_duration.c |  2 +-
 31 files changed, 56 insertions(+), 56 deletions(-)

diff --git a/tests/i915/gem_bad_reloc.c b/tests/i915/gem_bad_reloc.c
index 7eb7fa538..6acc1724f 100644
--- a/tests/i915/gem_bad_reloc.c
+++ b/tests/i915/gem_bad_reloc.c
@@ -113,7 +113,7 @@ static void negative_reloc(int fd, unsigned flags)
 
 static void negative_reloc_blt(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[1024][2];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_ctx_create.c b/tests/i915/gem_ctx_create.c
index 39305f026..c7295f705 100644
--- a/tests/i915/gem_ctx_create.c
+++ b/tests/i915/gem_ctx_create.c
@@ -419,7 +419,7 @@ static void basic_ext_param(int i915)
 static void check_single_timeline(int i915, uint32_t ctx, int num_engines)
 {
 #define RCS_TIMESTAMP (0x2000 + 0x358)
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_exec_object2 results = { .handle = gem_create(i915, 4096) };
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
diff --git a/tests/i915/gem_ctx_engines.c b/tests/i915/gem_ctx_engines.c
index e6def511b..7d4abdb5c 100644
--- a/tests/i915/gem_ctx_engines.c
+++ b/tests/i915/gem_ctx_engines.c
@@ -482,7 +482,7 @@ static uint32_t read_result(int timeline, uint32_t *map, int idx)
 static void independent(int i915)
 {
 #define RCS_TIMESTAMP (0x2000 + 0x358)
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const int has_64bit_reloc = gen >= 8;
 	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines , I915_EXEC_RING_MASK + 1);
 	struct drm_i915_gem_context_param param = {
diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c
index 9fdf78bb8..58a35b487 100644
--- a/tests/i915/gem_ctx_isolation.c
+++ b/tests/i915/gem_ctx_isolation.c
@@ -501,7 +501,7 @@ static void dump_regs(int fd,
 		      const struct intel_execution_engine2 *e,
 		      unsigned int regs)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const unsigned int gen_bit = 1 << gen;
 	const unsigned int engine_bit = ENGINE(e->class, e->instance);
 	const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name);
diff --git a/tests/i915/gem_ctx_shared.c b/tests/i915/gem_ctx_shared.c
index 55678d96f..616462d79 100644
--- a/tests/i915/gem_ctx_shared.c
+++ b/tests/i915/gem_ctx_shared.c
@@ -186,7 +186,7 @@ static void exhaust_shared_gtt(int i915, unsigned int flags)
 
 static void exec_shared_gtt(int i915, unsigned int ring)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj = {};
 	struct drm_i915_gem_execbuffer2 execbuf = {
@@ -436,7 +436,7 @@ static void store_dword(int i915, uint32_t ctx, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value,
 			uint32_t cork, unsigned write_domain)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_ctx_thrash.c b/tests/i915/gem_ctx_thrash.c
index dc7259c18..d32619d5d 100644
--- a/tests/i915/gem_ctx_thrash.c
+++ b/tests/i915/gem_ctx_thrash.c
@@ -46,7 +46,7 @@ static void xchg_int(void *array, unsigned i, unsigned j)
 
 static unsigned context_size(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 
 	switch (gen) {
 	case 0:
diff --git a/tests/i915/gem_exec_async.c b/tests/i915/gem_exec_async.c
index 035e78377..9f2c80f05 100644
--- a/tests/i915/gem_exec_async.c
+++ b/tests/i915/gem_exec_async.c
@@ -29,7 +29,7 @@ IGT_TEST_DESCRIPTION("Check that we can issue concurrent writes across the engin
 static void store_dword(int fd, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_exec_await.c b/tests/i915/gem_exec_await.c
index 6bc624e4a..70fda968e 100644
--- a/tests/i915/gem_exec_await.c
+++ b/tests/i915/gem_exec_await.c
@@ -59,7 +59,7 @@ static void wide(int fd, int ring_size, int timeout, unsigned int flags)
 {
 	const struct intel_execution_engine2 *engine;
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct {
 		struct drm_i915_gem_exec_object2 *obj;
 		struct drm_i915_gem_exec_object2 exec[2];
diff --git a/tests/i915/gem_exec_capture.c b/tests/i915/gem_exec_capture.c
index 85645a267..cb0d3151b 100644
--- a/tests/i915/gem_exec_capture.c
+++ b/tests/i915/gem_exec_capture.c
@@ -61,7 +61,7 @@ static void check_error_state(int dir, struct drm_i915_gem_exec_object2 *obj)
 
 static void __capture1(int fd, int dir, unsigned ring, uint32_t target)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[4];
 #define SCRATCH 0
 #define CAPTURE 1
@@ -197,7 +197,7 @@ static struct offset {
 #define INCREMENTAL 0x1
 #define ASYNC 0x2
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 *obj;
 	struct drm_i915_gem_relocation_entry reloc[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index 0b8ab1400..56469ebab 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -61,7 +61,7 @@ static void store(int fd, const struct intel_execution_engine2 *e,
 {
 	const int SCRATCH = 0;
 	const int BATCH = 1;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -122,7 +122,7 @@ static bool fence_busy(int fence)
 static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
 			    unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -218,7 +218,7 @@ static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
 static void test_fence_busy_all(int fd, unsigned flags)
 {
 	const struct intel_execution_engine2 *e;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -598,7 +598,7 @@ static int __execbuf(int fd, struct drm_i915_gem_execbuffer2 *execbuf)
 static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 {
 	const struct intel_execution_engine2 *e2;
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	uint32_t scratch = gem_create(i915, 4096);
 	uint32_t *out = gem_mmap__wc(i915, scratch, 0, 4096, PROT_READ);
 	uint32_t handle[I915_EXEC_RING_MASK];
@@ -704,7 +704,7 @@ static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 
 static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_relocation_entry reloc = {
 		.target_handle =  gem_create(i915, 4096),
 		.write_domain = I915_GEM_DOMAIN_RENDER,
diff --git a/tests/i915/gem_exec_flush.c b/tests/i915/gem_exec_flush.c
index 7d9fcbfcb..403e498bd 100644
--- a/tests/i915/gem_exec_flush.c
+++ b/tests/i915/gem_exec_flush.c
@@ -78,7 +78,7 @@ static uint32_t movnt(uint32_t *map, int i)
 static void run(int fd, unsigned ring, int nchild, int timeout,
 		unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 
 	/* The crux of this testing is whether writes by the GPU are coherent
 	 * from the CPU.
@@ -355,7 +355,7 @@ enum batch_mode {
 static void batch(int fd, unsigned ring, int nchild, int timeout,
 		  enum batch_mode mode, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 
 	if (mode == BATCH_GTT)
 		gem_require_mappable_ggtt(fd);
diff --git a/tests/i915/gem_exec_gttfill.c b/tests/i915/gem_exec_gttfill.c
index 7a6d7c0fb..8f2336a30 100644
--- a/tests/i915/gem_exec_gttfill.c
+++ b/tests/i915/gem_exec_gttfill.c
@@ -107,7 +107,7 @@ static void submit(int fd, int gen,
 
 static void fillgtt(int fd, unsigned ring, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_relocation_entry reloc[2];
 	volatile uint64_t *shared;
diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c
index 198e54fd2..568d727f2 100644
--- a/tests/i915/gem_exec_latency.c
+++ b/tests/i915/gem_exec_latency.c
@@ -109,7 +109,7 @@ static void latency_on_ring(int fd,
 			    unsigned ring, const char *name,
 			    unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
@@ -258,7 +258,7 @@ static void latency_from_ring(int fd,
 			      unsigned ring, const char *name,
 			      unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_exec_nop.c b/tests/i915/gem_exec_nop.c
index 21a937c83..62554ecb2 100644
--- a/tests/i915/gem_exec_nop.c
+++ b/tests/i915/gem_exec_nop.c
@@ -104,7 +104,7 @@ static double nop_on_ring(int fd, uint32_t handle,
 static void poll_ring(int fd, const struct intel_execution_engine2 *e,
 		      int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t MI_ARB_CHK = 0x5 << 23;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj;
@@ -214,7 +214,7 @@ static void poll_ring(int fd, const struct intel_execution_engine2 *e,
 
 static void poll_sequential(int fd, const char *name, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const struct intel_execution_engine2 *e;
 	const uint32_t MI_ARB_CHK = 0x5 << 23;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_exec_parallel.c b/tests/i915/gem_exec_parallel.c
index 96feb8250..bdb8e3e90 100644
--- a/tests/i915/gem_exec_parallel.c
+++ b/tests/i915/gem_exec_parallel.c
@@ -191,7 +191,7 @@ static void handle_close(int fd, unsigned int flags, uint32_t handle, void *data
 
 static void all(int fd, struct intel_execution_engine2 *engine, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	pthread_mutex_t mutex;
 	pthread_cond_t cond;
 	struct thread *threads;
diff --git a/tests/i915/gem_exec_params.c b/tests/i915/gem_exec_params.c
index f8a940740..e0bbea94b 100644
--- a/tests/i915/gem_exec_params.c
+++ b/tests/i915/gem_exec_params.c
@@ -91,7 +91,7 @@ static bool has_resource_streamer(int fd)
 
 static void test_batch_first(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc[2];
diff --git a/tests/i915/gem_exec_reloc.c b/tests/i915/gem_exec_reloc.c
index fc2bd0a56..299c2c79b 100644
--- a/tests/i915/gem_exec_reloc.c
+++ b/tests/i915/gem_exec_reloc.c
@@ -64,7 +64,7 @@ static void write_dword(int fd,
 			uint64_t target_offset,
 			uint32_t value)
 {
-	int gen = intel_gen(intel_get_drm_devid(fd));
+	unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
@@ -266,7 +266,7 @@ static void check_bo(int fd, uint32_t handle)
 
 static void active(int fd, unsigned engine)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -872,7 +872,7 @@ static void basic_softpin(int fd)
 static uint64_t concurrent_relocs(int i915, int idx, int count)
 {
 	struct drm_i915_gem_relocation_entry *reloc;
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	unsigned long sz;
 	int offset;
 
@@ -972,7 +972,7 @@ static void concurrent_child(int i915,
 
 static uint32_t create_concurrent_batch(int i915, unsigned int count)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	size_t sz = ALIGN(4 * (1 + 4 * count), 4096);
 	uint32_t handle = gem_create(i915, sz);
 	uint32_t *map, *cs;
diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 53462c425..74d77d3e6 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -94,7 +94,7 @@ static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
 			      uint32_t target, uint32_t offset, uint32_t value,
 			      uint32_t cork, int fence, unsigned write_domain)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -1074,7 +1074,7 @@ static void semaphore_resolve(int i915, unsigned long flags)
 
 static void semaphore_noskip(int i915, unsigned long flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *outer, *inner;
 	uint32_t ctx;
 
@@ -1723,7 +1723,7 @@ static void deep(int fd, unsigned ring)
 
 	/* Create a deep dependency chain, with a few branches */
 	for (n = 0; n < nreq && igt_seconds_elapsed(&tv) < 2; n++) {
-		const int gen = intel_gen(intel_get_drm_devid(fd));
+		const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 		struct drm_i915_gem_exec_object2 obj[3];
 		struct drm_i915_gem_relocation_entry reloc;
 		struct drm_i915_gem_execbuffer2 eb = {
@@ -1876,7 +1876,7 @@ static void wide(int fd, unsigned ring)
 static void reorder_wide(int fd, unsigned ring)
 {
 	const unsigned int ring_size = gem_submission_measure(fd, ring);
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int priorities[] = { MIN_PRIO, MAX_PRIO };
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_exec_object2 obj[2];
diff --git a/tests/i915/gem_exec_store.c b/tests/i915/gem_exec_store.c
index 272ab9cd8..771ee1690 100644
--- a/tests/i915/gem_exec_store.c
+++ b/tests/i915/gem_exec_store.c
@@ -38,7 +38,7 @@
 
 static void store_dword(int fd, const struct intel_execution_engine2 *e)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -96,7 +96,7 @@ static void store_dword(int fd, const struct intel_execution_engine2 *e)
 static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
 			     unsigned int flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 *obj;
 	struct drm_i915_gem_relocation_entry *reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -172,7 +172,7 @@ static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
 
 static void store_all(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct intel_execution_engine2 *engine;
 	struct drm_i915_gem_relocation_entry *reloc;
diff --git a/tests/i915/gem_exec_suspend.c b/tests/i915/gem_exec_suspend.c
index d768db911..6886bccd4 100644
--- a/tests/i915/gem_exec_suspend.c
+++ b/tests/i915/gem_exec_suspend.c
@@ -89,7 +89,7 @@ static bool has_semaphores(int fd)
 
 static void run_test(int fd, unsigned engine, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_exec_whisper.c b/tests/i915/gem_exec_whisper.c
index 1fded7618..9acf6c306 100644
--- a/tests/i915/gem_exec_whisper.c
+++ b/tests/i915/gem_exec_whisper.c
@@ -168,7 +168,7 @@ static void ctx_set_random_priority(int fd, uint32_t ctx)
 static void whisper(int fd, unsigned engine, unsigned flags)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 batches[QLEN];
 	struct drm_i915_gem_relocation_entry inter[QLEN];
 	struct drm_i915_gem_relocation_entry reloc;
diff --git a/tests/i915/gem_render_copy.c b/tests/i915/gem_render_copy.c
index ae6e18334..afc490f1a 100644
--- a/tests/i915/gem_render_copy.c
+++ b/tests/i915/gem_render_copy.c
@@ -101,7 +101,7 @@ copy_from_linear_buf(data_t *data, struct intel_buf *src, struct intel_buf *dst)
 static void *linear_copy_ccs(data_t *data, struct intel_buf *buf)
 {
 	void *ccs_data, *linear;
-	int gen = intel_gen(data->devid);
+	unsigned int gen = intel_gen(data->devid);
 	int ccs_size = intel_buf_ccs_width(gen, buf) *
 		intel_buf_ccs_height(gen, buf);
 	int bo_size = intel_buf_bo_size(buf);
@@ -295,7 +295,7 @@ scratch_buf_check_all(data_t *data,
 static void scratch_buf_ccs_check(data_t *data,
 				  struct intel_buf *buf)
 {
-	int gen = intel_gen(data->devid);
+	unsigned int gen = intel_gen(data->devid);
 	int ccs_size = intel_buf_ccs_width(gen, buf) *
 		intel_buf_ccs_height(gen, buf);
 	uint8_t *linear;
diff --git a/tests/i915/gem_ringfill.c b/tests/i915/gem_ringfill.c
index 3e24ccf18..c499cb0dd 100644
--- a/tests/i915/gem_ringfill.c
+++ b/tests/i915/gem_ringfill.c
@@ -99,7 +99,7 @@ static void setup_execbuf(int fd,
 			  struct drm_i915_gem_relocation_entry *reloc,
 			  unsigned int ring)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	uint32_t *batch, *b;
 	int i;
diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index 202abdd88..fcaf8ef30 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -265,7 +265,7 @@ static void test_reverse(int i915)
 
 static uint64_t busy_batch(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const int has_64bit_reloc = gen >= 8;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 object[2];
@@ -452,7 +452,7 @@ static void xchg_offset(void *array, unsigned i, unsigned j)
 enum sleep { NOSLEEP, SUSPEND, HIBERNATE };
 static void test_noreloc(int fd, enum sleep sleep, unsigned flags)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t size = 4096;
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c
index b317a3927..a82bda924 100644
--- a/tests/i915/gem_sync.c
+++ b/tests/i915/gem_sync.c
@@ -491,7 +491,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 static void
 store_ring(int fd, unsigned ring, int num_children, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	ied = list_store_engines(fd, ring);
@@ -587,7 +587,7 @@ store_ring(int fd, unsigned ring, int num_children, int timeout)
 static void
 switch_ring(int fd, unsigned ring, int num_children, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	gem_require_contexts(fd);
@@ -766,7 +766,7 @@ static void *waiter(void *arg)
 static void
 __store_many(int fd, unsigned ring, int timeout, unsigned long *cycles)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 object[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -971,7 +971,7 @@ sync_all(int fd, int num_children, int timeout)
 static void
 store_all(int fd, int num_children, int timeout)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	ied = list_store_engines(fd, ALL_ENGINES);
diff --git a/tests/i915/gem_tiled_fence_blits.c b/tests/i915/gem_tiled_fence_blits.c
index 99ec78f9b..0a633d91b 100644
--- a/tests/i915/gem_tiled_fence_blits.c
+++ b/tests/i915/gem_tiled_fence_blits.c
@@ -88,7 +88,7 @@ static void check_bo(int fd, uint32_t handle, uint32_t start_val)
 static uint32_t
 create_batch(int fd, struct drm_i915_gem_relocation_entry *reloc)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const bool has_64b_reloc = gen >= 8;
 	uint32_t *batch;
 	uint32_t handle;
diff --git a/tests/i915/gem_userptr_blits.c b/tests/i915/gem_userptr_blits.c
index 6f2e89269..5f47a5f41 100644
--- a/tests/i915/gem_userptr_blits.c
+++ b/tests/i915/gem_userptr_blits.c
@@ -299,7 +299,7 @@ blit(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo)
 static void store_dword(int fd, uint32_t target,
 			uint32_t offset, uint32_t value)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -1155,7 +1155,7 @@ static void store_dword_rand(int i915, unsigned int engine,
 			     uint32_t target, uint64_t sz,
 			     int count)
 {
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_relocation_entry *reloc;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_execbuffer2 exec;
diff --git a/tests/i915/gem_vm_create.c b/tests/i915/gem_vm_create.c
index e8af68f19..8843b1b3b 100644
--- a/tests/i915/gem_vm_create.c
+++ b/tests/i915/gem_vm_create.c
@@ -250,7 +250,7 @@ static void execbuf(int i915)
 static void
 write_to_address(int fd, uint32_t ctx, uint64_t addr, uint32_t value)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 batch = {
 		.handle = gem_create(fd, 4096)
 	};
diff --git a/tests/i915/i915_module_load.c b/tests/i915/i915_module_load.c
index 77aaac5c6..aa998b992 100644
--- a/tests/i915/i915_module_load.c
+++ b/tests/i915/i915_module_load.c
@@ -40,7 +40,7 @@
 
 static void store_dword(int fd, unsigned ring)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -102,7 +102,7 @@ static void store_dword(int fd, unsigned ring)
 
 static void store_all(int fd)
 {
-	const int gen = intel_gen(intel_get_drm_devid(fd));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc[32];
 	struct drm_i915_gem_execbuffer2 execbuf;
diff --git a/tests/i915/i915_pm_rc6_residency.c b/tests/i915/i915_pm_rc6_residency.c
index 6fdc607e3..d484121e7 100644
--- a/tests/i915/i915_pm_rc6_residency.c
+++ b/tests/i915/i915_pm_rc6_residency.c
@@ -361,7 +361,7 @@ static void rc6_idle(int i915)
 {
 	const int64_t duration_ns = SLEEP_DURATION * (int64_t)NSEC_PER_SEC;
 	const int tolerance = 20; /* Some RC6 is better than none! */
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct {
 		const char *name;
 		unsigned int flags;
@@ -452,7 +452,7 @@ static void rc6_fence(int i915)
 {
 	const int64_t duration_ns = SLEEP_DURATION * (int64_t)NSEC_PER_SEC;
 	const int tolerance = 20; /* Some RC6 is better than none! */
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *e;
 	struct power_sample sample[2];
 	unsigned long slept;
diff --git a/tests/i915/sysfs_timeslice_duration.c b/tests/i915/sysfs_timeslice_duration.c
index 2b1e52c80..b5b6ded78 100644
--- a/tests/i915/sysfs_timeslice_duration.c
+++ b/tests/i915/sysfs_timeslice_duration.c
@@ -186,7 +186,7 @@ static uint64_t __test_duration(int i915, int engine, unsigned int timeout)
 		.buffer_count = ARRAY_SIZE(obj),
 		.buffers_ptr = to_user_pointer(obj),
 	};
-	const int gen = intel_gen(intel_get_drm_devid(i915));
+	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	double duration = clockrate(i915);
 	unsigned int class, inst, mmio;
 	uint32_t *cs, *map;
-- 
2.28.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [Intel-gfx] [PATCH i-g-t 10/10] i915/gem_exec_schedule: Try to spot unfairness
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:40   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Chris Wilson

An important property for multi-client systems is that each client gets
a 'fair' allotment of system time. (Where fairness is at the whim of the
context properties, such as priorities.) This test forks N independent
clients (albeit they happen to share a single vm), and does an equal
amount of work in each client and asserts that they take an equal amount of
time.

Though we have never claimed to have a completely fair scheduler, that
is what is expected.
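
As a rough, CPU-only sketch of the kind of check this boils down to (the
helper name and the 10% tolerance below are illustrative, not taken from
the patch): each client reports how long its fixed workload took, and the
spread between the fastest and slowest client is bounded.

  #include <assert.h>
  #include <stdint.h>

  static void check_fairness(const uint64_t *client_ns, int nclients)
  {
  	uint64_t min = UINT64_MAX, max = 0;

  	for (int i = 0; i < nclients; i++) {
  		if (client_ns[i] < min)
  			min = client_ns[i];
  		if (client_ns[i] > max)
  			max = client_ns[i];
  	}

  	/* Equal work should take (roughly) equal time per client. */
  	assert(max <= min + min / 10);
  }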

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ramalingam C <ramalingam.c@intel.com>
---
 tests/i915/gem_exec_schedule.c | 847 +++++++++++++++++++++++++++++++++
 1 file changed, 847 insertions(+)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 74d77d3e6..da5a6d248 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -29,6 +29,7 @@
 #include <sys/poll.h>
 #include <sys/ioctl.h>
 #include <sys/mman.h>
+#include <sys/resource.h>
 #include <sys/syscall.h>
 #include <sched.h>
 #include <signal.h>
@@ -2531,6 +2532,819 @@ static void measure_semaphore_power(int i915)
 	rapl_close(&pkg);
 }
 
+static int read_timestamp_frequency(int i915)
+{
+	int value = 0;
+	drm_i915_getparam_t gp = {
+		.value = &value,
+		.param = I915_PARAM_CS_TIMESTAMP_FREQUENCY,
+	};
+	ioctl(i915, DRM_IOCTL_I915_GETPARAM, &gp);
+	return value;
+}
+
+static uint64_t div64_u64_round_up(uint64_t x, uint64_t y)
+{
+	return (x + y - 1) / y;
+}
+
+static uint64_t ns_to_ctx_ticks(int i915, uint64_t ns)
+{
+	int f = read_timestamp_frequency(i915);
+	if (intel_gen(intel_get_drm_devid(i915)) == 11)
+		f = 12500000; /* icl!!! are you feeling alright? CTX vs CS */
+	return div64_u64_round_up(ns * f, NSEC_PER_SEC);
+}
+
+static uint64_t ticks_to_ns(int i915, uint64_t ticks)
+{
+	return div64_u64_round_up(ticks * NSEC_PER_SEC,
+				  read_timestamp_frequency(i915));
+}
+
+#define MI_INSTR(opcode, flags) (((opcode) << 23) | (flags))
+
+#define MI_MATH(x)                      MI_INSTR(0x1a, (x) - 1)
+#define MI_MATH_INSTR(opcode, op1, op2) ((opcode) << 20 | (op1) << 10 | (op2))
+/* Opcodes for MI_MATH_INSTR */
+#define   MI_MATH_NOOP                  MI_MATH_INSTR(0x000, 0x0, 0x0)
+#define   MI_MATH_LOAD(op1, op2)        MI_MATH_INSTR(0x080, op1, op2)
+#define   MI_MATH_LOADINV(op1, op2)     MI_MATH_INSTR(0x480, op1, op2)
+#define   MI_MATH_LOAD0(op1)            MI_MATH_INSTR(0x081, op1)
+#define   MI_MATH_LOAD1(op1)            MI_MATH_INSTR(0x481, op1)
+#define   MI_MATH_ADD                   MI_MATH_INSTR(0x100, 0x0, 0x0)
+#define   MI_MATH_SUB                   MI_MATH_INSTR(0x101, 0x0, 0x0)
+#define   MI_MATH_AND                   MI_MATH_INSTR(0x102, 0x0, 0x0)
+#define   MI_MATH_OR                    MI_MATH_INSTR(0x103, 0x0, 0x0)
+#define   MI_MATH_XOR                   MI_MATH_INSTR(0x104, 0x0, 0x0)
+#define   MI_MATH_STORE(op1, op2)       MI_MATH_INSTR(0x180, op1, op2)
+#define   MI_MATH_STOREINV(op1, op2)    MI_MATH_INSTR(0x580, op1, op2)
+/* Registers used as operands in MI_MATH_INSTR */
+#define   MI_MATH_REG(x)                (x)
+#define   MI_MATH_REG_SRCA              0x20
+#define   MI_MATH_REG_SRCB              0x21
+#define   MI_MATH_REG_ACCU              0x31
+#define   MI_MATH_REG_ZF                0x32
+#define   MI_MATH_REG_CF                0x33
+
+#define MI_LOAD_REGISTER_REG    MI_INSTR(0x2A, 1)
+
+static void delay(int i915,
+		  const struct intel_execution_engine2 *e,
+		  uint32_t handle,
+		  uint64_t addr,
+		  uint64_t ns)
+{
+	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+	const uint32_t base = gem_engine_mmio_base(i915, e->name);
+#define CS_GPR(x) (base + 0x600 + 8 * (x))
+#define RUNTIME (base + 0x3a8)
+	enum { START_TS, NOW_TS };
+	uint32_t *map, *cs, *jmp;
+
+	igt_require(base);
+
+	/* Loop until CTX_TIMESTAMP - initial > @ns */
+
+	cs = map = gem_mmap__device_coherent(i915, handle, 0, 4096, PROT_WRITE);
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(START_TS) + 4;
+	*cs++ = 0;
+	*cs++ = MI_LOAD_REGISTER_REG;
+	*cs++ = RUNTIME;
+	*cs++ = CS_GPR(START_TS);
+
+	while (offset_in_page(cs) & 63)
+		*cs++ = 0;
+	jmp = cs;
+
+	*cs++ = 0x5 << 23; /* MI_ARB_CHECK */
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(NOW_TS) + 4;
+	*cs++ = 0;
+	*cs++ = MI_LOAD_REGISTER_REG;
+	*cs++ = RUNTIME;
+	*cs++ = CS_GPR(NOW_TS);
+
+	/* delta = now - start; inverted to match COND_BBE */
+	*cs++ = MI_MATH(4);
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS));
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS));
+	*cs++ = MI_MATH_SUB;
+	*cs++ = MI_MATH_STOREINV(MI_MATH_REG(NOW_TS), MI_MATH_REG_ACCU);
+
+	/* Save delta for reading by COND_BBE */
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_GPR(NOW_TS);
+	*cs++ = addr + 4000;
+	*cs++ = addr >> 32;
+
+	/* Delay between SRM and COND_BBE to post the writes */
+	for (int n = 0; n < 8; n++) {
+		*cs++ = MI_STORE_DWORD_IMM;
+		if (use_64b) {
+			*cs++ = addr + 4064;
+			*cs++ = addr >> 32;
+		} else {
+			*cs++ = 0;
+			*cs++ = addr + 4064;
+		}
+		*cs++ = 0;
+	}
+
+	/* Break if delta [time elapsed] > ns */
+	*cs++ = MI_COND_BATCH_BUFFER_END | MI_DO_COMPARE | (1 + use_64b);
+	*cs++ = ~ns_to_ctx_ticks(i915, ns);
+	*cs++ = addr + 4000;
+	*cs++ = addr >> 32;
+
+	/* Otherwise back to recalculating delta */
+	*cs++ = MI_BATCH_BUFFER_START | 1 << 8 | use_64b;
+	*cs++ = addr + offset_in_page(jmp);
+	*cs++ = addr >> 32;
+
+	munmap(map, 4096);
+}
+
+static struct drm_i915_gem_exec_object2
+delay_create(int i915, uint32_t ctx,
+	     const struct intel_execution_engine2 *e,
+	     uint64_t target_ns)
+{
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = batch_create(i915),
+		.flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+		.rsvd1 = ctx,
+		.flags = e->flags,
+	};
+
+	obj.offset = obj.handle << 12;
+	gem_execbuf(i915, &execbuf);
+	gem_sync(i915, obj.handle);
+
+	delay(i915, e, obj.handle, obj.offset, target_ns);
+
+	obj.flags |= EXEC_OBJECT_PINNED;
+	return obj;
+}
+
+static void tslog(int i915,
+		  const struct intel_execution_engine2 *e,
+		  uint32_t handle,
+		  uint64_t addr)
+{
+	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+	const uint32_t base = gem_engine_mmio_base(i915, e->name);
+#define CS_GPR(x) (base + 0x600 + 8 * (x))
+#define CS_TIMESTAMP (base + 0x358)
+	enum { INC, MASK, ADDR };
+	uint32_t *timestamp_lo, *addr_lo;
+	uint32_t *map, *cs;
+
+	igt_require(base);
+
+	map = gem_mmap__device_coherent(i915, handle, 0, 4096, PROT_WRITE);
+	cs = map + 512;
+
+	/* Record the current CS_TIMESTAMP into a journal [a 512 slot ring]. */
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_TIMESTAMP;
+	timestamp_lo = cs;
+	*cs++ = addr;
+	*cs++ = addr >> 32;
+
+	/* Load the address + inc & mask variables */
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(ADDR);
+	addr_lo = cs;
+	*cs++ = addr;
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(ADDR) + 4;
+	*cs++ = addr >> 32;
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(INC);
+	*cs++ = 4;
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(INC) + 4;
+	*cs++ = 0;
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(MASK);
+	*cs++ = 0xfffff7ff;
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(MASK) + 4;
+	*cs++ = 0xffffffff;
+
+	/* Increment the [ring] address for saving CS_TIMESTAMP */
+	*cs++ = MI_MATH(8);
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(INC));
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(ADDR));
+	*cs++ = MI_MATH_ADD;
+	*cs++ = MI_MATH_STORE(MI_MATH_REG(ADDR), MI_MATH_REG_ACCU);
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(ADDR));
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(MASK));
+	*cs++ = MI_MATH_AND;
+	*cs++ = MI_MATH_STORE(MI_MATH_REG(ADDR), MI_MATH_REG_ACCU);
+
+	/* Rewrite the batch buffer for the next execution */
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_GPR(ADDR);
+	*cs++ = addr + offset_in_page(timestamp_lo);
+	*cs++ = addr >> 32;
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_GPR(ADDR);
+	*cs++ = addr + offset_in_page(addr_lo);
+	*cs++ = addr >> 32;
+
+	*cs++ = MI_BATCH_BUFFER_END;
+
+	munmap(map, 4096);
+}
+
+static struct drm_i915_gem_exec_object2
+tslog_create(int i915, uint32_t ctx, const struct intel_execution_engine2 *e)
+{
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = batch_create(i915),
+		.flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+		.rsvd1 = ctx,
+		.flags = e->flags,
+	};
+
+	obj.offset = obj.handle << 12;
+	gem_execbuf(i915, &execbuf);
+	gem_sync(i915, obj.handle);
+
+	tslog(i915, e, obj.handle, obj.offset);
+
+	obj.flags |= EXEC_OBJECT_PINNED;
+	return obj;
+}
+
+static int cmp_u32(const void *A, const void *B)
+{
+	const uint32_t *a = A, *b = B;
+
+	if (*a < *b)
+		return -1;
+	else if (*a > *b)
+		return 1;
+	else
+		return 0;
+}
+
+static bool has_ctx_timestamp(int i915, const struct intel_execution_engine2 *e)
+{
+	const int gen = intel_gen(intel_get_drm_devid(i915));
+
+	if (gen == 8 && e->class == I915_ENGINE_CLASS_VIDEO)
+		return false; /* looks fubar */
+
+	return true;
+}
+
+static struct intel_execution_engine2
+pick_random_engine(int i915, const struct intel_execution_engine2 *not)
+{
+	const struct intel_execution_engine2 *e;
+	unsigned int count = 0;
+
+	__for_each_physical_engine(i915, e) {
+		if (e->flags == not->flags)
+			continue;
+		if (!gem_class_has_mutable_submission(i915, e->class))
+			continue;
+		count++;
+	}
+	if (!count)
+		return *not;
+
+	count = rand() % count;
+	__for_each_physical_engine(i915, e) {
+		if (e->flags == not->flags)
+			continue;
+		if (!gem_class_has_mutable_submission(i915, e->class))
+			continue;
+		if (!count--)
+			break;
+	}
+
+	return *e;
+}
+
+static void fair_child(int i915, uint32_t ctx,
+		       const struct intel_execution_engine2 *e,
+		       uint64_t frame_ns,
+		       int timeline,
+		       uint32_t common,
+		       unsigned int flags,
+		       unsigned long *ctl,
+		       unsigned long *out)
+#define F_SYNC		(1 << 0)
+#define F_PACE		(1 << 1)
+#define F_FLOW		(1 << 2)
+#define F_HALF		(1 << 3)
+#define F_SOLO		(1 << 4)
+#define F_SPARE		(1 << 5)
+#define F_NEXT		(1 << 6)
+#define F_VIP		(1 << 7)
+#define F_RRUL		(1 << 8)
+#define F_SHARE		(1 << 9)
+#define F_PING		(1 << 10)
+#define F_THROTTLE	(1 << 11)
+#define F_ISOLATE	(1 << 12)
+{
+	const int batches_per_frame = flags & F_SOLO ? 1 : 3;
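+	/*
+	 * obj[0]: the timestamp journal, filled in below (on the ping engine)
+	 * obj[1]: per-client scratch, or the shared buffer with F_SHARE
+	 * obj[2], obj[3]: two identical delay batches, alternated each frame
+	 */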
+	struct drm_i915_gem_exec_object2 obj[4] = {
+		{},
+		{
+			.handle = common ?: gem_create(i915, 4096),
+		},
+		delay_create(i915, ctx, e, frame_ns / batches_per_frame),
+		delay_create(i915, ctx, e, frame_ns / batches_per_frame),
+	};
+	struct intel_execution_engine2 ping = *e;
+	int p_fence = -1, n_fence = -1;
+	unsigned long count = 0;
+	int n;
+
+	srandom(getpid());
+	if (flags & F_PING)
+		ping = pick_random_engine(i915, e);
+	obj[0] = tslog_create(i915, ctx, &ping);
+
+	while (!READ_ONCE(*ctl)) {
+		struct drm_i915_gem_execbuffer2 execbuf = {
+			.buffers_ptr = to_user_pointer(obj),
+			.buffer_count = 4,
+			.rsvd1 = ctx,
+			.rsvd2 = -1,
+			.flags = e->flags,
+		};
+
+		if (flags & F_FLOW) {
+			unsigned int seq;
+
+			seq = count;
+			if (flags & F_NEXT)
+				seq++;
+
+			execbuf.rsvd2 =
+				sw_sync_timeline_create_fence(timeline, seq);
+			execbuf.flags |= I915_EXEC_FENCE_IN;
+		}
+
+		execbuf.flags |= I915_EXEC_FENCE_OUT;
+		gem_execbuf_wr(i915, &execbuf);
+		n_fence = execbuf.rsvd2 >> 32;
+		execbuf.flags &= ~(I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_IN);
+		for (n = 1; n < batches_per_frame; n++)
+			gem_execbuf(i915, &execbuf);
+		close(execbuf.rsvd2);
+
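+		/*
+		 * Log a CS_TIMESTAMP on the ping engine via the tslog batch
+		 * (at byte offset 2048), gated on completion of the frame's
+		 * first batch.
+		 */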
+		execbuf.buffer_count = 1;
+		execbuf.batch_start_offset = 2048;
+		execbuf.flags = ping.flags | I915_EXEC_FENCE_IN;
+		execbuf.rsvd2 = n_fence;
+		gem_execbuf(i915, &execbuf);
+
+		if (flags & F_PACE && p_fence != -1) {
+			struct pollfd pfd = {
+				.fd = p_fence,
+				.events = POLLIN,
+			};
+			poll(&pfd, 1, -1);
+		}
+		close(p_fence);
+
+		if (flags & F_SYNC) {
+			struct pollfd pfd = {
+				.fd = n_fence,
+				.events = POLLIN,
+			};
+			poll(&pfd, 1, -1);
+		}
+
+		if (flags & F_THROTTLE)
+			igt_ioctl(i915, DRM_IOCTL_I915_GEM_THROTTLE, 0);
+
+		igt_swap(obj[2], obj[3]);
+		igt_swap(p_fence, n_fence);
+		count++;
+	}
+	close(p_fence);
+
+	gem_close(i915, obj[3].handle);
+	gem_close(i915, obj[2].handle);
+	if (obj[1].handle != common)
+		gem_close(i915, obj[1].handle);
+
+	gem_sync(i915, obj[0].handle);
+	if (out) {
+		uint32_t *map;
+
+		map = gem_mmap__device_coherent(i915, obj[0].handle,
+						0, 4096, PROT_WRITE);
+		for (n = 1; n < min(count, 512); n++) {
+			igt_assert(map[n]);
+			map[n - 1] = map[n] - map[n - 1];
+		}
+		qsort(map, --n, sizeof(*map), cmp_u32);
+		*out = ticks_to_ns(i915, map[n / 2]);
+		munmap(map, 4096);
+	}
+	gem_close(i915, obj[0].handle);
+}
+
+static int cmp_ul(const void *A, const void *B)
+{
+	const unsigned long *a = A, *b = B;
+
+	if (*a < *b)
+		return -1;
+	else if (*a > *b)
+		return 1;
+	else
+		return 0;
+}
+
+static uint64_t d_cpu_time(const struct rusage *a, const struct rusage *b)
+{
+	uint64_t cpu_time = 0;
+
+	cpu_time += (a->ru_utime.tv_sec - b->ru_utime.tv_sec) * NSEC_PER_SEC;
+	cpu_time += (a->ru_utime.tv_usec - b->ru_utime.tv_usec) * 1000;
+
+	cpu_time += (a->ru_stime.tv_sec - b->ru_stime.tv_sec) * NSEC_PER_SEC;
+	cpu_time += (a->ru_stime.tv_usec - b->ru_stime.tv_usec) * 1000;
+
+	return cpu_time;
+}
+
+static void timeline_advance(int timeline, int delay_ns)
+{
+	struct timespec tv = { .tv_nsec = delay_ns };
+	nanosleep(&tv, NULL);
+	sw_sync_timeline_inc(timeline, 1);
+}
+
+static void fairness(int i915,
+		     const struct intel_execution_engine2 *e,
+		     int timeout, unsigned int flags)
+{
+	const int frame_ns = 16666 * 1000; /* ~60Hz frame budget */
+	const int fence_ns = flags & F_HALF ? 2 * frame_ns : frame_ns;
+	unsigned long *result;
+	uint32_t common = 0;
+
+	igt_require(has_ctx_timestamp(i915, e));
+	igt_require(gem_class_has_mutable_submission(i915, e->class));
+
+	if (flags & F_SHARE)
+		common = gem_create(i915, 4095);
+
+	result = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+
+	for (int n = 2; n <= 64; n <<= 1) { /* 32 == 500us per client */
+		int timeline = sw_sync_timeline_create();
+		int nfences = timeout * NSEC_PER_SEC / fence_ns + 1;
+		const int nchild = n - 1; /* odd for easy medians */
+		const int child_ns = frame_ns / (nchild + !!(flags & F_SPARE));
+		const int lo = nchild / 4;
+		const int hi = (3 * nchild + 3) / 4 - 1;
+		struct rusage old_usage, usage;
+		uint64_t cpu_time, d_time;
+		unsigned long vip = -1;
+		struct timespec tv;
+		struct igt_mean m;
+
+		if (flags & F_PING) {
+			struct intel_execution_engine2 *ping;
+
+			__for_each_physical_engine(i915, ping) {
+				if (ping->flags == e->flags)
+					continue;
+
+				igt_fork(child, 1) {
+					uint32_t ctx = gem_context_clone_with_engines(i915, 0);
+
+					fair_child(i915, ctx, ping,
+						   child_ns / 8,
+						   -1, common,
+						   F_SOLO | F_PACE | F_SHARE,
+						   &result[nchild],
+						   NULL);
+
+					gem_context_destroy(i915, ctx);
+				}
+			}
+		}
+
+		memset(result, 0, (nchild + 1) * sizeof(result[0]));
+		getrusage(RUSAGE_CHILDREN, &old_usage);
+		igt_nsec_elapsed(memset(&tv, 0, sizeof(tv)));
+		igt_fork(child, nchild) {
+			uint32_t ctx;
+
+			if (flags & F_ISOLATE) {
+				int clone, dmabuf = -1;
+
+				if (common)
+					dmabuf = prime_handle_to_fd(i915, common);
+
+				clone = gem_reopen_driver(i915);
+				gem_context_copy_engines(i915, 0, clone, 0);
+				i915 = clone;
+
+				if (dmabuf != -1)
+					common = prime_fd_to_handle(i915, dmabuf);
+			}
+
+			ctx = gem_context_clone_with_engines(i915, 0);
+
+			if (flags & F_VIP && child == 0) {
+				gem_context_set_priority(i915, ctx, MAX_PRIO);
+				flags |= F_FLOW;
+			}
+			if (flags & F_RRUL && child == 0)
+				flags |= F_SOLO | F_FLOW | F_SYNC;
+
+			fair_child(i915, ctx, e, child_ns,
+				   timeline, common, flags,
+				   &result[nchild],
+				   &result[child]);
+
+			gem_context_destroy(i915, ctx);
+		}
+
+		while (nfences--)
+			timeline_advance(timeline, fence_ns);
+
+		result[nchild] = 1;
+		for (int child = 0; child < nchild; child++) {
+			while (!READ_ONCE(result[child]))
+				timeline_advance(timeline, fence_ns);
+		}
+
+		igt_waitchildren();
+		close(timeline);
+
+		/* Are we running out of CPU time, and failing to submit frames? */
+		d_time = igt_nsec_elapsed(&tv);
+		getrusage(RUSAGE_CHILDREN, &usage);
+		cpu_time = d_cpu_time(&usage, &old_usage);
+		if (10 * cpu_time > 9 * d_time) {
+			if (nchild > 7)
+				break;
+
+			igt_skip_on_f(10 * cpu_time > 9 * d_time,
+				      "%.0f%% CPU usage, presuming capacity exceeded\n",
+				      100. * cpu_time / d_time);
+		}
+
+		igt_mean_init(&m);
+		for (int child = 0; child < nchild; child++)
+			igt_mean_add(&m, result[child]);
+
+		if (flags & (F_VIP | F_RRUL))
+			vip = result[0];
+
+		qsort(result, nchild, sizeof(*result), cmp_ul);
+		igt_info("%2d clients, range: [%.1f, %.1f], iqr: [%.1f, %.1f], median: %.1f, mean: %.1f ± %.2f ms\n",
+			 nchild,
+			 1e-6 * result[0],  1e-6 * result[nchild - 1],
+			 1e-6 * result[lo], 1e-6 * result[hi],
+			 1e-6 * result[nchild / 2],
+			 1e-6 * igt_mean_get(&m),
+			 1e-6 * sqrt(igt_mean_get_variance(&m)));
+
+		if (vip != -1) {
+			igt_info("VIP interval %.2f ms\n", 1e-6 * vip);
+			igt_assert(4 * vip > 3 * fence_ns &&
+				   3 * vip < 4 * fence_ns);
+		}
+
+		/* May be slowed due to sheer volume of context switches */
+		igt_assert(4 * igt_mean_get(&m) > 3 * fence_ns &&
+			       igt_mean_get(&m) < 3 * fence_ns);
+
+		igt_assert(4 * igt_mean_get(&m) > 3 * result[nchild / 2] &&
+			   3 * igt_mean_get(&m) < 4 * result[nchild / 2]);
+
+		igt_assert(2 * (result[hi] - result[lo]) < result[nchild / 2]);
+	}
+
+	munmap(result, 4096);
+	if (common)
+		gem_close(i915, common);
+}
+
+static void test_fairness(int i915, int timeout)
+{
+	static const struct {
+		const char *name;
+		unsigned int flags;
+	} fair[] = {
+		/*
+		 * none - maximal greed in each client
+		 *
+		 * Push as many frames from each client as fast as possible
+		 */
+		{ "none",       0 },
+		{ "none-vip",   F_VIP }, /* one vip client must meet deadlines */
+		{ "none-solo",  F_SOLO }, /* 1 batch per frame per client */
+		{ "none-share", F_SHARE }, /* read from a common buffer */
+		{ "none-rrul",  F_RRUL }, /* "realtime-response under load" */
+		{ "none-ping",  F_PING }, /* measure inter-engine fairness */
+
+		/*
+		 * throttle - original per client throttling
+		 *
+		 * Used for front-buffer rendering where there is no
+		 * external frame marker. Each client tries to only keep
+		 * 20ms of work submitted, though that measurement is
+		 * flawed...
+		 *
+		 * This is used by Xorg to try and maintain some resemblance
+		 * of input/output consistency when being fed a continuous
+		 * stream of X11 draw requests straight into scanout, where
+		 * the clients may submit the work faster than can be drawn.
+		 *
+		 * Throttling tracks requests per-file (and assumes that
+		 * all requests are in submission order across the whole file),
+		 * so we split each child to its own fd.
+		 */
+		{ "throttle",       F_THROTTLE | F_ISOLATE },
+		{ "throttle-vip",   F_THROTTLE | F_ISOLATE | F_VIP },
+		{ "throttle-solo",  F_THROTTLE | F_ISOLATE | F_SOLO },
+		{ "throttle-share", F_THROTTLE | F_ISOLATE | F_SHARE },
+		{ "throttle-rrul",  F_THROTTLE | F_ISOLATE | F_RRUL },
+
+		/*
+		 * pace - mesa "submit double buffering"
+		 *
+		 * Submit a frame, wait for previous frame to start. This
+		 * prevents each client from getting too far ahead of its
+		 * rendering, maintaining a consistent input/output latency.
+		 */
+		{ "pace",       F_PACE },
+		{ "pace-solo",  F_PACE | F_SOLO},
+		{ "pace-share", F_PACE | F_SHARE},
+		{ "pace-ping",  F_PACE | F_SHARE | F_PING},
+
+		/* sync - only submit a frame at a time */
+		{ "sync",      F_SYNC },
+		{ "sync-vip",  F_SYNC | F_VIP },
+		{ "sync-solo", F_SYNC | F_SOLO },
+
+		/* flow - synchronise execution against the clock (vblank) */
+		{ "flow",       F_PACE | F_FLOW },
+		{ "flow-share", F_PACE | F_FLOW | F_SHARE },
+		{ "flow-ping",  F_PACE | F_FLOW | F_SHARE | F_PING },
+
+		/* next - submit ahead of the clock (vblank double buffering) */
+		{ "next",       F_PACE | F_FLOW | F_NEXT },
+		{ "next-share", F_PACE | F_FLOW | F_NEXT | F_SHARE },
+		{ "next-ping",  F_PACE | F_FLOW | F_NEXT | F_SHARE | F_PING },
+
+		/* spare - underutilise by a single client timeslice */
+		{ "spare", F_PACE | F_FLOW | F_SPARE },
+
+		/* half - run at half pace (submit 16ms of work every 32ms) */
+		{ "half",  F_PACE | F_FLOW | F_HALF },
+
+		{}
+	};
+
+	igt_fixture {
+		igt_info("CS timestamp frequency: %d\n",
+			 read_timestamp_frequency(i915));
+
+		igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
+	}
+
+	for (typeof(*fair) *f = fair; f->name; f++) {
+		igt_subtest_with_dynamic_f("fair-%s", f->name)  {
+			const struct intel_execution_engine2 *e;
+
+			__for_each_physical_engine(i915, e) {
+				if (!gem_class_can_store_dword(i915, e->class))
+					continue;
+
+				igt_dynamic_f("%s", e->name)
+					fairness(i915, e, timeout, f->flags);
+			}
+		}
+	}
+}
+
+static uint32_t read_ctx_timestamp(int i915,
+				   uint32_t ctx,
+				   const struct intel_execution_engine2 *e)
+{
+	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+	const uint32_t base = gem_engine_mmio_base(i915, e->name);
+	struct drm_i915_gem_relocation_entry reloc;
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = gem_create(i915, 4096),
+		.offset = 32 << 20,
+		.relocs_ptr = to_user_pointer(&reloc),
+		.relocation_count = 1,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+		.flags = e->flags,
+		.rsvd1 = ctx,
+	};
+#define RUNTIME (base + 0x3a8)
+	uint32_t *map, *cs;
+	uint32_t ts;
+
+	igt_require(base);
+
+	cs = map = gem_mmap__device_coherent(i915, obj.handle,
+					     0, 4096, PROT_WRITE);
+
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = RUNTIME;
+	memset(&reloc, 0, sizeof(reloc));
+	reloc.target_handle = obj.handle;
+	reloc.presumed_offset = obj.offset;
+	reloc.offset = offset_in_page(cs);
+	reloc.delta = 4000;
+	*cs++ = obj.offset + 4000;
+	*cs++ = obj.offset >> 32;
+
+	*cs++ = MI_BATCH_BUFFER_END;
+
+	gem_execbuf(i915, &execbuf);
+	gem_sync(i915, obj.handle);
+	gem_close(i915, obj.handle);
+
+	ts = map[1000];
+	munmap(map, 4096);
+
+	return ts;
+}
+
+static void fairslice(int i915,
+		      const struct intel_execution_engine2 *e,
+		      unsigned long flags)
+{
+	igt_spin_t *spin = NULL;
+	uint32_t ctx[3];
+	uint32_t ts[3];
+
+	for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
+		ctx[i] = gem_context_clone_with_engines(i915, 0);
+		if (spin == NULL) {
+			spin = __igt_spin_new(i915,
+					      .ctx = ctx[i],
+					      .engine = e->flags,
+					      .flags = flags);
+		} else {
+			struct drm_i915_gem_execbuffer2 eb = {
+				.buffer_count = 1,
+				.buffers_ptr = to_user_pointer(&spin->obj[IGT_SPIN_BATCH]),
+				.flags = e->flags,
+				.rsvd1 = ctx[i],
+			};
+			gem_execbuf(i915, &eb);
+		}
+	}
+
+	sleep(2); /* over the course of many timeslices */
+
+	igt_assert(gem_bo_busy(i915, spin->handle));
+	igt_spin_end(spin);
+	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
+		ts[i] = read_ctx_timestamp(i915, ctx[i], e);
+
+	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
+		gem_context_destroy(i915, ctx[i]);
+	igt_spin_free(i915, spin);
+
+	qsort(ts, 3, sizeof(*ts), cmp_u32);
+	igt_info("%s: [%.1f, %.1f] ms\n", e->name,
+		 1e-6 * ticks_to_ns(i915, ts[0]),
+		 1e-6 * ticks_to_ns(i915, ts[2]));
+
+	igt_assert(ts[0] && ts[2] > ts[0]);
+	igt_assert(4 * ts[0] > 3 * ts[2]);
+}
+
 #define test_each_engine(T, i915, e) \
 	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
 		igt_dynamic_f("%s", e->name)
@@ -2601,6 +3415,35 @@ igt_main
 		test_each_engine("u-lateslice", fd, e)
 			lateslice(fd, e->flags, IGT_SPIN_USERPTR);
 
+		igt_subtest_group {
+			igt_fixture {
+				igt_require(gem_scheduler_has_semaphores(fd));
+				igt_require(gem_scheduler_has_preemption(fd));
+				igt_require(intel_gen(intel_get_drm_devid(fd)) >= 8);
+			}
+
+			test_each_engine("fairslice", fd, e)
+				fairslice(fd, e, 0);
+
+			test_each_engine("u-fairslice", fd, e)
+				fairslice(fd, e, IGT_SPIN_USERPTR);
+
+			igt_subtest("fairslice-all")  {
+				__for_each_physical_engine(fd, e) {
+					igt_fork(child, 1)
+						fairslice(fd, e, 0);
+				}
+				igt_waitchildren();
+			}
+			igt_subtest("u-fairslice-all")  {
+				__for_each_physical_engine(fd, e) {
+					igt_fork(child, 1)
+						fairslice(fd, e, IGT_SPIN_USERPTR);
+				}
+				igt_waitchildren();
+			}
+		}
+
 		test_each_engine("submit-early-slice", fd, e)
 			submit_slice(fd, e, EARLY_SUBMIT);
 		test_each_engine("u-submit-early-slice", fd, e)
@@ -2644,6 +3487,10 @@ igt_main
 		test_each_engine_store("promotion", fd, e)
 			promotion(fd, e->flags);
 
+		igt_subtest_group {
+			test_fairness(fd, 2);
+		}
+
 		igt_subtest_group {
 			igt_fixture {
 				igt_require(gem_scheduler_has_preemption(fd));
-- 
2.28.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [igt-dev] [PATCH i-g-t 10/10] i915/gem_exec_schedule: Try to spot unfairness
@ 2020-10-14 10:40   ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:40 UTC (permalink / raw)
  To: igt-dev; +Cc: intel-gfx, Tvrtko Ursulin, Chris Wilson

An important property for multi-client systems is that each client gets
a 'fair' allotment of system time. (Where fairness is at the whim of the
context properties, such as priorities.) This test forks N independent
clients (albeit they happen to share a single vm), does an equal
amount of work in each client, and asserts that each takes an equal
amount of time.

Though we have never claimed to have a completely fair scheduler, that
is what is expected.
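
To make the acceptance criterion concrete: each client reports the median
interval between its frames, and the parent checks that the spread of the
per-client medians is small (mean within a 3:4 ratio of the median, plus a
bound on the interquartile range). A standalone sketch of just that
skeleton, in plain POSIX C with a nanosleep() standing in for a frame of
GPU work and all numbers picked purely for illustration, looks roughly
like:

#include <assert.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define NCHILD 7
#define NFRAMES 64

static int cmp_ul(const void *A, const void *B)
{
	const unsigned long *a = A, *b = B;

	return *a < *b ? -1 : *a > *b;
}

int main(void)
{
	unsigned long *result = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				     MAP_SHARED | MAP_ANON, -1, 0);

	for (int child = 0; child < NCHILD; child++) {
		if (fork()) /* parent keeps forking */
			continue;

		unsigned long dt[NFRAMES];

		for (int n = 0; n < NFRAMES; n++) {
			struct timespec a, b, frame = { .tv_nsec = 2000000 };

			clock_gettime(CLOCK_MONOTONIC, &a);
			nanosleep(&frame, NULL); /* stand-in for GPU work */
			clock_gettime(CLOCK_MONOTONIC, &b);

			dt[n] = (b.tv_sec - a.tv_sec) * 1000000000ul +
				b.tv_nsec - a.tv_nsec;
		}

		qsort(dt, NFRAMES, sizeof(*dt), cmp_ul);
		result[child] = dt[NFRAMES / 2]; /* median frame interval */
		_exit(0);
	}
	while (wait(NULL) > 0)
		;

	qsort(result, NCHILD, sizeof(*result), cmp_ul);
	for (int child = 0; child < NCHILD; child++) {
		/* simplified check: every client within 3:4 of the median */
		assert(4 * result[child] > 3 * result[NCHILD / 2]);
		assert(3 * result[child] < 4 * result[NCHILD / 2]);
	}

	return 0;
}

The test below applies the same style of ratio checks to the mean, median
and interquartile range of the per-client results, and measures the
intervals with CS_TIMESTAMP writes from the GPU rather than the CPU clock.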

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ramalingam C <ramalingam.c@intel.com>
---
 tests/i915/gem_exec_schedule.c | 847 +++++++++++++++++++++++++++++++++
 1 file changed, 847 insertions(+)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 74d77d3e6..da5a6d248 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -29,6 +29,7 @@
 #include <sys/poll.h>
 #include <sys/ioctl.h>
 #include <sys/mman.h>
+#include <sys/resource.h>
 #include <sys/syscall.h>
 #include <sched.h>
 #include <signal.h>
@@ -2531,6 +2532,819 @@ static void measure_semaphore_power(int i915)
 	rapl_close(&pkg);
 }
 
+static int read_timestamp_frequency(int i915)
+{
+	int value = 0;
+	drm_i915_getparam_t gp = {
+		.value = &value,
+		.param = I915_PARAM_CS_TIMESTAMP_FREQUENCY,
+	};
+	ioctl(i915, DRM_IOCTL_I915_GETPARAM, &gp);
+	return value;
+}
+
+static uint64_t div64_u64_round_up(uint64_t x, uint64_t y)
+{
+	return (x + y - 1) / y;
+}
+
+static uint64_t ns_to_ctx_ticks(int i915, uint64_t ns)
+{
+	int f = read_timestamp_frequency(i915);
+	if (intel_gen(intel_get_drm_devid(i915)) == 11)
+		f = 12500000; /* icl!!! are you feeling alright? CTX vs CS */
+	return div64_u64_round_up(ns * f, NSEC_PER_SEC);
+}
+
+static uint64_t ticks_to_ns(int i915, uint64_t ticks)
+{
+	return div64_u64_round_up(ticks * NSEC_PER_SEC,
+				  read_timestamp_frequency(i915));
+}
+
+#define MI_INSTR(opcode, flags) (((opcode) << 23) | (flags))
+
+#define MI_MATH(x)                      MI_INSTR(0x1a, (x) - 1)
+#define MI_MATH_INSTR(opcode, op1, op2) ((opcode) << 20 | (op1) << 10 | (op2))
+/* Opcodes for MI_MATH_INSTR */
+#define   MI_MATH_NOOP                  MI_MATH_INSTR(0x000, 0x0, 0x0)
+#define   MI_MATH_LOAD(op1, op2)        MI_MATH_INSTR(0x080, op1, op2)
+#define   MI_MATH_LOADINV(op1, op2)     MI_MATH_INSTR(0x480, op1, op2)
+#define   MI_MATH_LOAD0(op1)            MI_MATH_INSTR(0x081, op1)
+#define   MI_MATH_LOAD1(op1)            MI_MATH_INSTR(0x481, op1)
+#define   MI_MATH_ADD                   MI_MATH_INSTR(0x100, 0x0, 0x0)
+#define   MI_MATH_SUB                   MI_MATH_INSTR(0x101, 0x0, 0x0)
+#define   MI_MATH_AND                   MI_MATH_INSTR(0x102, 0x0, 0x0)
+#define   MI_MATH_OR                    MI_MATH_INSTR(0x103, 0x0, 0x0)
+#define   MI_MATH_XOR                   MI_MATH_INSTR(0x104, 0x0, 0x0)
+#define   MI_MATH_STORE(op1, op2)       MI_MATH_INSTR(0x180, op1, op2)
+#define   MI_MATH_STOREINV(op1, op2)    MI_MATH_INSTR(0x580, op1, op2)
+/* Registers used as operands in MI_MATH_INSTR */
+#define   MI_MATH_REG(x)                (x)
+#define   MI_MATH_REG_SRCA              0x20
+#define   MI_MATH_REG_SRCB              0x21
+#define   MI_MATH_REG_ACCU              0x31
+#define   MI_MATH_REG_ZF                0x32
+#define   MI_MATH_REG_CF                0x33
+
+#define MI_LOAD_REGISTER_REG    MI_INSTR(0x2A, 1)
+
+static void delay(int i915,
+		  const struct intel_execution_engine2 *e,
+		  uint32_t handle,
+		  uint64_t addr,
+		  uint64_t ns)
+{
+	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+	const uint32_t base = gem_engine_mmio_base(i915, e->name);
+#define CS_GPR(x) (base + 0x600 + 8 * (x))
+#define RUNTIME (base + 0x3a8)
+	enum { START_TS, NOW_TS };
+	uint32_t *map, *cs, *jmp;
+
+	igt_require(base);
+
+	/* Loop until CTX_TIMESTAMP - initial > @ns */
+
+	cs = map = gem_mmap__device_coherent(i915, handle, 0, 4096, PROT_WRITE);
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(START_TS) + 4;
+	*cs++ = 0;
+	*cs++ = MI_LOAD_REGISTER_REG;
+	*cs++ = RUNTIME;
+	*cs++ = CS_GPR(START_TS);
+
+	while (offset_in_page(cs) & 63)
+		*cs++ = 0;
+	jmp = cs;
+
+	*cs++ = 0x5 << 23; /* MI_ARB_CHECK */
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(NOW_TS) + 4;
+	*cs++ = 0;
+	*cs++ = MI_LOAD_REGISTER_REG;
+	*cs++ = RUNTIME;
+	*cs++ = CS_GPR(NOW_TS);
+
+	/* delta = now - start; inverted to match COND_BBE */
+	*cs++ = MI_MATH(4);
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS));
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS));
+	*cs++ = MI_MATH_SUB;
+	*cs++ = MI_MATH_STOREINV(MI_MATH_REG(NOW_TS), MI_MATH_REG_ACCU);
+
+	/* Save delta for reading by COND_BBE */
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_GPR(NOW_TS);
+	*cs++ = addr + 4000;
+	*cs++ = addr >> 32;
+
+	/* Delay between SRM and COND_BBE to post the writes */
+	for (int n = 0; n < 8; n++) {
+		*cs++ = MI_STORE_DWORD_IMM;
+		if (use_64b) {
+			*cs++ = addr + 4064;
+			*cs++ = addr >> 32;
+		} else {
+			*cs++ = 0;
+			*cs++ = addr + 4064;
+		}
+		*cs++ = 0;
+	}
+
+	/* Break if delta [time elapsed] > ns */
+	*cs++ = MI_COND_BATCH_BUFFER_END | MI_DO_COMPARE | (1 + use_64b);
+	*cs++ = ~ns_to_ctx_ticks(i915, ns);
+	*cs++ = addr + 4000;
+	*cs++ = addr >> 32;
+
+	/* Otherwise back to recalculating delta */
+	*cs++ = MI_BATCH_BUFFER_START | 1 << 8 | use_64b;
+	*cs++ = addr + offset_in_page(jmp);
+	*cs++ = addr >> 32;
+
+	munmap(map, 4096);
+}
+
+static struct drm_i915_gem_exec_object2
+delay_create(int i915, uint32_t ctx,
+	     const struct intel_execution_engine2 *e,
+	     uint64_t target_ns)
+{
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = batch_create(i915),
+		.flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+		.rsvd1 = ctx,
+		.flags = e->flags,
+	};
+
+	obj.offset = obj.handle << 12;
+	gem_execbuf(i915, &execbuf);
+	gem_sync(i915, obj.handle);
+
+	delay(i915, e, obj.handle, obj.offset, target_ns);
+
+	obj.flags |= EXEC_OBJECT_PINNED;
+	return obj;
+}
+
+static void tslog(int i915,
+		  const struct intel_execution_engine2 *e,
+		  uint32_t handle,
+		  uint64_t addr)
+{
+	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+	const uint32_t base = gem_engine_mmio_base(i915, e->name);
+#define CS_GPR(x) (base + 0x600 + 8 * (x))
+#define CS_TIMESTAMP (base + 0x358)
+	enum { INC, MASK, ADDR };
+	uint32_t *timestamp_lo, *addr_lo;
+	uint32_t *map, *cs;
+
+	igt_require(base);
+
+	map = gem_mmap__device_coherent(i915, handle, 0, 4096, PROT_WRITE);
+	cs = map + 512;
+
+	/* Record the current CS_TIMESTAMP into a journal [a 512 slot ring]. */
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_TIMESTAMP;
+	timestamp_lo = cs;
+	*cs++ = addr;
+	*cs++ = addr >> 32;
+
+	/* Load the address + inc & mask variables */
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(ADDR);
+	addr_lo = cs;
+	*cs++ = addr;
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(ADDR) + 4;
+	*cs++ = addr >> 32;
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(INC);
+	*cs++ = 4;
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(INC) + 4;
+	*cs++ = 0;
+
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(MASK);
+	*cs++ = 0xfffff7ff;
+	*cs++ = MI_LOAD_REGISTER_IMM;
+	*cs++ = CS_GPR(MASK) + 4;
+	*cs++ = 0xffffffff;
+
+	/* Increment the [ring] address for saving CS_TIMESTAMP */
+	*cs++ = MI_MATH(8);
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(INC));
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(ADDR));
+	*cs++ = MI_MATH_ADD;
+	*cs++ = MI_MATH_STORE(MI_MATH_REG(ADDR), MI_MATH_REG_ACCU);
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(ADDR));
+	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(MASK));
+	*cs++ = MI_MATH_AND;
+	*cs++ = MI_MATH_STORE(MI_MATH_REG(ADDR), MI_MATH_REG_ACCU);
+
+	/* Rewrite the batch buffer for the next execution */
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_GPR(ADDR);
+	*cs++ = addr + offset_in_page(timestamp_lo);
+	*cs++ = addr >> 32;
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = CS_GPR(ADDR);
+	*cs++ = addr + offset_in_page(addr_lo);
+	*cs++ = addr >> 32;
+
+	*cs++ = MI_BATCH_BUFFER_END;
+
+	munmap(map, 4096);
+}
+
+static struct drm_i915_gem_exec_object2
+tslog_create(int i915, uint32_t ctx, const struct intel_execution_engine2 *e)
+{
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = batch_create(i915),
+		.flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+		.rsvd1 = ctx,
+		.flags = e->flags,
+	};
+
+	obj.offset = obj.handle << 12;
+	gem_execbuf(i915, &execbuf);
+	gem_sync(i915, obj.handle);
+
+	tslog(i915, e, obj.handle, obj.offset);
+
+	obj.flags |= EXEC_OBJECT_PINNED;
+	return obj;
+}
+
+static int cmp_u32(const void *A, const void *B)
+{
+	const uint32_t *a = A, *b = B;
+
+	if (*a < *b)
+		return -1;
+	else if (*a > *b)
+		return 1;
+	else
+		return 0;
+}
+
+static bool has_ctx_timestamp(int i915, const struct intel_execution_engine2 *e)
+{
+	const int gen = intel_gen(intel_get_drm_devid(i915));
+
+	if (gen == 8 && e->class == I915_ENGINE_CLASS_VIDEO)
+		return false; /* looks fubar */
+
+	return true;
+}
+
+static struct intel_execution_engine2
+pick_random_engine(int i915, const struct intel_execution_engine2 *not)
+{
+	const struct intel_execution_engine2 *e;
+	unsigned int count = 0;
+
+	__for_each_physical_engine(i915, e) {
+		if (e->flags == not->flags)
+			continue;
+		if (!gem_class_has_mutable_submission(i915, e->class))
+			continue;
+		count++;
+	}
+	if (!count)
+		return *not;
+
+	count = rand() % count;
+	__for_each_physical_engine(i915, e) {
+		if (e->flags == not->flags)
+			continue;
+		if (!gem_class_has_mutable_submission(i915, e->class))
+			continue;
+		if (!count--)
+			break;
+	}
+
+	return *e;
+}
+
+static void fair_child(int i915, uint32_t ctx,
+		       const struct intel_execution_engine2 *e,
+		       uint64_t frame_ns,
+		       int timeline,
+		       uint32_t common,
+		       unsigned int flags,
+		       unsigned long *ctl,
+		       unsigned long *out)
+#define F_SYNC		(1 << 0)
+#define F_PACE		(1 << 1)
+#define F_FLOW		(1 << 2)
+#define F_HALF		(1 << 3)
+#define F_SOLO		(1 << 4)
+#define F_SPARE		(1 << 5)
+#define F_NEXT		(1 << 6)
+#define F_VIP		(1 << 7)
+#define F_RRUL		(1 << 8)
+#define F_SHARE		(1 << 9)
+#define F_PING		(1 << 10)
+#define F_THROTTLE	(1 << 11)
+#define F_ISOLATE	(1 << 12)
+{
+	const int batches_per_frame = flags & F_SOLO ? 1 : 3;
+	struct drm_i915_gem_exec_object2 obj[4] = {
+		{},
+		{
+			.handle = common ?: gem_create(i915, 4096),
+		},
+		delay_create(i915, ctx, e, frame_ns / batches_per_frame),
+		delay_create(i915, ctx, e, frame_ns / batches_per_frame),
+	};
+	struct intel_execution_engine2 ping = *e;
+	int p_fence = -1, n_fence = -1;
+	unsigned long count = 0;
+	int n;
+
+	srandom(getpid());
+	if (flags & F_PING)
+		ping = pick_random_engine(i915, e);
+	obj[0] = tslog_create(i915, ctx, &ping);
+
+	while (!READ_ONCE(*ctl)) {
+		struct drm_i915_gem_execbuffer2 execbuf = {
+			.buffers_ptr = to_user_pointer(obj),
+			.buffer_count = 4,
+			.rsvd1 = ctx,
+			.rsvd2 = -1,
+			.flags = e->flags,
+		};
+
+		if (flags & F_FLOW) {
+			unsigned int seq;
+
+			seq = count;
+			if (flags & F_NEXT)
+				seq++;
+
+			execbuf.rsvd2 =
+				sw_sync_timeline_create_fence(timeline, seq);
+			execbuf.flags |= I915_EXEC_FENCE_IN;
+		}
+
+		execbuf.flags |= I915_EXEC_FENCE_OUT;
+		gem_execbuf_wr(i915, &execbuf);
+		n_fence = execbuf.rsvd2 >> 32;
+		execbuf.flags &= ~(I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_IN);
+		for (n = 1; n < batches_per_frame; n++)
+			gem_execbuf(i915, &execbuf);
+		close(execbuf.rsvd2);
+
+		execbuf.buffer_count = 1;
+		execbuf.batch_start_offset = 2048;
+		execbuf.flags = ping.flags | I915_EXEC_FENCE_IN;
+		execbuf.rsvd2 = n_fence;
+		gem_execbuf(i915, &execbuf);
+
+		if (flags & F_PACE && p_fence != -1) {
+			struct pollfd pfd = {
+				.fd = p_fence,
+				.events = POLLIN,
+			};
+			poll(&pfd, 1, -1);
+		}
+		close(p_fence);
+
+		if (flags & F_SYNC) {
+			struct pollfd pfd = {
+				.fd = n_fence,
+				.events = POLLIN,
+			};
+			poll(&pfd, 1, -1);
+		}
+
+		if (flags & F_THROTTLE)
+			igt_ioctl(i915, DRM_IOCTL_I915_GEM_THROTTLE, 0);
+
+		igt_swap(obj[2], obj[3]);
+		igt_swap(p_fence, n_fence);
+		count++;
+	}
+	close(p_fence);
+
+	gem_close(i915, obj[3].handle);
+	gem_close(i915, obj[2].handle);
+	if (obj[1].handle != common)
+		gem_close(i915, obj[1].handle);
+
+	gem_sync(i915, obj[0].handle);
+	if (out) {
+		uint32_t *map;
+
+		map = gem_mmap__device_coherent(i915, obj[0].handle,
+						0, 4096, PROT_WRITE);
+		for (n = 1; n < min(count, 512); n++) {
+			igt_assert(map[n]);
+			map[n - 1] = map[n] - map[n - 1];
+		}
+		qsort(map, --n, sizeof(*map), cmp_u32);
+		*out = ticks_to_ns(i915, map[n / 2]);
+		munmap(map, 4096);
+	}
+	gem_close(i915, obj[0].handle);
+}
+
+static int cmp_ul(const void *A, const void *B)
+{
+	const unsigned long *a = A, *b = B;
+
+	if (*a < *b)
+		return -1;
+	else if (*a > *b)
+		return 1;
+	else
+		return 0;
+}
+
+static uint64_t d_cpu_time(const struct rusage *a, const struct rusage *b)
+{
+	uint64_t cpu_time = 0;
+
+	cpu_time += (a->ru_utime.tv_sec - b->ru_utime.tv_sec) * NSEC_PER_SEC;
+	cpu_time += (a->ru_utime.tv_usec - b->ru_utime.tv_usec) * 1000;
+
+	cpu_time += (a->ru_stime.tv_sec - b->ru_stime.tv_sec) * NSEC_PER_SEC;
+	cpu_time += (a->ru_stime.tv_usec - b->ru_stime.tv_usec) * 1000;
+
+	return cpu_time;
+}
+
+static void timeline_advance(int timeline, int delay_ns)
+{
+	struct timespec tv = { .tv_nsec = delay_ns };
+	nanosleep(&tv, NULL);
+	sw_sync_timeline_inc(timeline, 1);
+}
+
+static void fairness(int i915,
+		     const struct intel_execution_engine2 *e,
+		     int timeout, unsigned int flags)
+{
+	const int frame_ns = 16666 * 1000;
+	const int fence_ns = flags & F_HALF ? 2 * frame_ns : frame_ns;
+	unsigned long *result;
+	uint32_t common = 0;
+
+	igt_require(has_ctx_timestamp(i915, e));
+	igt_require(gem_class_has_mutable_submission(i915, e->class));
+
+	if (flags & F_SHARE)
+		common = gem_create(i915, 4095);
+
+	result = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+
+	for (int n = 2; n <= 64; n <<= 1) { /* 32 == 500us per client */
+		int timeline = sw_sync_timeline_create();
+		int nfences = timeout * NSEC_PER_SEC / fence_ns + 1;
+		const int nchild = n - 1; /* odd for easy medians */
+		const int child_ns = frame_ns / (nchild + !!(flags & F_SPARE));
+		const int lo = nchild / 4;
+		const int hi = (3 * nchild + 3) / 4 - 1;
+		struct rusage old_usage, usage;
+		uint64_t cpu_time, d_time;
+		unsigned long vip = -1;
+		struct timespec tv;
+		struct igt_mean m;
+
+		if (flags & F_PING) {
+			struct intel_execution_engine2 *ping;
+
+			__for_each_physical_engine(i915, ping) {
+				if (ping->flags == e->flags)
+					continue;
+
+				igt_fork(child, 1) {
+					uint32_t ctx = gem_context_clone_with_engines(i915, 0);
+
+					fair_child(i915, ctx, ping,
+						   child_ns / 8,
+						   -1, common,
+						   F_SOLO | F_PACE | F_SHARE,
+						   &result[nchild],
+						   NULL);
+
+					gem_context_destroy(i915, ctx);
+				}
+			}
+		}
+
+		memset(result, 0, (nchild + 1) * sizeof(result[0]));
+		getrusage(RUSAGE_CHILDREN, &old_usage);
+		igt_nsec_elapsed(memset(&tv, 0, sizeof(tv)));
+		igt_fork(child, nchild) {
+			uint32_t ctx;
+
+			if (flags & F_ISOLATE) {
+				int clone, dmabuf = -1;
+
+				if (common)
+					dmabuf = prime_handle_to_fd(i915, common);
+
+				clone = gem_reopen_driver(i915);
+				gem_context_copy_engines(i915, 0, clone, 0);
+				i915 = clone;
+
+				if (dmabuf != -1)
+					common = prime_fd_to_handle(i915, dmabuf);
+			}
+
+			ctx = gem_context_clone_with_engines(i915, 0);
+
+			if (flags & F_VIP && child == 0) {
+				gem_context_set_priority(i915, ctx, MAX_PRIO);
+				flags |= F_FLOW;
+			}
+			if (flags & F_RRUL && child == 0)
+				flags |= F_SOLO | F_FLOW | F_SYNC;
+
+			fair_child(i915, ctx, e, child_ns,
+				   timeline, common, flags,
+				   &result[nchild],
+				   &result[child]);
+
+			gem_context_destroy(i915, ctx);
+		}
+
+		while (nfences--)
+			timeline_advance(timeline, fence_ns);
+
+		result[nchild] = 1;
+		for (int child = 0; child < nchild; child++) {
+			while (!READ_ONCE(result[child]))
+				timeline_advance(timeline, fence_ns);
+		}
+
+		igt_waitchildren();
+		close(timeline);
+
+		/* Are we running out of CPU time, and failing to submit frames? */
+		d_time = igt_nsec_elapsed(&tv);
+		getrusage(RUSAGE_CHILDREN, &usage);
+		cpu_time = d_cpu_time(&usage, &old_usage);
+		if (10 * cpu_time > 9 * d_time) {
+			if (nchild > 7)
+				break;
+
+			igt_skip_on_f(10 * cpu_time > 9 * d_time,
+				      "%.0f%% CPU usage, presuming capacity exceeded\n",
+				      100. * cpu_time / d_time);
+		}
+
+		igt_mean_init(&m);
+		for (int child = 0; child < nchild; child++)
+			igt_mean_add(&m, result[child]);
+
+		if (flags & (F_VIP | F_RRUL))
+			vip = result[0];
+
+		qsort(result, nchild, sizeof(*result), cmp_ul);
+		igt_info("%2d clients, range: [%.1f, %.1f], iqr: [%.1f, %.1f], median: %.1f, mean: %.1f ± %.2f ms\n",
+			 nchild,
+			 1e-6 * result[0],  1e-6 * result[nchild - 1],
+			 1e-6 * result[lo], 1e-6 * result[hi],
+			 1e-6 * result[nchild / 2],
+			 1e-6 * igt_mean_get(&m),
+			 1e-6 * sqrt(igt_mean_get_variance(&m)));
+
+		if (vip != -1) {
+			igt_info("VIP interval %.2f ms\n", 1e-6 * vip);
+			igt_assert(4 * vip > 3 * fence_ns &&
+				   3 * vip < 4 * fence_ns);
+		}
+
+		/* May be slowed due to sheer volume of context switches */
+		igt_assert(4 * igt_mean_get(&m) > 3 * fence_ns &&
+			       igt_mean_get(&m) < 3 * fence_ns);
+
+		igt_assert(4 * igt_mean_get(&m) > 3 * result[nchild / 2] &&
+			   3 * igt_mean_get(&m) < 4 * result[nchild / 2]);
+
+		igt_assert(2 * (result[hi] - result[lo]) < result[nchild / 2]);
+	}
+
+	munmap(result, 4096);
+	if (common)
+		gem_close(i915, common);
+}
+
+static void test_fairness(int i915, int timeout)
+{
+	static const struct {
+		const char *name;
+		unsigned int flags;
+	} fair[] = {
+		/*
+		 * none - maximal greed in each client
+		 *
+		 * Push as many frames from each client as fast as possible
+		 */
+		{ "none",       0 },
+		{ "none-vip",   F_VIP }, /* one vip client must meet deadlines */
+		{ "none-solo",  F_SOLO }, /* 1 batch per frame per client */
+		{ "none-share", F_SHARE }, /* read from a common buffer */
+		{ "none-rrul",  F_RRUL }, /* "realtime-response under load" */
+		{ "none-ping",  F_PING }, /* measure inter-engine fairness */
+
+		/*
+		 * throttle - original per client throttling
+		 *
+		 * Used for front-buffer rendering where there is no
+		 * external frame marker. Each client tries to only keep
+		 * 20ms of work submitted, though that measurement is
+		 * flawed...
+		 *
+		 * This is used by Xorg to try and maintain some resemblance
+		 * of input/output consistency when being fed a continuous
+		 * stream of X11 draw requests straight into scanout, where
+		 * the clients may submit the work faster than can be drawn.
+		 *
+		 * Throttling tracks requests per-file (and assumes that
+		 * all requests are in submission order across the whole file),
+		 * so we split each child to its own fd.
+		 */
+		{ "throttle",       F_THROTTLE | F_ISOLATE },
+		{ "throttle-vip",   F_THROTTLE | F_ISOLATE | F_VIP },
+		{ "throttle-solo",  F_THROTTLE | F_ISOLATE | F_SOLO },
+		{ "throttle-share", F_THROTTLE | F_ISOLATE | F_SHARE },
+		{ "throttle-rrul",  F_THROTTLE | F_ISOLATE | F_RRUL },
+
+		/*
+		 * pace - mesa "submit double buffering"
+		 *
+		 * Submit a frame, wait for previous frame to start. This
+		 * prevents each client from getting too far ahead of its
+		 * rendering, maintaining a consistent input/output latency.
+		 */
+		{ "pace",       F_PACE },
+		{ "pace-solo",  F_PACE | F_SOLO},
+		{ "pace-share", F_PACE | F_SHARE},
+		{ "pace-ping",  F_PACE | F_SHARE | F_PING},
+
+		/* sync - only submit a frame at a time */
+		{ "sync",      F_SYNC },
+		{ "sync-vip",  F_SYNC | F_VIP },
+		{ "sync-solo", F_SYNC | F_SOLO },
+
+		/* flow - synchronise execution against the clock (vblank) */
+		{ "flow",       F_PACE | F_FLOW },
+		{ "flow-share", F_PACE | F_FLOW | F_SHARE },
+		{ "flow-ping",  F_PACE | F_FLOW | F_SHARE | F_PING },
+
+		/* next - submit ahead of the clock (vblank double buffering) */
+		{ "next",       F_PACE | F_FLOW | F_NEXT },
+		{ "next-share", F_PACE | F_FLOW | F_NEXT | F_SHARE },
+		{ "next-ping",  F_PACE | F_FLOW | F_NEXT | F_SHARE | F_PING },
+
+		/* spare - underutilise by a single client timeslice */
+		{ "spare", F_PACE | F_FLOW | F_SPARE },
+
+		/* half - run at half pace (submit 16ms of work every 32ms) */
+		{ "half",  F_PACE | F_FLOW | F_HALF },
+
+		{}
+	};
+
+	igt_fixture {
+		igt_info("CS timestamp frequency: %d\n",
+			 read_timestamp_frequency(i915));
+
+		igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
+	}
+
+	for (typeof(*fair) *f = fair; f->name; f++) {
+		igt_subtest_with_dynamic_f("fair-%s", f->name)  {
+			const struct intel_execution_engine2 *e;
+
+			__for_each_physical_engine(i915, e) {
+				if (!gem_class_can_store_dword(i915, e->class))
+					continue;
+
+				igt_dynamic_f("%s", e->name)
+					fairness(i915, e, timeout, f->flags);
+			}
+		}
+	}
+}
+
+static uint32_t read_ctx_timestamp(int i915,
+				   uint32_t ctx,
+				   const struct intel_execution_engine2 *e)
+{
+	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+	const uint32_t base = gem_engine_mmio_base(i915, e->name);
+	struct drm_i915_gem_relocation_entry reloc;
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = gem_create(i915, 4096),
+		.offset = 32 << 20,
+		.relocs_ptr = to_user_pointer(&reloc),
+		.relocation_count = 1,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+		.flags = e->flags,
+		.rsvd1 = ctx,
+	};
+#define RUNTIME (base + 0x3a8)
+	uint32_t *map, *cs;
+	uint32_t ts;
+
+	igt_require(base);
+
+	cs = map = gem_mmap__device_coherent(i915, obj.handle,
+					     0, 4096, PROT_WRITE);
+
+	*cs++ = 0x24 << 23 | (1 + use_64b); /* SRM */
+	*cs++ = RUNTIME;
+	memset(&reloc, 0, sizeof(reloc));
+	reloc.target_handle = obj.handle;
+	reloc.presumed_offset = obj.offset;
+	reloc.offset = offset_in_page(cs);
+	reloc.delta = 4000;
+	*cs++ = obj.offset + 4000;
+	*cs++ = obj.offset >> 32;
+
+	*cs++ = MI_BATCH_BUFFER_END;
+
+	gem_execbuf(i915, &execbuf);
+	gem_sync(i915, obj.handle);
+	gem_close(i915, obj.handle);
+
+	ts = map[1000];
+	munmap(map, 4096);
+
+	return ts;
+}
+
+static void fairslice(int i915,
+		      const struct intel_execution_engine2 *e,
+		      unsigned long flags)
+{
+	igt_spin_t *spin = NULL;
+	uint32_t ctx[3];
+	uint32_t ts[3];
+
+	for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
+		ctx[i] = gem_context_clone_with_engines(i915, 0);
+		if (spin == NULL) {
+			spin = __igt_spin_new(i915,
+					      .ctx = ctx[i],
+					      .engine = e->flags,
+					      .flags = flags);
+		} else {
+			struct drm_i915_gem_execbuffer2 eb = {
+				.buffer_count = 1,
+				.buffers_ptr = to_user_pointer(&spin->obj[IGT_SPIN_BATCH]),
+				.flags = e->flags,
+				.rsvd1 = ctx[i],
+			};
+			gem_execbuf(i915, &eb);
+		}
+	}
+
+	sleep(2); /* over the course of many timeslices */
+
+	igt_assert(gem_bo_busy(i915, spin->handle));
+	igt_spin_end(spin);
+	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
+		ts[i] = read_ctx_timestamp(i915, ctx[i], e);
+
+	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
+		gem_context_destroy(i915, ctx[i]);
+	igt_spin_free(i915, spin);
+
+	qsort(ts, 3, sizeof(*ts), cmp_u32);
+	igt_info("%s: [%.1f, %.1f] ms\n", e->name,
+		 1e-6 * ticks_to_ns(i915, ts[0]),
+		 1e-6 * ticks_to_ns(i915, ts[2]));
+
+	igt_assert(ts[0] && ts[2] > ts[0]);
+	igt_assert(4 * ts[0] > 3 * ts[2]);
+}
+
 #define test_each_engine(T, i915, e) \
 	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
 		igt_dynamic_f("%s", e->name)
@@ -2601,6 +3415,35 @@ igt_main
 		test_each_engine("u-lateslice", fd, e)
 			lateslice(fd, e->flags, IGT_SPIN_USERPTR);
 
+		igt_subtest_group {
+			igt_fixture {
+				igt_require(gem_scheduler_has_semaphores(fd));
+				igt_require(gem_scheduler_has_preemption(fd));
+				igt_require(intel_gen(intel_get_drm_devid(fd)) >= 8);
+			}
+
+			test_each_engine("fairslice", fd, e)
+				fairslice(fd, e, 0);
+
+			test_each_engine("u-fairslice", fd, e)
+				fairslice(fd, e, IGT_SPIN_USERPTR);
+
+			igt_subtest("fairslice-all")  {
+				__for_each_physical_engine(fd, e) {
+					igt_fork(child, 1)
+						fairslice(fd, e, 0);
+				}
+				igt_waitchildren();
+			}
+			igt_subtest("u-fairslice-all")  {
+				__for_each_physical_engine(fd, e) {
+					igt_fork(child, 1)
+						fairslice(fd, e, IGT_SPIN_USERPTR);
+				}
+				igt_waitchildren();
+			}
+		}
+
 		test_each_engine("submit-early-slice", fd, e)
 			submit_slice(fd, e, EARLY_SUBMIT);
 		test_each_engine("u-submit-early-slice", fd, e)
@@ -2644,6 +3487,10 @@ igt_main
 		test_each_engine_store("promotion", fd, e)
 			promotion(fd, e->flags);
 
+		igt_subtest_group {
+			test_fairness(fd, 2);
+		}
+
 		igt_subtest_group {
 			igt_fixture {
 				igt_require(gem_scheduler_has_preemption(fd));
-- 
2.28.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [Intel-gfx] [PATCH i-g-t 03/10] i915/gen9_exec_parse: Check oversized batch with length==0
  2020-10-14 10:40   ` [igt-dev] " Chris Wilson
@ 2020-10-14 10:43     ` Matthew Auld
  -1 siblings, 0 replies; 27+ messages in thread
From: Matthew Auld @ 2020-10-14 10:43 UTC (permalink / raw)
  To: Chris Wilson, igt-dev; +Cc: intel-gfx

On 14/10/2020 11:40, Chris Wilson wrote:
> Include the implicit eb.batch_len=0 into the mix of various offsets and
> lengths.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 03/10] i915/gen9_exec_parse: Check oversized batch with length==0
@ 2020-10-14 10:43     ` Matthew Auld
  0 siblings, 0 replies; 27+ messages in thread
From: Matthew Auld @ 2020-10-14 10:43 UTC (permalink / raw)
  To: Chris Wilson, igt-dev; +Cc: intel-gfx

On 14/10/2020 11:40, Chris Wilson wrote:
> Include the implicit eb.batch_len=0 into the mix of various offsets and
> lengths.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Intel-gfx] [igt-dev] [PATCH i-g-t 03/10] i915/gen9_exec_parse: Check oversized batch with length==0
  2020-10-14 10:43     ` [igt-dev] " Matthew Auld
@ 2020-10-14 10:49       ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:49 UTC (permalink / raw)
  To: Matthew Auld, igt-dev; +Cc: intel-gfx

Quoting Matthew Auld (2020-10-14 11:43:48)
> On 14/10/2020 11:40, Chris Wilson wrote:
> > Include the implicit eb.batch_len=0 into the mix of various offsets and
> > lengths.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>

Note this is in addition to your gem_exec_params test :)
-Chris
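
For reference, the implicit case is an execbuf that leaves batch_len at
zero, which i915 then treats as "from batch_start_offset to the end of
the object". With placeholder fd/handle values, the shape is simply:

	struct drm_i915_gem_exec_object2 obj = {
		.handle = handle, /* the oversized batch object */
	};
	struct drm_i915_gem_execbuffer2 eb = {
		.buffers_ptr = to_user_pointer(&obj),
		.buffer_count = 1,
		.batch_len = 0, /* implicit: length taken from the object */
	};

	gem_execbuf(fd, &eb);
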
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 03/10] i915/gen9_exec_parse: Check oversized batch with length==0
@ 2020-10-14 10:49       ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2020-10-14 10:49 UTC (permalink / raw)
  To: Matthew Auld, igt-dev; +Cc: intel-gfx

Quoting Matthew Auld (2020-10-14 11:43:48)
> On 14/10/2020 11:40, Chris Wilson wrote:
> > Include the implicit eb.batch_len=0 into the mix of various offsets and
> > lengths.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>

Note this is in addition to your gem_exec_params test :)
-Chris
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [igt-dev] ✓ Fi.CI.BAT: success for series starting with [i-g-t,01/10] i915/gem_userptr_blits: Tighten has_userptr()
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
                   ` (9 preceding siblings ...)
  (?)
@ 2020-10-14 11:18 ` Patchwork
  -1 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2020-10-14 11:18 UTC (permalink / raw)
  To: Chris Wilson; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 6157 bytes --]

== Series Details ==

Series: series starting with [i-g-t,01/10] i915/gem_userptr_blits: Tighten has_userptr()
URL   : https://patchwork.freedesktop.org/series/82670/
State : success

== Summary ==

CI Bug Log - changes from IGT_5815 -> IGTPW_5067
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/index.html

Known issues
------------

  Here are the changes found in IGTPW_5067 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@debugfs_test@read_all_entries:
    - fi-bsw-nick:        [PASS][1] -> [INCOMPLETE][2] ([i915#1250] / [i915#1436])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-bsw-nick/igt@debugfs_test@read_all_entries.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-bsw-nick/igt@debugfs_test@read_all_entries.html

  * igt@gem_flink_basic@double-flink:
    - fi-tgl-y:           [PASS][3] -> [DMESG-WARN][4] ([i915#402])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-tgl-y/igt@gem_flink_basic@double-flink.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-tgl-y/igt@gem_flink_basic@double-flink.html

  * igt@kms_cursor_legacy@basic-flip-after-cursor-atomic:
    - fi-icl-u2:          [PASS][5] -> [DMESG-WARN][6] ([i915#1982])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-icl-u2/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-icl-u2/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html

  * igt@kms_flip@basic-flip-vs-wf_vblank@d-edp1:
    - fi-tgl-y:           [PASS][7] -> [DMESG-WARN][8] ([i915#1982])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-tgl-y/igt@kms_flip@basic-flip-vs-wf_vblank@d-edp1.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-tgl-y/igt@kms_flip@basic-flip-vs-wf_vblank@d-edp1.html

  
#### Possible fixes ####

  * igt@gem_flink_basic@bad-flink:
    - fi-tgl-y:           [DMESG-WARN][9] ([i915#402]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-tgl-y/igt@gem_flink_basic@bad-flink.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-tgl-y/igt@gem_flink_basic@bad-flink.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [DMESG-WARN][11] ([i915#165]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  * igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
    - fi-icl-u2:          [DMESG-WARN][13] ([i915#1982]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/fi-icl-u2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/fi-icl-u2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1250]: https://gitlab.freedesktop.org/drm/intel/issues/1250
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#165]: https://gitlab.freedesktop.org/drm/intel/issues/165
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2417]: https://gitlab.freedesktop.org/drm/intel/issues/2417
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [k.org#205379]: https://bugzilla.kernel.org/show_bug.cgi?id=205379


Participating hosts (46 -> 39)
------------------------------

  Missing    (7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-bsw-kefka fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_5815 -> IGTPW_5067

  CI-20190529: 20190529
  CI_DRM_9138: 5e4234f97efbaa30f0beb243dcf98fe0a0bb0945 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_5067: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/index.html
  IGT_5815: 0c3b29498a624ad42033a219d031cb9dd475405b @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools



== Testlist changes ==

+igt@gem_exec_balancer@u-bonded-cork
+igt@gem_exec_balancer@u-bonded-early
+igt@gem_exec_balancer@u-bonded-imm
+igt@gem_exec_balancer@u-bonded-semaphore
+igt@gem_exec_reloc@basic-spin-user
+igt@gem_exec_schedule@fairslice
+igt@gem_exec_schedule@fairslice-all
+igt@gem_exec_schedule@fair-flow
+igt@gem_exec_schedule@fair-flow-ping
+igt@gem_exec_schedule@fair-flow-share
+igt@gem_exec_schedule@fair-half
+igt@gem_exec_schedule@fair-next
+igt@gem_exec_schedule@fair-next-ping
+igt@gem_exec_schedule@fair-next-share
+igt@gem_exec_schedule@fair-none
+igt@gem_exec_schedule@fair-none-ping
+igt@gem_exec_schedule@fair-none-rrul
+igt@gem_exec_schedule@fair-none-share
+igt@gem_exec_schedule@fair-none-solo
+igt@gem_exec_schedule@fair-none-vip
+igt@gem_exec_schedule@fair-pace
+igt@gem_exec_schedule@fair-pace-ping
+igt@gem_exec_schedule@fair-pace-share
+igt@gem_exec_schedule@fair-pace-solo
+igt@gem_exec_schedule@fair-spare
+igt@gem_exec_schedule@fair-sync
+igt@gem_exec_schedule@fair-sync-solo
+igt@gem_exec_schedule@fair-sync-vip
+igt@gem_exec_schedule@fair-throttle
+igt@gem_exec_schedule@fair-throttle-rrul
+igt@gem_exec_schedule@fair-throttle-share
+igt@gem_exec_schedule@fair-throttle-solo
+igt@gem_exec_schedule@fair-throttle-vip
+igt@gem_exec_schedule@preempt-user
+igt@gem_exec_schedule@u-fairslice
+igt@gem_exec_schedule@u-fairslice-all
+igt@gem_exec_schedule@u-independent
+igt@gem_exec_schedule@u-lateslice
+igt@gem_exec_schedule@u-semaphore-codependency
+igt@gem_exec_schedule@u-semaphore-noskip
+igt@gem_exec_schedule@u-semaphore-resolve
+igt@gem_exec_schedule@u-semaphore-user
+igt@gem_exec_schedule@u-submit-early-slice
+igt@gem_exec_schedule@u-submit-golden-slice
+igt@gem_exec_schedule@u-submit-late-slice
+igt@gem_userptr_blits@exec-isolation
+igt@gem_userptr_blits@unmap-isolation

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/index.html

[-- Attachment #1.2: Type: text/html, Size: 7281 bytes --]

[-- Attachment #2: Type: text/plain, Size: 154 bytes --]

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [igt-dev] ✓ Fi.CI.IGT: success for series starting with [i-g-t,01/10] i915/gem_userptr_blits: Tighten has_userptr()
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
                   ` (10 preceding siblings ...)
  (?)
@ 2020-10-14 17:57 ` Patchwork
  -1 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2020-10-14 17:57 UTC (permalink / raw)
  To: Chris Wilson; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 30298 bytes --]

== Series Details ==

Series: series starting with [i-g-t,01/10] i915/gem_userptr_blits: Tighten has_userptr()
URL   : https://patchwork.freedesktop.org/series/82670/
State : success

== Summary ==

CI Bug Log - changes from IGT_5815_full -> IGTPW_5067_full
====================================================

Summary
-------

  **WARNING**

  Minor unknown changes coming with IGTPW_5067_full need to be verified
  manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_5067_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_5067_full:

### IGT changes ###

#### Possible regressions ####

  * {igt@gem_exec_schedule@fair-next-ping@vecs0} (NEW):
    - shard-iclb:         NOTRUN -> [SKIP][1] +15 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-iclb4/igt@gem_exec_schedule@fair-next-ping@vecs0.html

  * {igt@gem_exec_schedule@fair-none-ping@rcs0} (NEW):
    - shard-tglb:         NOTRUN -> [SKIP][2] +19 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-tglb7/igt@gem_exec_schedule@fair-none-ping@rcs0.html

  * {igt@gem_exec_schedule@fair-none-solo@rcs0} (NEW):
    - shard-kbl:          NOTRUN -> [FAIL][3] +14 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-kbl6/igt@gem_exec_schedule@fair-none-solo@rcs0.html

  * {igt@gem_exec_schedule@fair-pace-share@vcs0} (NEW):
    - shard-glk:          NOTRUN -> [FAIL][4] +13 similar issues
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-glk3/igt@gem_exec_schedule@fair-pace-share@vcs0.html

  * {igt@gem_exec_schedule@fair-throttle-solo@vcs1} (NEW):
    - shard-tglb:         NOTRUN -> [FAIL][5] +36 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-tglb1/igt@gem_exec_schedule@fair-throttle-solo@vcs1.html

  * {igt@gem_exec_schedule@fair-throttle@rcs0} (NEW):
    - shard-iclb:         NOTRUN -> [FAIL][6] +29 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-iclb2/igt@gem_exec_schedule@fair-throttle@rcs0.html

  
#### Warnings ####

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-blt:
    - shard-tglb:         [DMESG-FAIL][7] ([i915#1982]) -> [FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-tglb2/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-blt.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-tglb1/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-blt.html

  
New tests
---------

  New tests have been introduced between IGT_5815_full and IGTPW_5067_full:

### New IGT tests (222) ###

  * igt@gem_exec_balancer@u-bonded-cork:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [0.0, 4.08] s

  * igt@gem_exec_balancer@u-bonded-early:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 4.07] s

  * igt@gem_exec_balancer@u-bonded-imm:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 4.10] s

  * igt@gem_exec_balancer@u-bonded-semaphore:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 4.30] s

  * igt@gem_exec_reloc@basic-spin-user:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_reloc@basic-spin-user@bcs0:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@gem_exec_reloc@basic-spin-user@rcs0:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@gem_exec_reloc@basic-spin-user@vcs0:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@gem_exec_reloc@basic-spin-user@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [0.00] s

  * igt@gem_exec_reloc@basic-spin-user@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@gem_exec_schedule@fair-flow:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-flow-ping:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-flow-ping@bcs0:
    - Statuses : 5 skip(s)
    - Exec time: [0.0, 4.21] s

  * igt@gem_exec_schedule@fair-flow-ping@rcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.12, 4.33] s

  * igt@gem_exec_schedule@fair-flow-ping@vcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.14, 4.21] s

  * igt@gem_exec_schedule@fair-flow-ping@vcs1:
    - Statuses : 3 skip(s)
    - Exec time: [2.13, 4.21] s

  * igt@gem_exec_schedule@fair-flow-ping@vecs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.15, 4.24] s

  * igt@gem_exec_schedule@fair-flow-share:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-flow-share@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.20] s

  * igt@gem_exec_schedule@fair-flow-share@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.89, 13.57] s

  * igt@gem_exec_schedule@fair-flow-share@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.98, 13.19] s

  * igt@gem_exec_schedule@fair-flow-share@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [12.99, 13.66] s

  * igt@gem_exec_schedule@fair-flow-share@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.99, 13.68] s

  * igt@gem_exec_schedule@fair-flow@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.32] s

  * igt@gem_exec_schedule@fair-flow@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.90, 13.62] s

  * igt@gem_exec_schedule@fair-flow@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.02, 13.86] s

  * igt@gem_exec_schedule@fair-flow@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [13.01, 13.91] s

  * igt@gem_exec_schedule@fair-flow@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.00, 13.95] s

  * igt@gem_exec_schedule@fair-half:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-half@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.62] s

  * igt@gem_exec_schedule@fair-half@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [11.58, 15.72] s

  * igt@gem_exec_schedule@fair-half@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.06, 15.68] s

  * igt@gem_exec_schedule@fair-half@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [9.05, 13.38] s

  * igt@gem_exec_schedule@fair-half@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.11, 15.85] s

  * igt@gem_exec_schedule@fair-next:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-next-ping:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-next-ping@bcs0:
    - Statuses : 5 skip(s)
    - Exec time: [0.0, 4.21] s

  * igt@gem_exec_schedule@fair-next-ping@rcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.13, 4.38] s

  * igt@gem_exec_schedule@fair-next-ping@vcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.16, 4.36] s

  * igt@gem_exec_schedule@fair-next-ping@vcs1:
    - Statuses : 3 skip(s)
    - Exec time: [2.15, 4.22] s

  * igt@gem_exec_schedule@fair-next-ping@vecs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.13, 4.33] s

  * igt@gem_exec_schedule@fair-next-share:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-next-share@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.20] s

  * igt@gem_exec_schedule@fair-next-share@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.90, 13.58] s

  * igt@gem_exec_schedule@fair-next-share@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.00, 13.61] s

  * igt@gem_exec_schedule@fair-next-share@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [12.98, 13.62] s

  * igt@gem_exec_schedule@fair-next-share@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.01, 13.62] s

  * igt@gem_exec_schedule@fair-next@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.35] s

  * igt@gem_exec_schedule@fair-next@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.93, 13.63] s

  * igt@gem_exec_schedule@fair-next@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.00, 13.81] s

  * igt@gem_exec_schedule@fair-next@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [13.02, 13.98] s

  * igt@gem_exec_schedule@fair-next@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.01, 13.79] s

  * igt@gem_exec_schedule@fair-none:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-none-ping:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-none-ping@bcs0:
    - Statuses : 5 skip(s)
    - Exec time: [0.0, 7.61] s

  * igt@gem_exec_schedule@fair-none-ping@rcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.38, 7.50] s

  * igt@gem_exec_schedule@fair-none-ping@vcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.55, 7.63] s

  * igt@gem_exec_schedule@fair-none-ping@vcs1:
    - Statuses : 2 skip(s)
    - Exec time: [2.60, 7.44] s

  * igt@gem_exec_schedule@fair-none-ping@vecs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.52, 7.65] s

  * igt@gem_exec_schedule@fair-none-rrul:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-none-rrul@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 12.17] s

  * igt@gem_exec_schedule@fair-none-rrul@rcs0:
    - Statuses : 3 fail(s) 2 pass(s)
    - Exec time: [9.27, 11.90] s

  * igt@gem_exec_schedule@fair-none-rrul@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [10.01, 12.88] s

  * igt@gem_exec_schedule@fair-none-rrul@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [11.61, 12.88] s

  * igt@gem_exec_schedule@fair-none-rrul@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [10.11, 12.83] s

  * igt@gem_exec_schedule@fair-none-share:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-none-share@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 12.42] s

  * igt@gem_exec_schedule@fair-none-share@rcs0:
    - Statuses : 3 fail(s) 2 pass(s)
    - Exec time: [9.68, 12.40] s

  * igt@gem_exec_schedule@fair-none-share@vcs0:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [7.46, 12.14] s

  * igt@gem_exec_schedule@fair-none-share@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [9.75, 12.10] s

  * igt@gem_exec_schedule@fair-none-share@vecs0:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [7.43, 12.11] s

  * igt@gem_exec_schedule@fair-none-solo:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-none-solo@bcs0:
    - Statuses : 2 fail(s) 3 skip(s)
    - Exec time: [0.0, 14.20] s

  * igt@gem_exec_schedule@fair-none-solo@rcs0:
    - Statuses : 4 fail(s) 1 pass(s)
    - Exec time: [13.28, 14.36] s

  * igt@gem_exec_schedule@fair-none-solo@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [15.05, 18.04] s

  * igt@gem_exec_schedule@fair-none-solo@vcs1:
    - Statuses : 1 fail(s) 1 pass(s)
    - Exec time: [13.63, 14.95] s

  * igt@gem_exec_schedule@fair-none-solo@vecs0:
    - Statuses : 1 fail(s) 4 pass(s)
    - Exec time: [14.25, 16.86] s

  * igt@gem_exec_schedule@fair-none-vip:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-none-vip@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.21] s

  * igt@gem_exec_schedule@fair-none-vip@rcs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [10.13, 14.28] s

  * igt@gem_exec_schedule@fair-none-vip@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [11.10, 13.90] s

  * igt@gem_exec_schedule@fair-none-vip@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [10.60, 13.17] s

  * igt@gem_exec_schedule@fair-none-vip@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [10.95, 13.91] s

  * igt@gem_exec_schedule@fair-none@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 12.45] s

  * igt@gem_exec_schedule@fair-none@rcs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [9.77, 13.79] s

  * igt@gem_exec_schedule@fair-none@vcs0:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [7.54, 12.17] s

  * igt@gem_exec_schedule@fair-none@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [9.80, 12.16] s

  * igt@gem_exec_schedule@fair-none@vecs0:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [7.58, 12.17] s

  * igt@gem_exec_schedule@fair-pace:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-pace-ping:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-pace-ping@bcs0:
    - Statuses : 5 skip(s)
    - Exec time: [0.0, 4.21] s

  * igt@gem_exec_schedule@fair-pace-ping@rcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.13, 4.38] s

  * igt@gem_exec_schedule@fair-pace-ping@vcs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.12, 4.21] s

  * igt@gem_exec_schedule@fair-pace-ping@vcs1:
    - Statuses : 3 skip(s)
    - Exec time: [4.20, 4.24] s

  * igt@gem_exec_schedule@fair-pace-ping@vecs0:
    - Statuses : 5 skip(s)
    - Exec time: [2.12, 4.24] s

  * igt@gem_exec_schedule@fair-pace-share:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-pace-share@bcs0:
    - Statuses : 2 fail(s) 2 skip(s)
    - Exec time: [0.0, 10.58] s

  * igt@gem_exec_schedule@fair-pace-share@rcs0:
    - Statuses : 3 fail(s) 1 pass(s)
    - Exec time: [8.44, 10.61] s

  * igt@gem_exec_schedule@fair-pace-share@vcs0:
    - Statuses : 3 fail(s) 1 pass(s)
    - Exec time: [8.41, 10.60] s

  * igt@gem_exec_schedule@fair-pace-share@vcs1:
    - Statuses : 1 fail(s)
    - Exec time: [10.60] s

  * igt@gem_exec_schedule@fair-pace-share@vecs0:
    - Statuses : 3 fail(s) 1 pass(s)
    - Exec time: [8.40, 8.80] s

  * igt@gem_exec_schedule@fair-pace-solo:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_schedule@fair-pace-solo@bcs0:
    - Statuses : 2 fail(s) 3 skip(s)
    - Exec time: [0.0, 10.54] s

  * igt@gem_exec_schedule@fair-pace-solo@rcs0:
    - Statuses : 5 fail(s)
    - Exec time: [8.50, 10.54] s

  * igt@gem_exec_schedule@fair-pace-solo@vcs0:
    - Statuses : 5 fail(s)
    - Exec time: [8.38, 10.54] s

  * igt@gem_exec_schedule@fair-pace-solo@vcs1:
    - Statuses : 2 fail(s)
    - Exec time: [8.36, 8.55] s

  * igt@gem_exec_schedule@fair-pace-solo@vecs0:
    - Statuses : 4 fail(s) 1 pass(s)
    - Exec time: [8.59, 10.57] s

  * igt@gem_exec_schedule@fair-pace@bcs0:
    - Statuses : 2 fail(s) 3 skip(s)
    - Exec time: [0.0, 10.60] s

  * igt@gem_exec_schedule@fair-pace@rcs0:
    - Statuses : 4 fail(s) 1 pass(s)
    - Exec time: [8.54, 10.61] s

  * igt@gem_exec_schedule@fair-pace@vcs0:
    - Statuses : 3 fail(s) 2 pass(s)
    - Exec time: [8.41, 11.92] s

  * igt@gem_exec_schedule@fair-pace@vcs1:
    - Statuses : 2 fail(s)
    - Exec time: [10.60, 10.87] s

  * igt@gem_exec_schedule@fair-pace@vecs0:
    - Statuses : 3 fail(s) 2 pass(s)
    - Exec time: [8.68, 11.88] s

  * igt@gem_exec_schedule@fair-spare:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-spare@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.26] s

  * igt@gem_exec_schedule@fair-spare@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.05, 13.73] s

  * igt@gem_exec_schedule@fair-spare@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.02, 13.97] s

  * igt@gem_exec_schedule@fair-spare@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [13.05, 14.04] s

  * igt@gem_exec_schedule@fair-spare@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.00, 14.36] s

  * igt@gem_exec_schedule@fair-sync:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-sync-solo:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-sync-solo@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 12.79] s

  * igt@gem_exec_schedule@fair-sync-solo@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [10.53, 16.17] s

  * igt@gem_exec_schedule@fair-sync-solo@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [10.53, 13.43] s

  * igt@gem_exec_schedule@fair-sync-solo@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [12.79, 13.40] s

  * igt@gem_exec_schedule@fair-sync-solo@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [10.53, 13.41] s

  * igt@gem_exec_schedule@fair-sync-vip:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-sync-vip@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.01] s

  * igt@gem_exec_schedule@fair-sync-vip@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.80, 13.66] s

  * igt@gem_exec_schedule@fair-sync-vip@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.80, 13.79] s

  * igt@gem_exec_schedule@fair-sync-vip@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [12.93, 13.78] s

  * igt@gem_exec_schedule@fair-sync-vip@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.74, 13.78] s

  * igt@gem_exec_schedule@fair-sync@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 13.02] s

  * igt@gem_exec_schedule@fair-sync@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.69, 13.47] s

  * igt@gem_exec_schedule@fair-sync@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.51, 12.92] s

  * igt@gem_exec_schedule@fair-sync@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [12.84, 13.59] s

  * igt@gem_exec_schedule@fair-sync@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.54, 12.93] s

  * igt@gem_exec_schedule@fair-throttle:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-throttle-rrul:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-throttle-rrul@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 11.41] s

  * igt@gem_exec_schedule@fair-throttle-rrul@rcs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [8.73, 12.29] s

  * igt@gem_exec_schedule@fair-throttle-rrul@vcs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [8.84, 12.51] s

  * igt@gem_exec_schedule@fair-throttle-rrul@vcs1:
    - Statuses : 1 fail(s) 1 pass(s)
    - Exec time: [11.19, 11.56] s

  * igt@gem_exec_schedule@fair-throttle-rrul@vecs0:
    - Statuses : 4 fail(s) 1 pass(s)
    - Exec time: [8.77, 11.18] s

  * igt@gem_exec_schedule@fair-throttle-share:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-throttle-share@bcs0:
    - Statuses : 2 pass(s) 3 skip(s)
    - Exec time: [0.0, 11.46] s

  * igt@gem_exec_schedule@fair-throttle-share@rcs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [8.80, 12.15] s

  * igt@gem_exec_schedule@fair-throttle-share@vcs0:
    - Statuses : 1 fail(s) 4 pass(s)
    - Exec time: [8.85, 11.96] s

  * igt@gem_exec_schedule@fair-throttle-share@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [11.31, 11.39] s

  * igt@gem_exec_schedule@fair-throttle-share@vecs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [8.91, 11.97] s

  * igt@gem_exec_schedule@fair-throttle-solo:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-throttle-solo@bcs0:
    - Statuses : 2 fail(s) 2 skip(s)
    - Exec time: [0.0, 12.95] s

  * igt@gem_exec_schedule@fair-throttle-solo@rcs0:
    - Statuses : 2 fail(s) 2 pass(s)
    - Exec time: [11.52, 12.60] s

  * igt@gem_exec_schedule@fair-throttle-solo@vcs0:
    - Statuses : 3 fail(s) 1 pass(s)
    - Exec time: [11.58, 12.79] s

  * igt@gem_exec_schedule@fair-throttle-solo@vcs1:
    - Statuses : 2 fail(s)
    - Exec time: [11.48, 12.88] s

  * igt@gem_exec_schedule@fair-throttle-solo@vecs0:
    - Statuses : 3 fail(s) 1 pass(s)
    - Exec time: [11.64, 12.85] s

  * igt@gem_exec_schedule@fair-throttle-vip:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fair-throttle-vip@bcs0:
    - Statuses : 2 fail(s) 3 skip(s)
    - Exec time: [0.0, 9.96] s

  * igt@gem_exec_schedule@fair-throttle-vip@rcs0:
    - Statuses : 2 fail(s) 3 pass(s)
    - Exec time: [9.08, 11.68] s

  * igt@gem_exec_schedule@fair-throttle-vip@vcs0:
    - Statuses : 1 fail(s) 4 pass(s)
    - Exec time: [9.14, 12.38] s

  * igt@gem_exec_schedule@fair-throttle-vip@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [9.32, 11.61] s

  * igt@gem_exec_schedule@fair-throttle-vip@vecs0:
    - Statuses : 1 fail(s) 4 pass(s)
    - Exec time: [9.14, 12.43] s

  * igt@gem_exec_schedule@fair-throttle@bcs0:
    - Statuses : 2 fail(s) 3 skip(s)
    - Exec time: [0.0, 9.05] s

  * igt@gem_exec_schedule@fair-throttle@rcs0:
    - Statuses : 3 fail(s) 2 pass(s)
    - Exec time: [8.84, 12.35] s

  * igt@gem_exec_schedule@fair-throttle@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [8.86, 12.70] s

  * igt@gem_exec_schedule@fair-throttle@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [11.29, 11.67] s

  * igt@gem_exec_schedule@fair-throttle@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [9.0, 12.54] s

  * igt@gem_exec_schedule@fairslice:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@fairslice-all:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 2.10] s

  * igt@gem_exec_schedule@fairslice@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.03] s

  * igt@gem_exec_schedule@fairslice@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.03] s

  * igt@gem_exec_schedule@fairslice@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.02] s

  * igt@gem_exec_schedule@fairslice@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [2.01] s

  * igt@gem_exec_schedule@fairslice@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.02] s

  * igt@gem_exec_schedule@preempt-user:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_schedule@preempt-user@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.10] s

  * igt@gem_exec_schedule@preempt-user@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.08] s

  * igt@gem_exec_schedule@preempt-user@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.07] s

  * igt@gem_exec_schedule@preempt-user@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [0.02] s

  * igt@gem_exec_schedule@preempt-user@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.07] s

  * igt@gem_exec_schedule@u-fairslice:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@u-fairslice-all:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 2.09] s

  * igt@gem_exec_schedule@u-fairslice@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.03] s

  * igt@gem_exec_schedule@u-fairslice@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.03] s

  * igt@gem_exec_schedule@u-fairslice@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.02] s

  * igt@gem_exec_schedule@u-fairslice@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [2.01, 2.03] s

  * igt@gem_exec_schedule@u-fairslice@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [2.01, 2.02] s

  * igt@gem_exec_schedule@u-independent:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_schedule@u-independent@bcs0:
    - Statuses : 6 pass(s)
    - Exec time: [0.02, 0.21] s

  * igt@gem_exec_schedule@u-independent@rcs0:
    - Statuses : 6 pass(s)
    - Exec time: [0.04, 1.61] s

  * igt@gem_exec_schedule@u-independent@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.06, 0.13] s

  * igt@gem_exec_schedule@u-independent@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [0.06, 0.11] s

  * igt@gem_exec_schedule@u-independent@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.07, 0.16] s

  * igt@gem_exec_schedule@u-lateslice:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@u-lateslice@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.03] s

  * igt@gem_exec_schedule@u-lateslice@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.01, 0.03] s

  * igt@gem_exec_schedule@u-lateslice@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.03] s

  * igt@gem_exec_schedule@u-lateslice@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [0.02] s

  * igt@gem_exec_schedule@u-lateslice@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.01, 0.03] s

  * igt@gem_exec_schedule@u-semaphore-codependency:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.03] s

  * igt@gem_exec_schedule@u-semaphore-noskip:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.44] s

  * igt@gem_exec_schedule@u-semaphore-resolve:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.05] s

  * igt@gem_exec_schedule@u-semaphore-user:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.02] s

  * igt@gem_exec_schedule@u-submit-early-slice:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@u-submit-early-slice@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.03, 0.07] s

  * igt@gem_exec_schedule@u-submit-early-slice@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.03, 0.08] s

  * igt@gem_exec_schedule@u-submit-early-slice@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.03, 0.06] s

  * igt@gem_exec_schedule@u-submit-early-slice@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [0.03] s

  * igt@gem_exec_schedule@u-submit-early-slice@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.03, 0.07] s

  * igt@gem_exec_schedule@u-submit-golden-slice:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@u-submit-golden-slice@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.07] s

  * igt@gem_exec_schedule@u-submit-golden-slice@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.03, 0.08] s

  * igt@gem_exec_schedule@u-submit-golden-slice@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.06] s

  * igt@gem_exec_schedule@u-submit-golden-slice@vcs1:
    - Statuses : 3 pass(s)
    - Exec time: [0.02, 0.03] s

  * igt@gem_exec_schedule@u-submit-golden-slice@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.07] s

  * igt@gem_exec_schedule@u-submit-late-slice:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@gem_exec_schedule@u-submit-late-slice@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.06] s

  * igt@gem_exec_schedule@u-submit-late-slice@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.06] s

  * igt@gem_exec_schedule@u-submit-late-slice@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.06] s

  * igt@gem_exec_schedule@u-submit-late-slice@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [0.02, 0.03] s

  * igt@gem_exec_schedule@u-submit-late-slice@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [0.02, 0.06] s

  * igt@gem_userptr_blits@exec-isolation:
    - Statuses : 6 pass(s)
    - Exec time: [2.15, 2.16] s

  * igt@gem_userptr_blits@unmap-isolation:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.04] s

  

Known issues
------------

  Here are the changes found in IGTPW_5067_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_reloc@basic-many-active@bcs0:
    - shard-glk:          [PASS][9] -> [FAIL][10] ([i915#2389])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-glk3/igt@gem_exec_reloc@basic-many-active@bcs0.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-glk9/igt@gem_exec_reloc@basic-many-active@bcs0.html

  * igt@gem_exec_suspend@basic-s3:
    - shard-kbl:          [PASS][11] -> [INCOMPLETE][12] ([i915#155])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-kbl2/igt@gem_exec_suspend@basic-s3.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-kbl4/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_exec_whisper@basic-forked-all:
    - shard-glk:          [PASS][13] -> [DMESG-WARN][14] ([i915#118] / [i915#95]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-glk9/igt@gem_exec_whisper@basic-forked-all.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-glk9/igt@gem_exec_whisper@basic-forked-all.html

  * igt@i915_module_load@reload:
    - shard-iclb:         [PASS][15] -> [DMESG-WARN][16] ([i915#1982])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-iclb6/igt@i915_module_load@reload.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-iclb1/igt@i915_module_load@reload.html

  * igt@i915_selftest@perf@request:
    - shard-tglb:         [PASS][17] -> [INCOMPLETE][18] ([i915#1823])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-tglb8/igt@i915_selftest@perf@request.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-tglb7/igt@i915_selftest@perf@request.html

  * igt@i915_suspend@fence-restore-tiled2untiled:
    - shard-kbl:          [PASS][19] -> [INCOMPLETE][20] ([i915#155] / [i915#794])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-kbl6/igt@i915_suspend@fence-restore-tiled2untiled.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-kbl4/igt@i915_suspend@fence-restore-tiled2untiled.html

  * igt@kms_cursor_crc@pipe-c-cursor-alpha-opaque:
    - shard-apl:          [PASS][21] -> [FAIL][22] ([i915#1635] / [i915#54])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-apl2/igt@kms_cursor_crc@pipe-c-cursor-alpha-opaque.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-apl2/igt@kms_cursor_crc@pipe-c-cursor-alpha-opaque.html
    - shard-kbl:          [PASS][23] -> [FAIL][24] ([i915#54])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-kbl6/igt@kms_cursor_crc@pipe-c-cursor-alpha-opaque.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-kbl7/igt@kms_cursor_crc@pipe-c-cursor-alpha-opaque.html

  * igt@kms_cursor_legacy@pipe-b-torture-move:
    - shard-iclb:         [PASS][25] -> [DMESG-WARN][26] ([i915#128])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5815/shard-iclb7/igt@kms_cursor_legacy@pipe-b-torture-move.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/shard-iclb6/igt@kms_cursor_legacy@pipe-b-torture-move.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-indfb-msflip-blt:
    - shard-tglb:         [PASS][27] -> [DMESG-WARN][28] ([i915#1982]) +3 similar issues

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5067/index.html

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Intel-gfx] [PATCH i-g-t 08/10] lib: Use unsigned gen for forward compatible tests
  2020-10-14 10:40   ` [igt-dev] " Chris Wilson
  (?)
@ 2020-10-15  4:58   ` Zbigniew Kempczyński
  -1 siblings, 0 replies; 27+ messages in thread
From: Zbigniew Kempczyński @ 2020-10-15  4:58 UTC (permalink / raw)
  To: Chris Wilson; +Cc: igt-dev, intel-gfx

On Wed, Oct 14, 2020 at 11:40:36AM +0100, Chris Wilson wrote:
> Unknown, and therefore future, gens are marked as -1, which we want to
> treat as -1u so that they always pass >= gen checks.

We've discussed this some time ago. I was previously leaning towards 'no',
but you made me realize we can avoid a lot of failures on likely-working
tests that would otherwise fail the gen check when the real reason is a
missing platform entry (and then focus on the tests that genuinely fail).
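
For illustration only, a minimal standalone sketch, not from the patch
itself, assuming just the convention described above that an unknown or
future platform is marked as gen == -1:

#include <stdio.h>

int main(void)
{
	/* Unknown/future platform, per the -1 convention described above. */
	int signed_gen = -1;
	unsigned int unsigned_gen = -1;	/* read back as -1u, i.e. UINT_MAX */

	/* A typical forward-compatible check, e.g. "gen >= 9". */
	printf("signed:   %s\n", signed_gen >= 9 ? "gen9+ path" : "legacy path");
	printf("unsigned: %s\n", unsigned_gen >= 9 ? "gen9+ path" : "legacy path");

	/*
	 * Prints "legacy path" for the signed field but "gen9+ path" for the
	 * unsigned one, which is why the patch switches gen to unsigned.
	 */
	return 0;
}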

So, I see no problem with that:

Acked-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>

> 
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2298
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> ---
>  lib/intel_batchbuffer.c | 10 +++++-----
>  lib/intel_batchbuffer.h | 10 ++++++----
>  2 files changed, 11 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 60dbfe261..fc73495c0 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -414,7 +414,7 @@ intel_blt_copy(struct intel_batchbuffer *batch,
>  	       drm_intel_bo *dst_bo, int dst_x1, int dst_y1, int dst_pitch,
>  	       int width, int height, int bpp)
>  {
> -	const int gen = batch->gen;
> +	const unsigned int gen = batch->gen;
>  	uint32_t src_tiling, dst_tiling, swizzle;
>  	uint32_t cmd_bits = 0;
>  	uint32_t br13_bits;
> @@ -553,7 +553,7 @@ unsigned igt_buf_height(const struct igt_buf *buf)
>   * Returns:
>   * The width of the ccs buffer data.
>   */
> -unsigned int igt_buf_intel_ccs_width(int gen, const struct igt_buf *buf)
> +unsigned int igt_buf_intel_ccs_width(unsigned int gen, const struct igt_buf *buf)
>  {
>  	/*
>  	 * GEN12+: The CCS unit size is 64 bytes mapping 4 main surface
> @@ -576,7 +576,7 @@ unsigned int igt_buf_intel_ccs_width(int gen, const struct igt_buf *buf)
>   * Returns:
>   * The height of the ccs buffer data.
>   */
> -unsigned int igt_buf_intel_ccs_height(int gen, const struct igt_buf *buf)
> +unsigned int igt_buf_intel_ccs_height(unsigned int gen, const struct igt_buf *buf)
>  {
>  	/*
>  	 * GEN12+: The CCS unit size is 64 bytes mapping 4 main surface
> @@ -703,7 +703,7 @@ fill_object(struct drm_i915_gem_exec_object2 *obj, uint32_t gem_handle,
>  
>  static void exec_blit(int fd,
>  		      struct drm_i915_gem_exec_object2 *objs, uint32_t count,
> -		      int gen)
> +		      unsigned int gen)
>  {
>  	struct drm_i915_gem_execbuffer2 exec = {
>  		.buffers_ptr = to_user_pointer(objs),
> @@ -2416,7 +2416,7 @@ void intel_bb_emit_blt_copy(struct intel_bb *ibb,
>  			    int dst_x1, int dst_y1, int dst_pitch,
>  			    int width, int height, int bpp)
>  {
> -	const int gen = ibb->gen;
> +	const unsigned int gen = ibb->gen;
>  	uint32_t cmd_bits = 0;
>  	uint32_t br13_bits;
>  	uint32_t mask;
> diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
> index d20b4e66a..ab1b0c286 100644
> --- a/lib/intel_batchbuffer.h
> +++ b/lib/intel_batchbuffer.h
> @@ -15,7 +15,7 @@
>  struct intel_batchbuffer {
>  	drm_intel_bufmgr *bufmgr;
>  	uint32_t devid;
> -	int gen;
> +	unsigned int gen;
>  
>  	drm_intel_context *ctx;
>  	drm_intel_bo *bo;
> @@ -263,8 +263,10 @@ static inline bool igt_buf_compressed(const struct igt_buf *buf)
>  
>  unsigned igt_buf_width(const struct igt_buf *buf);
>  unsigned igt_buf_height(const struct igt_buf *buf);
> -unsigned int igt_buf_intel_ccs_width(int gen, const struct igt_buf *buf);
> -unsigned int igt_buf_intel_ccs_height(int gen, const struct igt_buf *buf);
> +unsigned int igt_buf_intel_ccs_width(unsigned int gen,
> +				     const struct igt_buf *buf);
> +unsigned int igt_buf_intel_ccs_height(unsigned int gen,
> +				      const struct igt_buf *buf);
>  
>  void igt_blitter_src_copy(int fd,
>  			  /* src */
> @@ -434,7 +436,7 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
>   */
>  struct intel_bb {
>  	int i915;
> -	int gen;
> +	unsigned int gen;
>  	bool debug;
>  	bool dump_base64;
>  	bool enforce_relocs;
> -- 
> 2.28.0
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Intel-gfx] [igt-dev] [PATCH i-g-t 01/10] i915/gem_userptr_blits: Tighten has_userptr()
  2020-10-14 10:40 ` [igt-dev] " Chris Wilson
@ 2020-10-15 12:53   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 27+ messages in thread
From: Tvrtko Ursulin @ 2020-10-15 12:53 UTC (permalink / raw)
  To: Chris Wilson, igt-dev; +Cc: intel-gfx


On 14/10/2020 11:40, Chris Wilson wrote:
> We use has_userptr() to determine if the different flags are supported,
> so it helps not to override the flags inside the test.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   tests/i915/gem_userptr_blits.c | 12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/tests/i915/gem_userptr_blits.c b/tests/i915/gem_userptr_blits.c
> index 268423dcd..01498edad 100644
> --- a/tests/i915/gem_userptr_blits.c
> +++ b/tests/i915/gem_userptr_blits.c
> @@ -71,8 +71,7 @@
>   #define PAGE_SIZE 4096
>   #endif
>   
> -static uint32_t userptr_flags = I915_USERPTR_UNSYNCHRONIZED;
> -
> +static uint32_t userptr_flags;
>   static bool *can_mmap;
>   
>   #define WIDTH 512
> @@ -504,14 +503,11 @@ static int has_userptr(int fd)
>   {
>   	uint32_t handle = 0;
>   	void *ptr;
> -	uint32_t oldflags;
>   	int ret;
>   
>   	igt_assert(posix_memalign(&ptr, PAGE_SIZE, PAGE_SIZE) == 0);
> -	oldflags = userptr_flags;
> -	gem_userptr_test_unsynchronized();
>   	ret = __gem_userptr(fd, ptr, PAGE_SIZE, 0, userptr_flags, &handle);
> -	userptr_flags = oldflags;
> +	errno = 0;
>   	if (ret != 0) {
>   		free(ptr);
>   		return 0;
> @@ -2112,6 +2108,10 @@ igt_main_args("c:", NULL, help_str, opt_handler, NULL)
>   
>   	igt_subtest_group {
>   		igt_fixture {
> +			/* Either mode will do for parameter checking */
> +			gem_userptr_test_synchronized();
> +			if (!has_userptr(fd))
> +				gem_userptr_test_unsynchronized();
>   			igt_require(has_userptr(fd));
>   		}
>   
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2020-10-15 12:54 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-14 10:40 [Intel-gfx] [PATCH i-g-t 01/10] i915/gem_userptr_blits: Tighten has_userptr() Chris Wilson
2020-10-14 10:40 ` [igt-dev] " Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 02/10] i915/gem_exec_balancer: Check balancer submission latency Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 03/10] i915/gen9_exec_parse: Check oversized batch with length==0 Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-14 10:43   ` [Intel-gfx] " Matthew Auld
2020-10-14 10:43     ` [igt-dev] " Matthew Auld
2020-10-14 10:49     ` [Intel-gfx] " Chris Wilson
2020-10-14 10:49       ` Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 04/10] i915/gem_exec_schedule: Include userptr scheduling tests Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 05/10] i915/gem_exec_balancer: Check interactions between bonds and userptr Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 06/10] i915/gem_exec_reloc: Continuing the trend of checking userptr Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 07/10] i915/gem_userptr_blits: Test execution isolation Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 08/10] lib: Use unsigned gen for forward compatible tests Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-15  4:58   ` [Intel-gfx] " Zbigniew Kempczyński
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 09/10] tests/i915: Treat gen as unsigned for forward compatibility Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-14 10:40 ` [Intel-gfx] [PATCH i-g-t 10/10] i915/gem_exec_schedule: Try to spot unfairness Chris Wilson
2020-10-14 10:40   ` [igt-dev] " Chris Wilson
2020-10-14 11:18 ` [igt-dev] ✓ Fi.CI.BAT: success for series starting with [i-g-t,01/10] i915/gem_userptr_blits: Tighten has_userptr() Patchwork
2020-10-14 17:57 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
2020-10-15 12:53 ` [Intel-gfx] [igt-dev] [PATCH i-g-t 01/10] " Tvrtko Ursulin
2020-10-15 12:53   ` Tvrtko Ursulin
