* [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator
@ 2021-08-16 11:56 Zbigniew Kempczyński
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call Zbigniew Kempczyński
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-16 11:56 UTC (permalink / raw)
  To: igt-dev; +Cc: Zbigniew Kempczyński, Petri Latvala, Ashutosh Dixit

This is the third series which should decrease the coverage gap on
gens without relocation support.

In gem_exec_fence I've left three subtests intact - they are
not trivial to rewrite, so they will be sent in a future series.
I decided to send the test before it is fully rewritten because some
subtests are part of the BAT run and this should make it easier to
enable DG1 on CI.

I had accidentally overlooked some review comments and didn't address
minor nits previously, so I've addressed them in this series
(gem_exec_capture + gem_exec_big).

In gem_exec_schedule the 'pi*' subtests should work properly now. I had
userfault disabled previously, so I noticed skips on CI, but we want to
have these tests.
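
For reference, the general pattern the series applies looks roughly
like this (an illustrative sketch only, assuming an already opened drm
fd; it is not lifted verbatim from any of the patches):

    /* 0 == default context; ahnd == 0 means relocations are still used */
    uint64_t ahnd = get_reloc_ahnd(fd, 0);
    uint32_t bo = gem_create(fd, 4096);
    /* ask the allocator for a stable offset for this object */
    uint64_t offset = get_offset(ahnd, bo, 4096, 0);

    /* ... fill obj.offset + EXEC_OBJECT_PINNED when ahnd != 0, or a
     * relocation entry when ahnd == 0, then gem_execbuf() ... */

    put_offset(ahnd, bo);
    put_ahnd(ahnd);
    gem_close(fd, bo);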

Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>

Zbigniew Kempczyński (5):
  tests/gem_exec_capture: Remove unnecessary multiprocess stop() call
  tests/gem_exec_big: Move check of presumed offset out of no-reloc
    scope
  tests/gem_exec_fence: Adopt to use allocator
  tests/gem_exec_schedule: Adopt to use allocator
  HAX: remove gttfill for tgl ci

 tests/i915/gem_exec_big.c             |   3 +-
 tests/i915/gem_exec_capture.c         |   1 -
 tests/i915/gem_exec_fence.c           | 240 +++++++++----
 tests/i915/gem_exec_schedule.c        | 472 ++++++++++++++++++++------
 tests/intel-ci/fast-feedback.testlist |   1 -
 5 files changed, 541 insertions(+), 176 deletions(-)

-- 
2.26.0

* [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-16 11:56 ` Zbigniew Kempczyński
  2021-08-17  0:25   ` Dixit, Ashutosh
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of presumed offset out of no-reloc scope Zbigniew Kempczyński
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-16 11:56 UTC (permalink / raw)
  To: igt-dev; +Cc: Zbigniew Kempczyński, Petri Latvala, Ashutosh Dixit

I accidentally missed this during review and the line calling
intel_allocator_multiprocess_stop() was left in before the merge.

Remove it as a source of confusion - with igt_fork() we can use a
standalone allocator within the child in some cases (reopening the
driver or working within a newly created context).
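
A minimal sketch of the standalone-allocator-in-the-child case described
above (illustrative only - fd and cfg are assumed to be in scope, this
is not code taken from the patch):

    igt_fork(child, 1) {
        const intel_ctx_t *tmp_ctx;
        uint64_t ahnd;

        /* standalone, child-local allocator working within a newly
         * created context - no multiprocess stop call is involved */
        intel_allocator_init();
        tmp_ctx = intel_ctx_create(fd, cfg);
        ahnd = get_reloc_ahnd(fd, tmp_ctx->id);

        /* ... submit work using offsets taken from ahnd ... */

        put_ahnd(ahnd);
        intel_ctx_destroy(fd, tmp_ctx);
    }
    igt_waitchildren();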

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_capture.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tests/i915/gem_exec_capture.c b/tests/i915/gem_exec_capture.c
index cd6b2f88f..f2ea6cb06 100644
--- a/tests/i915/gem_exec_capture.c
+++ b/tests/i915/gem_exec_capture.c
@@ -612,7 +612,6 @@ static void prioinv(int fd, int dir, const intel_ctx_t *ctx,
 	gem_quiescent_gpu(fd);
 	put_offset(ahnd, obj.handle);
 	put_ahnd(ahnd);
-	intel_allocator_multiprocess_stop();
 }
 
 static void userptr(int fd, int dir)
-- 
2.26.0

* [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of presumed offset out of no-reloc scope
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call Zbigniew Kempczyński
@ 2021-08-16 11:56 ` Zbigniew Kempczyński
  2021-08-17  0:31   ` Dixit, Ashutosh
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 3/5] tests/gem_exec_fence: Adopt to use allocator Zbigniew Kempczyński
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-16 11:56 UTC (permalink / raw)
  To: igt-dev; +Cc: Zbigniew Kempczyński, Petri Latvala, Ashutosh Dixit

Missed while addressing the last review - we don't want to check the
presumed offset on the no-reloc path. Move the check out of the no-reloc
scope.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_big.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_exec_big.c b/tests/i915/gem_exec_big.c
index 90230fc33..2f47de398 100644
--- a/tests/i915/gem_exec_big.c
+++ b/tests/i915/gem_exec_big.c
@@ -98,12 +98,13 @@ static void exec1(int fd, uint32_t handle, uint64_t reloc_ofs, unsigned flags, c
 
 	gem_execbuf(fd, &execbuf);
 
-	igt_warn_on(gem_reloc[0].presumed_offset == -1);
 	gem_set_domain(fd, gem_exec[0].handle, I915_GEM_DOMAIN_WC, 0);
 
 	if (!has_relocs)
 		return;
 
+	igt_warn_on(gem_reloc[0].presumed_offset == -1);
+
 	if (use_64bit_relocs) {
 		uint64_t tmp;
 		if (ptr)
-- 
2.26.0

* [igt-dev] [PATCH i-g-t 3/5] tests/gem_exec_fence: Adopt to use allocator
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call Zbigniew Kempczyński
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of presumed offset out of no-reloc scope Zbigniew Kempczyński
@ 2021-08-16 11:56 ` Zbigniew Kempczyński
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: " Zbigniew Kempczyński
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-16 11:56 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Andrzej Turko, Petri Latvala, Ashutosh Dixit

For newer gens we're not able to rely on relocations. Switch to using
offsets acquired from the allocator.

Three subtests are not covered here; they are:
- syncobj-timeline-chain-engines
- syncobj-stationary-timeline-chain-engines
- syncobj-backward-timeline-chain-engines

Due to the sophisticated nature of the three tests mentioned above,
they will be the subject of a separate patch. At the moment CI depends
on the reloc version only, so this doesn't introduce a regression there
and it decreases the coverage gap on no-reloc discrete runs.
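
The conversion below follows one pattern throughout; roughly (a
condensed sketch, not a copy of any hunk - obj, reloc, batch and the
batch index i are assumed to be set up as in the existing code):

    obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
    obj.relocation_count = !ahnd ? 1 : 0; /* relocate only when ahnd == 0 */

    if (!ahnd) {
        /* legacy path: let the kernel patch in the address */
        reloc.target_handle = obj.handle;
        reloc.presumed_offset = obj.offset;
        /* ... reloc.offset, domains etc. as before ... */
        obj.relocs_ptr = to_user_pointer(&reloc);
    } else {
        /* no-reloc path: softpin at the allocator-provided offset */
        obj.flags |= EXEC_OBJECT_PINNED;
    }

    /* on gen8+ the batch now carries the full 64-bit address */
    batch[++i] = obj.offset;
    batch[++i] = obj.offset >> 32;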

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_fence.c | 240 +++++++++++++++++++++++++-----------
 1 file changed, 171 insertions(+), 69 deletions(-)

diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index 620e7ac22..8859c81cd 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -57,9 +57,10 @@ struct sync_merge_data {
 #define   MI_SEMAPHORE_SAD_EQ_SDD       (4 << 12)
 #define   MI_SEMAPHORE_SAD_NEQ_SDD      (5 << 12)
 
-static void store(int fd, const intel_ctx_t *ctx,
+static void store(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
 		  const struct intel_execution_engine2 *e,
-		  int fence, uint32_t target, unsigned offset_value)
+		  int fence, uint32_t target, uint64_t target_offset,
+		  unsigned offset_value)
 {
 	const int SCRATCH = 0;
 	const int BATCH = 1;
@@ -67,7 +68,8 @@ static void store(int fd, const intel_ctx_t *ctx,
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
-	uint32_t batch[16];
+	uint32_t batch[16], delta;
+	uint64_t bb_offset;
 	int i;
 
 	memset(&execbuf, 0, sizeof(execbuf));
@@ -84,33 +86,43 @@ static void store(int fd, const intel_ctx_t *ctx,
 
 	obj[BATCH].handle = gem_create(fd, 4096);
 	obj[BATCH].relocs_ptr = to_user_pointer(&reloc);
-	obj[BATCH].relocation_count = 1;
+	obj[BATCH].relocation_count = !ahnd ? 1 : 0;
+	bb_offset = get_offset(ahnd, obj[BATCH].handle, 4096, 0);
 	memset(&reloc, 0, sizeof(reloc));
 
 	i = 0;
-	reloc.target_handle = obj[SCRATCH].handle;
-	reloc.presumed_offset = -1;
-	reloc.offset = sizeof(uint32_t) * (i + 1);
-	reloc.delta = sizeof(uint32_t) * offset_value;
-	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-	reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+	delta = sizeof(uint32_t) * offset_value;
+	if (!ahnd) {
+		reloc.target_handle = obj[SCRATCH].handle;
+		reloc.presumed_offset = -1;
+		reloc.offset = sizeof(uint32_t) * (i + 1);
+		reloc.delta = delta;
+		reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+		reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+	} else {
+		obj[SCRATCH].offset = target_offset;
+		obj[SCRATCH].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[BATCH].offset = bb_offset;
+		obj[BATCH].flags |= EXEC_OBJECT_PINNED;
+	}
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = reloc.delta;
-		batch[++i] = 0;
+		batch[++i] = target_offset + delta;
+		batch[++i] = target_offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
-		batch[++i] = reloc.delta;
+		batch[++i] = delta;
 		reloc.offset += sizeof(uint32_t);
 	} else {
 		batch[i]--;
-		batch[++i] = reloc.delta;
+		batch[++i] = delta;
 	}
 	batch[++i] = offset_value;
 	batch[++i] = MI_BATCH_BUFFER_END;
 	gem_write(fd, obj[BATCH].handle, 0, batch, sizeof(batch));
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[BATCH].handle);
+	put_offset(ahnd, obj[BATCH].handle);
 }
 
 static bool fence_busy(int fence)
@@ -132,6 +144,7 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct timespec tv;
 	uint32_t *batch;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	int fence, i, timeout;
 
 	if ((flags & HANG) == 0)
@@ -147,10 +160,7 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = gem_create(fd, 4096);
-
-	obj.relocs_ptr = to_user_pointer(&reloc);
-	obj.relocation_count = 1;
-	memset(&reloc, 0, sizeof(reloc));
+	obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
 
 	batch = gem_mmap__device_coherent(fd, obj.handle, 0, 4096, PROT_WRITE);
 	gem_set_domain(fd, obj.handle,
@@ -160,24 +170,31 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 	if ((flags & HANG) == 0)
 		batch[i++] = 0x5 << 23;
 
-	reloc.target_handle = obj.handle; /* recurse */
-	reloc.presumed_offset = 0;
-	reloc.offset = (i + 1) * sizeof(uint32_t);
-	reloc.delta = 0;
-	reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
-	reloc.write_domain = 0;
+	if (!ahnd) {
+		obj.relocs_ptr = to_user_pointer(&reloc);
+		obj.relocation_count = 1;
+		memset(&reloc, 0, sizeof(reloc));
+		reloc.target_handle = obj.handle; /* recurse */
+		reloc.presumed_offset = obj.offset;
+		reloc.offset = (i + 1) * sizeof(uint32_t);
+		reloc.delta = 0;
+		reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
+		reloc.write_domain = 0;
+	} else {
+		obj.flags |= EXEC_OBJECT_PINNED;
+	}
 
 	batch[i] = MI_BATCH_BUFFER_START;
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
+		batch[++i] = obj.offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 	} else {
 		batch[i] |= 2 << 6;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 		if (gen < 4) {
 			batch[i] |= 1;
 			reloc.delta = 1;
@@ -216,6 +233,8 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 
 	close(fence);
 	gem_close(fd, obj.handle);
+	put_offset(ahnd, obj.handle);
+	put_ahnd(ahnd);
 
 	gem_quiescent_gpu(fd);
 }
@@ -229,6 +248,7 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct timespec tv;
 	uint32_t *batch;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	int all, i, timeout;
 
 	gem_quiescent_gpu(fd);
@@ -239,10 +259,8 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = gem_create(fd, 4096);
-
-	obj.relocs_ptr = to_user_pointer(&reloc);
-	obj.relocation_count = 1;
-	memset(&reloc, 0, sizeof(reloc));
+	obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
+	igt_assert(obj.offset != -1);
 
 	batch = gem_mmap__device_coherent(fd, obj.handle, 0, 4096, PROT_WRITE);
 	gem_set_domain(fd, obj.handle,
@@ -252,24 +270,31 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 	if ((flags & HANG) == 0)
 		batch[i++] = 0x5 << 23;
 
-	reloc.target_handle = obj.handle; /* recurse */
-	reloc.presumed_offset = 0;
-	reloc.offset = (i + 1) * sizeof(uint32_t);
-	reloc.delta = 0;
-	reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
-	reloc.write_domain = 0;
+	if (!ahnd) {
+		obj.relocs_ptr = to_user_pointer(&reloc);
+		obj.relocation_count = 1;
+		memset(&reloc, 0, sizeof(reloc));
+		reloc.target_handle = obj.handle; /* recurse */
+		reloc.presumed_offset = obj.offset;
+		reloc.offset = (i + 1) * sizeof(uint32_t);
+		reloc.delta = 0;
+		reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
+		reloc.write_domain = 0;
+	} else {
+		obj.flags |= EXEC_OBJECT_PINNED;
+	}
 
 	batch[i] = MI_BATCH_BUFFER_START;
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
+		batch[++i] = obj.offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 	} else {
 		batch[i] |= 2 << 6;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 		if (gen < 4) {
 			batch[i] |= 1;
 			reloc.delta = 1;
@@ -331,6 +356,8 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 
 	close(all);
 	gem_close(fd, obj.handle);
+	put_offset(ahnd, obj.handle);
+	put_ahnd(ahnd);
 
 	gem_quiescent_gpu(fd);
 }
@@ -351,13 +378,17 @@ static void test_fence_await(int fd, const intel_ctx_t *ctx,
 	uint32_t scratch = gem_create(fd, 4096);
 	igt_spin_t *spin;
 	uint32_t *out;
+	uint64_t scratch_offset, ahnd = get_reloc_ahnd(fd, ctx->id);
 	int i;
 
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
+
 	out = gem_mmap__device_coherent(fd, scratch, 0, 4096, PROT_WRITE);
 	gem_set_domain(fd, scratch,
 			I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 
 	spin = igt_spin_new(fd,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .flags = IGT_SPIN_FENCE_OUT | spin_hang(flags));
@@ -369,10 +400,15 @@ static void test_fence_await(int fd, const intel_ctx_t *ctx,
 			continue;
 
 		if (flags & NONBLOCK) {
-			store(fd, ctx, e2, spin->out_fence, scratch, i);
+			store(fd, ahnd, ctx, e2, spin->out_fence,
+			      scratch, scratch_offset, i);
 		} else {
-			igt_fork(child, 1)
-				store(fd, ctx, e2, spin->out_fence, scratch, i);
+			igt_fork(child, 1) {
+				ahnd = get_reloc_ahnd(fd, ctx->id);
+				store(fd, ahnd, ctx, e2, spin->out_fence,
+				      scratch, scratch_offset, i);
+				put_ahnd(ahnd);
+			}
 		}
 
 		i++;
@@ -398,6 +434,8 @@ static void test_fence_await(int fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(fd, spin);
 	gem_close(fd, scratch);
+	put_offset(ahnd, scratch);
+	put_ahnd(ahnd);
 }
 
 static uint32_t timeslicing_batches(int i915, uint32_t *offset)
@@ -623,9 +661,12 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 	igt_spin_t *spin;
 	int fence;
 	int x = 0;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), bb_offset;
+	uint64_t scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 
 	fence = igt_cork_plug(&cork, i915),
 	spin = igt_spin_new(i915,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .fence = fence,
@@ -644,7 +685,7 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 			{ .handle = scratch, },
 			{
 				.relocs_ptr = to_user_pointer(&reloc),
-				.relocation_count = 1,
+				.relocation_count = !ahnd ? 1 : 0,
 			}
 		};
 		struct drm_i915_gem_execbuffer2 execbuf = {
@@ -662,11 +703,19 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 
 		obj[1].handle = gem_create(i915, 4096);
 
+		if (ahnd) {
+			bb_offset = get_offset(ahnd, obj[1].handle, 4096, 0);
+			obj[1].offset = bb_offset;
+			obj[1].flags = EXEC_OBJECT_PINNED;
+			obj[0].offset = scratch_offset;
+			obj[0].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		}
+
 		i = 0;
 		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 		if (gen >= 8) {
-			batch[++i] = reloc.delta;
-			batch[++i] = 0;
+			batch[++i] = scratch_offset + reloc.delta;
+			batch[++i] = scratch_offset >> 32;
 		} else if (gen >= 4) {
 			batch[++i] = 0;
 			batch[++i] = reloc.delta;
@@ -687,6 +736,7 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 	}
 	igt_assert(gem_bo_busy(i915, spin->handle));
 	gem_close(i915, scratch);
+	put_offset(ahnd, scratch);
 	igt_require(x);
 
 	/*
@@ -713,18 +763,21 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 
 		igt_assert_eq_u32(out[i], ~i);
 		gem_close(i915, handle[i]);
+		put_offset(ahnd, handle[i]);
 	}
 	munmap(out, 4096);
 
 	/* Master should still be spinning, but all output should be written */
 	igt_assert(gem_bo_busy(i915, spin->handle));
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_concurrent(int i915, const intel_ctx_t *ctx,
 			    const struct intel_execution_engine2 *e)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 	struct drm_i915_gem_relocation_entry reloc = {
 		.target_handle =  gem_create(i915, 4096),
 		.write_domain = I915_GEM_DOMAIN_RENDER,
@@ -735,7 +788,7 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 		{
 			.handle = gem_create(i915, 4096),
 			.relocs_ptr = to_user_pointer(&reloc),
-			.relocation_count = 1,
+			.relocation_count = !ahnd ? 1 : 0,
 		}
 	};
 	struct drm_i915_gem_execbuffer2 execbuf = {
@@ -749,9 +802,19 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	igt_spin_t *spin;
 	const intel_ctx_t *tmp_ctx;
 	uint32_t result;
+	uint64_t bb_offset, target_offset;
 	int fence;
 	int i;
 
+	bb_offset = get_offset(ahnd, obj[1].handle, 4096, 0);
+	target_offset = get_offset(ahnd, obj[0].handle, 4096, 0);
+	if (ahnd) {
+		obj[1].offset = bb_offset;
+		obj[1].flags = EXEC_OBJECT_PINNED;
+		obj[0].offset = target_offset;
+		obj[0].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+	}
+
 	/*
 	 * A variant of test_parallel() that runs a bonded pair on a single
 	 * engine and ensures that the secondary batch cannot start before
@@ -760,6 +823,7 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 
 	fence = igt_cork_plug(&cork, i915),
 	      spin = igt_spin_new(i915,
+				  .ahnd = ahnd,
 				  .ctx = ctx,
 				  .engine = e->flags,
 				  .fence = fence,
@@ -770,8 +834,8 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = reloc.delta;
-		batch[++i] = 0;
+		batch[++i] = target_offset + reloc.delta;
+		batch[++i] = target_offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = reloc.delta;
@@ -793,6 +857,7 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	gem_execbuf(i915, &execbuf);
 	intel_ctx_destroy(i915, tmp_ctx);
 	gem_close(i915, obj[1].handle);
+	put_offset(ahnd, obj[1].handle);
 
 	/*
 	 * No secondary should be executed since master is stalled. If there
@@ -814,10 +879,12 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	gem_read(i915, obj[0].handle, 0, &result, sizeof(result));
 	igt_assert_eq_u32(result, 0xd0df0d);
 	gem_close(i915, obj[0].handle);
+	put_offset(ahnd, obj[0].handle);
 
 	/* Master should still be spinning, but all output should be written */
 	igt_assert(gem_bo_busy(i915, spin->handle));
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_submit_chain(int i915, const intel_ctx_t *ctx)
@@ -827,12 +894,14 @@ static void test_submit_chain(int i915, const intel_ctx_t *ctx)
 	IGT_LIST_HEAD(list);
 	IGT_CORK_FENCE(cork);
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	/* Check that we can simultaneously launch spinners on each engine */
 
 	fence = igt_cork_plug(&cork, i915);
 	for_each_ctx_engine(i915, ctx, e) {
 		spin = igt_spin_new(i915,
+				    .ahnd = ahnd,
 				    .ctx = ctx,
 				    .engine = e->flags,
 				    .fence = fence,
@@ -860,6 +929,7 @@ static void test_submit_chain(int i915, const intel_ctx_t *ctx)
 		igt_assert_eq(sync_fence_status(spin->out_fence), 1);
 		igt_spin_free(i915, spin);
 	}
+	put_ahnd(ahnd);
 }
 
 static uint32_t batch_create(int fd)
@@ -889,9 +959,10 @@ static void test_keep_in_fence(int fd, const intel_ctx_t *ctx,
 	unsigned long count, last;
 	struct itimerval itv;
 	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	int fence;
 
-	spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
+	spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx, .engine = e->flags);
 
 	gem_execbuf_wr(fd, &execbuf);
 	fence = upper_32_bits(execbuf.rsvd2);
@@ -940,6 +1011,7 @@ static void test_keep_in_fence(int fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(fd, spin);
 	gem_quiescent_gpu(fd);
+	put_ahnd(ahnd);
 }
 
 #define EXPIRED 0x10000
@@ -1165,7 +1237,8 @@ static void test_syncobj_unused_fence(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	igt_spin_t *spin = igt_spin_new(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* sanity check our syncobj_to_sync_file interface */
 	igt_assert_eq(__syncobj_to_sync_file(fd, 0), -ENOENT);
@@ -1191,6 +1264,7 @@ static void test_syncobj_unused_fence(int fd)
 	syncobj_destroy(fd, fence.handle);
 
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_invalid_wait(int fd)
@@ -1257,7 +1331,8 @@ static void test_syncobj_signal(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	igt_spin_t *spin = igt_spin_new(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that the syncobj is signaled only when our request/fence is */
 
@@ -1286,6 +1361,7 @@ static void test_syncobj_signal(int fd)
 
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, fence.handle);
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
@@ -1300,6 +1376,7 @@ static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 	unsigned handle[I915_EXEC_RING_MASK + 1];
 	igt_spin_t *spin;
 	int n;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
 
 	/* Check that we can use the syncobj to asynchronous wait prior to
 	 * execution.
@@ -1307,7 +1384,7 @@ static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 
 	gem_quiescent_gpu(fd);
 
-	spin = igt_spin_new(fd);
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	memset(&execbuf, 0, sizeof(execbuf));
 	execbuf.buffers_ptr = to_user_pointer(&obj);
@@ -1357,6 +1434,8 @@ static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 		gem_sync(fd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
+
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_export(int fd)
@@ -1368,7 +1447,10 @@ static void test_syncobj_export(int fd)
 		.handle = syncobj_create(fd, 0),
 	};
 	int export[2];
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that if we export the syncobj prior to use it picks up
 	 * the later fence. This allows a syncobj to establish a channel
@@ -1416,6 +1498,8 @@ static void test_syncobj_export(int fd)
 		syncobj_destroy(fd, import);
 		close(export[n]);
 	}
+
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_repeat(int fd)
@@ -1426,7 +1510,10 @@ static void test_syncobj_repeat(int fd)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_fence *fence;
 	int export;
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that we can wait on the same fence multiple times */
 	fence = calloc(nfences, sizeof(*fence));
@@ -1474,6 +1561,8 @@ static void test_syncobj_repeat(int fd)
 		syncobj_destroy(fd, fence[i].handle);
 	}
 	free(fence);
+
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_import(int fd)
@@ -1481,7 +1570,8 @@ static void test_syncobj_import(int fd)
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_execbuffer2 execbuf;
-	igt_spin_t *spin = igt_spin_new(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 	uint32_t sync = syncobj_create(fd, 0);
 	int fence;
 
@@ -1517,6 +1607,7 @@ static void test_syncobj_import(int fd)
 
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, sync);
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_channel(int fd)
@@ -1808,8 +1899,8 @@ static void test_syncobj_timeline_unused_fence(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	igt_spin_t *spin = igt_spin_new(fd);
-	uint64_t value = 1;
+	uint64_t value = 1, ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* sanity check our syncobj_to_sync_file interface */
 	igt_assert_eq(__syncobj_to_sync_file(fd, 0), -ENOENT);
@@ -1841,6 +1932,7 @@ static void test_syncobj_timeline_unused_fence(int fd)
 	syncobj_destroy(fd, fence.handle);
 
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_invalid_wait_desc =
@@ -1949,7 +2041,7 @@ static void test_syncobj_timeline_signal(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	uint64_t value = 42, query_value;
+	uint64_t value = 42, query_value, ahnd = get_reloc_ahnd(fd, 0);
 	igt_spin_t *spin;
 
 	/* Check that the syncobj is signaled only when our request/fence is */
@@ -1974,7 +2066,7 @@ static void test_syncobj_timeline_signal(int fd)
 	fence.flags = I915_EXEC_FENCE_SIGNAL;
 
 	/* Check syncobj after waiting on the buffer handle. */
-	spin = igt_spin_new(fd);
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 	gem_execbuf(fd, &execbuf);
 
 	igt_assert(gem_bo_busy(fd, obj.handle));
@@ -1993,7 +2085,7 @@ static void test_syncobj_timeline_signal(int fd)
 	syncobj_timeline_query(fd, &fence.handle, &query_value, 1);
 	igt_assert_eq(query_value, value);
 
-	spin = igt_spin_new(fd);
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/*
 	 * Wait on the syncobj and verify the state of the buffer
@@ -2024,6 +2116,7 @@ static void test_syncobj_timeline_signal(int fd)
 
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, fence.handle);
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_wait_desc =
@@ -2046,7 +2139,7 @@ static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 	};
 	unsigned handle[I915_EXEC_RING_MASK + 1];
 	const struct intel_execution_engine2 *e;
-	uint64_t value = 1;
+	uint64_t value = 1, ahnd = get_reloc_ahnd(fd, ctx->id);
 	igt_spin_t *spin;
 	int n;
 
@@ -2056,7 +2149,7 @@ static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 
 	gem_quiescent_gpu(fd);
 
-	spin = igt_spin_new(fd, .ctx = ctx, .engine = ALL_ENGINES);
+	spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx, .engine = ALL_ENGINES);
 
 	memset(&timeline_fences, 0, sizeof(timeline_fences));
 	timeline_fences.base.name = DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES;
@@ -2105,6 +2198,7 @@ static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 		gem_sync(fd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_export_desc =
@@ -2121,9 +2215,9 @@ static void test_syncobj_timeline_export(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	uint64_t value = 1;
+	uint64_t value = 1, ahnd = get_reloc_ahnd(fd, 0);
 	int export[2];
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that if we export the syncobj prior to use it picks up
 	 * the later fence. This allows a syncobj to establish a channel
@@ -2177,6 +2271,7 @@ static void test_syncobj_timeline_export(int fd)
 		syncobj_destroy(fd, import);
 		close(export[n]);
 	}
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_repeat_desc =
@@ -2193,9 +2288,9 @@ static void test_syncobj_timeline_repeat(int fd)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_execbuffer_ext_timeline_fences timeline_fences;
 	struct drm_i915_gem_exec_fence *fence;
-	uint64_t *values;
+	uint64_t *values, ahnd = get_reloc_ahnd(fd, 0);
 	int export;
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that we can wait on the same fence multiple times */
 	fence = calloc(nfences, sizeof(*fence));
@@ -2266,6 +2361,7 @@ static void test_syncobj_timeline_repeat(int fd)
 	}
 	free(fence);
 	free(values);
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_multiple_ext_nodes_desc =
@@ -3005,6 +3101,7 @@ igt_main
 		igt_subtest_group {
 			igt_fixture {
 				igt_fork_hang_detector(i915);
+				intel_allocator_multiprocess_start();
 			}
 
 			igt_subtest_with_dynamic("basic-busy") {
@@ -3097,6 +3194,7 @@ igt_main
 			}
 
 			igt_fixture {
+				intel_allocator_multiprocess_stop();
 				igt_stop_hang_detector();
 			}
 		}
@@ -3106,6 +3204,7 @@ igt_main
 
 			igt_fixture {
 				hang = igt_allow_hang(i915, 0, 0);
+				intel_allocator_multiprocess_start();
 			}
 
 			igt_subtest_with_dynamic("busy-hang") {
@@ -3133,6 +3232,7 @@ igt_main
 				}
 			}
 			igt_fixture {
+				intel_allocator_multiprocess_stop();
 				igt_disallow_hang(i915, hang);
 			}
 		}
@@ -3162,6 +3262,7 @@ igt_main
 			igt_require(exec_has_fence_array(i915));
 			igt_assert(has_syncobj(i915));
 			igt_fork_hang_detector(i915);
+			intel_allocator_multiprocess_start();
 		}
 
 		igt_subtest("invalid-fence-array")
@@ -3195,6 +3296,7 @@ igt_main
 			test_syncobj_channel(i915);
 
 		igt_fixture {
+			intel_allocator_multiprocess_stop();
 			igt_stop_hang_detector();
 		}
 	}
-- 
2.26.0

* [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: Adopt to use allocator
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
                   ` (2 preceding siblings ...)
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 3/5] tests/gem_exec_fence: Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-16 11:56 ` Zbigniew Kempczyński
  2021-08-17  2:20   ` Dixit, Ashutosh
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 5/5] HAX: remove gttfill for tgl ci Zbigniew Kempczyński
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-16 11:56 UTC (permalink / raw)
  To: igt-dev; +Cc: Zbigniew Kempczyński, Petri Latvala, Ashutosh Dixit

Alter tests to cover reloc and no-reloc (softpin) modes.

v2: fix pi-* subtests
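
Most of the mechanical changes follow the same shape: each spinner now
takes an allocator handle tied to its context, and the handle is
released once the spinner is freed. Roughly (a sketch only, assuming
fd, cfg and engine are in scope; not code copied from the diff):

    const intel_ctx_t *ctx = intel_ctx_create(fd, cfg);
    uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
    igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx,
                                    .engine = engine);

    /* ... exercise scheduling against the spinner ... */

    igt_spin_free(fd, spin);
    put_ahnd(ahnd);
    intel_ctx_destroy(fd, ctx);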

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_schedule.c | 472 +++++++++++++++++++++++++--------
 1 file changed, 368 insertions(+), 104 deletions(-)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index e5fb45982..eb3c1b486 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -91,9 +91,11 @@ void __sync_read_u32_count(int fd, uint32_t handle, uint32_t *dst, uint64_t size
 	gem_read(fd, handle, 0, dst, size);
 }
 
-static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
-			      uint32_t target, uint32_t offset, uint32_t value,
-			      uint32_t cork, int fence, unsigned write_domain)
+static uint32_t __store_dword(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			      unsigned ring, uint32_t target, uint64_t target_offset,
+			      uint32_t offset, uint32_t value,
+			      uint32_t cork, uint64_t cork_offset,
+			      int fence, unsigned write_domain)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[3];
@@ -117,12 +119,23 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = cork;
-	obj[0].offset = cork << 20;
 	obj[1].handle = target;
-	obj[1].offset = target << 20;
 	obj[2].handle = gem_create(fd, 4096);
-	obj[2].offset = 256 << 10;
-	obj[2].offset += (random() % 128) << 12;
+	if (ahnd) {
+		obj[0].offset = cork_offset;
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].offset = target_offset;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		if (write_domain)
+			obj[1].flags |= EXEC_OBJECT_WRITE;
+		obj[2].offset = get_offset(ahnd, obj[2].handle, 4096, 0);
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	} else {
+		obj[0].offset = cork << 20;
+		obj[1].offset = target << 20;
+		obj[2].offset = 256 << 10;
+		obj[2].offset += (random() % 128) << 12;
+	}
 
 	memset(&reloc, 0, sizeof(reloc));
 	reloc.target_handle = obj[1].handle;
@@ -132,13 +145,13 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 	reloc.write_domain = write_domain;
 	obj[2].relocs_ptr = to_user_pointer(&reloc);
-	obj[2].relocation_count = 1;
+	obj[2].relocation_count = !ahnd ? 1 : 0;
 
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
 		batch[++i] = reloc.presumed_offset + reloc.delta;
-		batch[++i] = 0;
+		batch[++i] = (reloc.presumed_offset + reloc.delta) >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = reloc.presumed_offset + reloc.delta;
@@ -155,31 +168,38 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 	return obj[2].handle;
 }
 
-static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
-			uint32_t target, uint32_t offset, uint32_t value,
+static void store_dword(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			unsigned ring,
+			uint32_t target, uint64_t target_offset,
+			uint32_t offset, uint32_t value,
 			unsigned write_domain)
 {
-	gem_close(fd, __store_dword(fd, ctx, ring,
-				    target, offset, value,
-				    0, -1, write_domain));
+	gem_close(fd, __store_dword(fd, ahnd, ctx, ring,
+				    target, target_offset, offset, value,
+				    0, 0, -1, write_domain));
 }
 
-static void store_dword_plug(int fd, const intel_ctx_t *ctx, unsigned ring,
-			     uint32_t target, uint32_t offset, uint32_t value,
-			     uint32_t cork, unsigned write_domain)
+static void store_dword_plug(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			     unsigned ring,
+			     uint32_t target, uint64_t target_offset,
+			     uint32_t offset, uint32_t value,
+			     uint32_t cork, uint64_t cork_offset,
+			     unsigned write_domain)
 {
-	gem_close(fd, __store_dword(fd, ctx, ring,
-				    target, offset, value,
-				    cork, -1, write_domain));
+	gem_close(fd, __store_dword(fd, ahnd, ctx, ring,
+				    target, target_offset, offset, value,
+				    cork, cork_offset, -1, write_domain));
 }
 
-static void store_dword_fenced(int fd, const intel_ctx_t *ctx, unsigned ring,
-			       uint32_t target, uint32_t offset, uint32_t value,
+static void store_dword_fenced(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			       unsigned ring,
+			       uint32_t target, uint64_t target_offset,
+			       uint32_t offset, uint32_t value,
 			       int fence, unsigned write_domain)
 {
-	gem_close(fd, __store_dword(fd, ctx, ring,
-				    target, offset, value,
-				    0, fence, write_domain));
+	gem_close(fd, __store_dword(fd, ahnd, ctx, ring,
+				    target, target_offset, offset, value,
+				    0, 0, fence, write_domain));
 }
 
 static const intel_ctx_t *
@@ -210,15 +230,21 @@ static void unplug_show_queue(int fd, struct igt_cork *c,
 
 	for (int n = 0; n < max; n++) {
 		const intel_ctx_t *ctx = create_highest_priority(fd, cfg);
-		spin[n] = __igt_spin_new(fd, .ctx = ctx, .engine = engine);
+		uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
+
+		spin[n] = __igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx,
+					 .engine = engine);
 		intel_ctx_destroy(fd, ctx);
 	}
 
 	igt_cork_unplug(c); /* batches will now be queued on the engine */
 	igt_debugfs_dump(fd, "i915_engine_info");
 
-	for (int n = 0; n < max; n++)
+	for (int n = 0; n < max; n++) {
+		uint64_t ahnd = spin[n]->ahnd;
 		igt_spin_free(fd, spin[n]);
+		put_ahnd(ahnd);
+	}
 
 }
 
@@ -228,20 +254,26 @@ static void fifo(int fd, const intel_ctx_t *ctx, unsigned ring)
 	uint32_t scratch;
 	uint32_t result;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id), scratch_offset;
 
 	scratch = gem_create(fd, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 
 	fence = igt_cork_plug(&cork, fd);
 
 	/* Same priority, same timeline, final result will be the second eb */
-	store_dword_fenced(fd, ctx, ring, scratch, 0, 1, fence, 0);
-	store_dword_fenced(fd, ctx, ring, scratch, 0, 2, fence, 0);
+	store_dword_fenced(fd, ahnd, ctx, ring, scratch, scratch_offset,
+			   0, 1, fence, 0);
+	store_dword_fenced(fd, ahnd, ctx, ring, scratch, scratch_offset,
+			   0, 2, fence, 0);
 
 	unplug_show_queue(fd, &cork, &ctx->cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(fd, scratch, 0);
 	gem_close(fd, scratch);
+	put_offset(ahnd, scratch);
+	put_ahnd(ahnd);
 
 	igt_assert_eq_u32(result, 2);
 }
@@ -260,6 +292,7 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 	uint32_t scratch;
 	uint32_t result;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), scratch_offset;
 
 	count = 0;
 	for_each_ctx_engine(i915, ctx, e) {
@@ -274,11 +307,12 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 	igt_require(count);
 
 	scratch = gem_create(i915, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 	fence = igt_cork_plug(&cork, i915);
 
 	if (dir & WRITE_READ)
-		store_dword_fenced(i915, ctx,
-				   ring, scratch, 0, ~ring,
+		store_dword_fenced(i915, ahnd, ctx,
+				   ring, scratch, scratch_offset, 0, ~ring,
 				   fence, I915_GEM_DOMAIN_RENDER);
 
 	for_each_ctx_engine(i915, ctx, e) {
@@ -288,14 +322,14 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		store_dword_fenced(i915, ctx,
-				   e->flags, scratch, 0, e->flags,
+		store_dword_fenced(i915, ahnd, ctx,
+				   e->flags, scratch, scratch_offset, 0, e->flags,
 				   fence, 0);
 	}
 
 	if (dir & READ_WRITE)
-		store_dword_fenced(i915, ctx,
-				   ring, scratch, 0, ring,
+		store_dword_fenced(i915, ahnd, ctx,
+				   ring, scratch, scratch_offset, 0, ring,
 				   fence, I915_GEM_DOMAIN_RENDER);
 
 	unplug_show_queue(i915, &cork, &ctx->cfg, ring);
@@ -303,6 +337,8 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 
 	result =  __sync_read_u32(i915, scratch, 0);
 	gem_close(i915, scratch);
+	put_offset(ahnd, scratch);
+	put_ahnd(ahnd);
 
 	if (dir & WRITE_READ)
 		igt_assert_neq_u32(result, ~ring);
@@ -319,8 +355,10 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 	uint32_t scratch, batch;
 	uint32_t *ptr;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id), scratch_offset;
 
 	scratch = gem_create(fd, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 	ptr = gem_mmap__device_coherent(fd, scratch, 0, 4096, PROT_READ);
 	igt_assert_eq(ptr[0], 0);
 
@@ -336,6 +374,7 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
+					      .ahnd = ahnd,
 					      .ctx = ctx,
 					      .engine = e->flags,
 					      .flags = flags);
@@ -348,12 +387,15 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 			gem_execbuf(fd, &eb);
 		}
 
-		store_dword_fenced(fd, ctx, e->flags, scratch, 0, e->flags, fence, 0);
+		store_dword_fenced(fd, ahnd, ctx, e->flags,
+				   scratch, scratch_offset,
+				   0, e->flags, fence, 0);
 	}
 	igt_require(spin);
 
 	/* Same priority, but different timeline (as different engine) */
-	batch = __store_dword(fd, ctx, engine, scratch, 0, engine, 0, fence, 0);
+	batch = __store_dword(fd, ahnd, ctx, engine, scratch, scratch_offset,
+			      0, engine, 0, 0, fence, 0);
 
 	unplug_show_queue(fd, &cork, &ctx->cfg, engine);
 	close(fence);
@@ -369,6 +411,8 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 
 	igt_spin_free(fd, spin);
 	gem_quiescent_gpu(fd);
+	put_offset(ahnd, scratch);
+	put_ahnd(ahnd);
 
 	/* And we expect the others to have overwritten us, order unspecified */
 	igt_assert(!gem_bo_busy(fd, scratch));
@@ -388,6 +432,7 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 	unsigned engine;
 	uint32_t scratch;
 	uint32_t result[2 * ncpus];
+	uint64_t scratch_offset;
 
 	nengine = 0;
 	if (ring == ALL_ENGINES) {
@@ -400,13 +445,19 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 	igt_require(nengine);
 
 	scratch = gem_create(fd, 4096);
+
 	igt_fork(child, ncpus) {
 		unsigned long count = 0;
 		const intel_ctx_t *ctx;
+		uint64_t ahnd;
+
+		intel_allocator_init();
 
 		hars_petruska_f54_1_random_perturb(child);
 
 		ctx = intel_ctx_create(fd, cfg);
+		ahnd = get_reloc_ahnd(fd, ctx->id);
+		scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 		igt_until_timeout(timeout) {
 			int prio;
 
@@ -414,15 +465,18 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 			gem_context_set_priority(fd, ctx->id, prio);
 
 			engine = engines[hars_petruska_f54_1_random_unsafe_max(nengine)];
-			store_dword(fd, ctx, engine, scratch,
-				    8*child + 0, ~child,
-				    0);
+			store_dword(fd, ahnd, ctx, engine,
+				    scratch, scratch_offset,
+				    8*child + 0, ~child, 0);
 			for (unsigned int step = 0; step < 8; step++)
-				store_dword(fd, ctx, engine, scratch,
+				store_dword(fd, ahnd, ctx, engine,
+					    scratch, scratch_offset,
 					    8*child + 4, count++,
 					    0);
 		}
 		intel_ctx_destroy(fd, ctx);
+		put_offset(ahnd, scratch);
+		put_ahnd(ahnd);
 	}
 	igt_waitchildren();
 
@@ -644,12 +698,15 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 {
 	const intel_ctx_t *ctx;
 	igt_spin_t *spin[3];
+	uint64_t ahnd[3];
 
 	igt_require(gem_scheduler_has_timeslicing(i915));
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
 	ctx = intel_ctx_create(i915, cfg);
-	spin[0] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	ahnd[0] = get_reloc_ahnd(i915, ctx->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx,
+			       .engine = engine,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT |
 					 flags));
@@ -658,7 +715,9 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx = intel_ctx_create(i915, cfg);
-	spin[1] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	ahnd[1] = get_reloc_ahnd(i915, ctx->id);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx,
+			       .engine = engine,
 			       .fence = spin[0]->out_fence,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_IN |
@@ -675,7 +734,9 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 	 */
 
 	ctx = intel_ctx_create(i915, cfg);
-	spin[2] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	ahnd[2] = get_reloc_ahnd(i915, ctx->id);
+	spin[2] = igt_spin_new(i915, .ahnd = ahnd[2], .ctx = ctx,
+			       .engine = engine,
 			       .flags = IGT_SPIN_POLL_RUN | flags);
 	intel_ctx_destroy(i915, ctx);
 
@@ -696,6 +757,9 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 
 	igt_assert(gem_bo_busy(i915, spin[1]->handle));
 	igt_spin_free(i915, spin[1]);
+
+	for (int i = 0; i < ARRAY_SIZE(ahnd); i++)
+		put_ahnd(ahnd[i]);
 }
 
 static void cancel_spinner(int i915,
@@ -742,6 +806,7 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		.num_engines = 1,
 	};
 	const intel_ctx_t *ctx;
+	uint64_t ahnd0 = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * When using a submit fence, we do not want to block concurrent work,
@@ -755,13 +820,14 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		igt_spin_t *bg, *spin;
 		int timeline = -1;
 		int fence = -1;
+		uint64_t ahndN;
 
 		if (!gem_class_can_store_dword(i915, cancel->class))
 			continue;
 
 		igt_debug("Testing cancellation from %s\n", e->name);
 
-		bg = igt_spin_new(i915, .engine = e->flags);
+		bg = igt_spin_new(i915, .ahnd = ahnd0, .engine = e->flags);
 
 		if (flags & LATE_SUBMIT) {
 			timeline = sw_sync_timeline_create();
@@ -771,7 +837,8 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		engine_cfg.engines[0].engine_class = e->class;
 		engine_cfg.engines[0].engine_instance = e->instance;
 		ctx = intel_ctx_create(i915, &engine_cfg);
-		spin = igt_spin_new(i915, .ctx = ctx,
+		ahndN = get_reloc_ahnd(i915, ctx->id);
+		spin = igt_spin_new(i915, .ahnd = ahndN, .ctx = ctx,
 				    .fence = fence,
 				    .flags =
 				    IGT_SPIN_POLL_RUN |
@@ -800,7 +867,10 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		igt_spin_free(i915, bg);
 
 		intel_ctx_destroy(i915, ctx);
+		put_ahnd(ahndN);
 	}
+
+	put_ahnd(ahnd0);
 }
 
 static uint32_t __batch_create(int i915, uint32_t offset)
@@ -829,6 +899,7 @@ static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
 	igt_spin_t *spin = NULL;
 	uint32_t scratch;
 	const intel_ctx_t *tmp_ctx;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	igt_require(gem_scheduler_has_timeslicing(i915));
 
@@ -843,6 +914,7 @@ static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
 	for_each_ctx_engine(i915, ctx, e) {
 		if (!spin) {
 			spin = igt_spin_new(i915,
+					    .ahnd = ahnd,
 					    .ctx = ctx,
 					    .dependency = scratch,
 					    .engine = e->flags,
@@ -885,6 +957,7 @@ static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
 	gem_close(i915, obj.handle);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
@@ -894,6 +967,7 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 	struct {
 		igt_spin_t *xcs, *rcs;
 	} task[2];
+	uint64_t ahnd;
 	int i;
 
 	/*
@@ -919,9 +993,11 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 			continue;
 
 		tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
+		ahnd = get_simple_l2h_ahnd(i915, tmp_ctx->id);
 
 		task[i].xcs =
 			__igt_spin_new(i915,
+				       .ahnd = ahnd,
 				       .ctx = tmp_ctx,
 				       .engine = e->flags,
 				       .flags = IGT_SPIN_POLL_RUN | flags);
@@ -930,6 +1006,7 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 		/* Common rcs tasks will be queued in FIFO */
 		task[i].rcs =
 			__igt_spin_new(i915,
+				       .ahnd = ahnd,
 				       .ctx = tmp_ctx,
 				       .engine = 0,
 				       .dependency = task[i].xcs->handle);
@@ -952,8 +1029,10 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 	}
 
 	for (i = 0; i < ARRAY_SIZE(task); i++) {
+		ahnd = task[i].rcs->ahnd;
 		igt_spin_free(i915, task[i].xcs);
 		igt_spin_free(i915, task[i].rcs);
+		put_ahnd(ahnd);
 	}
 }
 
@@ -964,6 +1043,7 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
 	const uint32_t SEMAPHORE_ADDR = 64 << 10;
 	uint32_t semaphore, *sema;
 	const intel_ctx_t *outer, *inner;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * Userspace may submit batches that wait upon unresolved
@@ -994,7 +1074,8 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		spin = __igt_spin_new(i915, .engine = e->flags, .flags = flags);
+		spin = __igt_spin_new(i915, .ahnd = ahnd,
+				      .engine = e->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
 		igt_spin_reset(spin);
@@ -1086,6 +1167,7 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
 
 	intel_ctx_destroy(i915, inner);
 	intel_ctx_destroy(i915, outer);
+	put_ahnd(ahnd);
 }
 
 static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
@@ -1094,10 +1176,12 @@ static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *outer, *inner;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	igt_require(gen >= 6); /* MI_STORE_DWORD_IMM convenience */
 
 	ctx = intel_ctx_create(i915, cfg);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	for_each_ctx_engine(i915, ctx, outer) {
 	for_each_ctx_engine(i915, ctx, inner) {
@@ -1110,10 +1194,10 @@ static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
 		    !gem_class_can_store_dword(i915, inner->class))
 			continue;
 
-		chain = __igt_spin_new(i915, .ctx = ctx,
+		chain = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				       .engine = outer->flags, .flags = flags);
 
-		spin = __igt_spin_new(i915, .ctx = ctx,
+		spin = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				      .engine = inner->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
@@ -1172,6 +1256,7 @@ static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
 	}
 
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
 static void
@@ -1197,6 +1282,7 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin;
 	int fence = -1;
 	uint64_t addr;
+	uint64_t ahnd[2];
 
 	if (flags & CORKED)
 		fence = igt_cork_plug(&cork, i915);
@@ -1205,8 +1291,9 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 		vm_cfg.vm = gem_vm_create(i915);
 
 	ctx = intel_ctx_create(i915, &vm_cfg);
+	ahnd[0] = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx,
 			    .engine = engine,
 			    .fence = fence,
 			    .flags = IGT_SPIN_FENCE_OUT | IGT_SPIN_FENCE_IN);
@@ -1281,7 +1368,9 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 	 * Without timeslices, fallback to waiting a second.
 	 */
 	ctx = intel_ctx_create(i915, &vm_cfg);
+	ahnd[1] = get_reloc_ahnd(i915, ctx->id);
 	slice = igt_spin_new(i915,
+			    .ahnd = ahnd[1],
 			    .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_POLL_RUN);
@@ -1299,6 +1388,8 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(sync_fence_status(spin->out_fence), 0);
 	igt_spin_free(i915, spin);
 	gem_quiescent_gpu(i915);
+	put_ahnd(ahnd[0]);
+	put_ahnd(ahnd[1]);
 }
 
 static void reorder(int fd, const intel_ctx_cfg_t *cfg,
@@ -1310,6 +1401,14 @@ static void reorder(int fd, const intel_ctx_cfg_t *cfg,
 	uint32_t result;
 	const intel_ctx_t *ctx[2];
 	int fence;
+	uint64_t ahnd, scratch_offset;
+
+	/*
+	 * We use reloc ahnd for default context because we're interested
+	 * acquiring distinct offsets only. This saves us typing - otherwise
+	 * we should get scratch_offset for each context separately.
+	 */
+	ahnd = get_reloc_ahnd(fd, 0);
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
@@ -1318,19 +1417,25 @@ static void reorder(int fd, const intel_ctx_cfg_t *cfg,
 	gem_context_set_priority(fd, ctx[HI]->id, flags & EQUAL ? MIN_PRIO : 0);
 
 	scratch = gem_create(fd, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
+
 	fence = igt_cork_plug(&cork, fd);
 
 	/* We expect the high priority context to be executed first, and
 	 * so the final result will be value from the low priority context.
 	 */
-	store_dword_fenced(fd, ctx[LO], ring, scratch, 0, ctx[LO]->id, fence, 0);
-	store_dword_fenced(fd, ctx[HI], ring, scratch, 0, ctx[HI]->id, fence, 0);
+	store_dword_fenced(fd, ahnd, ctx[LO], ring, scratch, scratch_offset,
+			   0, ctx[LO]->id, fence, 0);
+	store_dword_fenced(fd, ahnd, ctx[HI], ring, scratch, scratch_offset,
+			   0, ctx[HI]->id, fence, 0);
 
 	unplug_show_queue(fd, &cork, cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(fd, scratch, 0);
 	gem_close(fd, scratch);
+	put_offset(ahnd, scratch);
+	put_ahnd(ahnd);
 
 	if (flags & EQUAL) /* equal priority, result will be fifo */
 		igt_assert_eq_u32(result, ctx[HI]->id);
@@ -1348,6 +1453,7 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	uint32_t result_read, dep_read;
 	const intel_ctx_t *ctx[3];
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), result_offset, dep_offset;
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
@@ -1359,7 +1465,9 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	gem_context_set_priority(fd, ctx[NOISE]->id, MIN_PRIO/2);
 
 	result = gem_create(fd, 4096);
+	result_offset = get_offset(ahnd, result, 4096, 0);
 	dep = gem_create(fd, 4096);
+	dep_offset = get_offset(ahnd, dep, 4096, 0);
 
 	fence = igt_cork_plug(&cork, fd);
 
@@ -1368,14 +1476,19 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	 * fifo would be NOISE, LO, HI.
 	 * strict priority would be  HI, NOISE, LO
 	 */
-	store_dword_fenced(fd, ctx[NOISE], ring, result, 0, ctx[NOISE]->id, fence, 0);
-	store_dword_fenced(fd, ctx[LO], ring, result, 0, ctx[LO]->id, fence, 0);
+	store_dword_fenced(fd, ahnd, ctx[NOISE], ring, result, result_offset,
+			   0, ctx[NOISE]->id, fence, 0);
+	store_dword_fenced(fd, ahnd, ctx[LO], ring, result, result_offset,
+			   0, ctx[LO]->id, fence, 0);
 
 	/* link LO <-> HI via a dependency on another buffer */
-	store_dword(fd, ctx[LO], ring, dep, 0, ctx[LO]->id, I915_GEM_DOMAIN_INSTRUCTION);
-	store_dword(fd, ctx[HI], ring, dep, 0, ctx[HI]->id, 0);
+	store_dword(fd, ahnd, ctx[LO], ring, dep, dep_offset,
+		    0, ctx[LO]->id, I915_GEM_DOMAIN_INSTRUCTION);
+	store_dword(fd, ahnd, ctx[HI], ring, dep, dep_offset,
+		    0, ctx[HI]->id, 0);
 
-	store_dword(fd, ctx[HI], ring, result, 0, ctx[HI]->id, 0);
+	store_dword(fd, ahnd, ctx[HI], ring, result, result_offset,
+		    0, ctx[HI]->id, 0);
 
 	unplug_show_queue(fd, &cork, cfg, ring);
 	close(fence);
@@ -1385,6 +1498,9 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	result_read = __sync_read_u32(fd, result, 0);
 	gem_close(fd, result);
+	put_offset(ahnd, result);
+	put_offset(ahnd, dep);
+	put_ahnd(ahnd);
 
 	igt_assert_eq_u32(dep_read, ctx[HI]->id);
 	igt_assert_eq_u32(result_read, ctx[NOISE]->id);
@@ -1413,32 +1529,42 @@ static void preempt(int fd, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	const intel_ctx_t *ctx[2];
 	igt_hang_t hang;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	uint64_t ahnd_lo_arr[MAX_ELSP_QLEN], ahnd_lo;
+	uint64_t result_offset = get_offset(ahnd, result, 4096, 0);
 
 	/* Set a fast timeout to speed the test up (if available) */
 	set_preempt_timeout(fd, e, 150);
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+	ahnd_lo = get_reloc_ahnd(fd, ctx[LO]->id);
 
 	ctx[HI] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
 
 	if (flags & HANG_LP)
-		hang = igt_hang_ctx(fd, ctx[LO]->id, e->flags, 0);
+		hang = igt_hang_ctx_with_ahnd(fd, ahnd_lo, ctx[LO]->id, e->flags, 0);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
+		uint64_t currahnd = ahnd_lo;
+
 		if (flags & NEW_CTX) {
 			intel_ctx_destroy(fd, ctx[LO]);
 			ctx[LO] = intel_ctx_create(fd, cfg);
 			gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+			ahnd_lo_arr[n] = get_reloc_ahnd(fd, ctx[LO]->id);
+			currahnd = ahnd_lo_arr[n];
 		}
 		spin[n] = __igt_spin_new(fd,
+					 .ahnd = currahnd,
 					 .ctx = ctx[LO],
 					 .engine = e->flags,
 					 .flags = flags & USERPTR ? IGT_SPIN_USERPTR : 0);
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
 
-		store_dword(fd, ctx[HI], e->flags, result, 0, n + 1, I915_GEM_DOMAIN_RENDER);
+		store_dword(fd, ahnd, ctx[HI], e->flags, result, result_offset,
+			    0, n + 1, I915_GEM_DOMAIN_RENDER);
 
 		result_read = __sync_read_u32(fd, result, 0);
 		igt_assert_eq_u32(result_read, n + 1);
@@ -1453,6 +1579,13 @@ static void preempt(int fd, const intel_ctx_cfg_t *cfg,
 
 	intel_ctx_destroy(fd, ctx[LO]);
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd);
+	put_ahnd(ahnd_lo);
+
+	if (flags & NEW_CTX) {
+		for (int n = 0; n < ARRAY_SIZE(spin); n++)
+			put_ahnd(ahnd_lo_arr[n]);
+	}
 
 	gem_close(fd, result);
 }
@@ -1460,7 +1593,7 @@ static void preempt(int fd, const intel_ctx_cfg_t *cfg,
 #define CHAIN 0x1
 #define CONTEXTS 0x2
 
-static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
+static igt_spin_t *__noise(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
 			   int prio, igt_spin_t *spin)
 {
 	const struct intel_execution_engine2 *e;
@@ -1470,6 +1603,7 @@ static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
 	for_each_ctx_engine(fd, ctx, e) {
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
+					      .ahnd = ahnd,
 					      .ctx = ctx,
 					      .engine = e->flags);
 		} else {
@@ -1487,6 +1621,7 @@ static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
 }
 
 static void __preempt_other(int fd,
+			    uint64_t *ahnd,
 			    const intel_ctx_t **ctx,
 			    unsigned int target, unsigned int primary,
 			    unsigned flags)
@@ -1495,24 +1630,27 @@ static void __preempt_other(int fd,
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read[4096 / sizeof(uint32_t)];
 	unsigned int n, i;
+	uint64_t result_offset_lo = get_offset(ahnd[LO], result, 4096, 0);
+	uint64_t result_offset_hi = get_offset(ahnd[HI], result, 4096, 0);
 
 	n = 0;
-	store_dword(fd, ctx[LO], primary,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	store_dword(fd, ahnd[LO], ctx[LO], primary,
+		    result, result_offset_lo, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 	n++;
 
 	if (flags & CHAIN) {
 		for_each_ctx_engine(fd, ctx[LO], e) {
-			store_dword(fd, ctx[LO], e->flags,
-				    result, (n + 1)*sizeof(uint32_t), n + 1,
+			store_dword(fd, ahnd[LO], ctx[LO], e->flags,
+				    result, result_offset_lo,
+				     (n + 1)*sizeof(uint32_t), n + 1,
 				    I915_GEM_DOMAIN_RENDER);
 			n++;
 		}
 	}
 
-	store_dword(fd, ctx[HI], target,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	store_dword(fd, ahnd[HI], ctx[HI], target,
+		    result, result_offset_hi, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 
 	igt_debugfs_dump(fd, "i915_engine_info");
@@ -1525,6 +1663,8 @@ static void __preempt_other(int fd,
 		igt_assert_eq_u32(result_read[i], i);
 
 	gem_close(fd, result);
+	put_offset(ahnd[LO], result);
+	put_offset(ahnd[HI], result);
 }
 
 static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
@@ -1533,6 +1673,7 @@ static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin = NULL;
 	const intel_ctx_t *ctx[3];
+	uint64_t ahnd[3];
 
 	/* On each engine, insert
 	 * [NOISE] spinner,
@@ -1546,16 +1687,19 @@ static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+	ahnd[LO] = get_reloc_ahnd(fd, ctx[LO]->id);
 
 	ctx[NOISE] = intel_ctx_create(fd, cfg);
-	spin = __noise(fd, ctx[NOISE], 0, NULL);
+	ahnd[NOISE] = get_reloc_ahnd(fd, ctx[NOISE]->id);
+	spin = __noise(fd, ahnd[NOISE], ctx[NOISE], 0, NULL);
 
 	ctx[HI] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
+	ahnd[HI] = get_reloc_ahnd(fd, ctx[HI]->id);
 
 	for_each_ctx_cfg_engine(fd, cfg, e) {
 		igt_debug("Primary engine: %s\n", e->name);
-		__preempt_other(fd, ctx, ring, e->flags, flags);
+		__preempt_other(fd, ahnd, ctx, ring, e->flags, flags);
 
 	}
 
@@ -1565,6 +1709,9 @@ static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
 	intel_ctx_destroy(fd, ctx[LO]);
 	intel_ctx_destroy(fd, ctx[NOISE]);
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd[LO]);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[HI]);
 }
 
 static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
@@ -1574,12 +1721,18 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 	const struct intel_execution_engine2 *e;
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read[4096 / sizeof(uint32_t)];
+	uint64_t result_offset;
 	igt_spin_t *above = NULL, *below = NULL;
 	const intel_ctx_t *ctx[3] = {
 		intel_ctx_create(fd, cfg),
 		intel_ctx_create(fd, cfg),
 		intel_ctx_create(fd, cfg),
 	};
+	uint64_t ahnd[3] = {
+		get_reloc_ahnd(fd, ctx[0]->id),
+		get_reloc_ahnd(fd, ctx[1]->id),
+		get_reloc_ahnd(fd, ctx[2]->id),
+	};
 	int prio = MAX_PRIO;
 	unsigned int n, i;
 
@@ -1588,7 +1741,7 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 			intel_ctx_destroy(fd, ctx[NOISE]);
 			ctx[NOISE] = intel_ctx_create(fd, cfg);
 		}
-		above = __noise(fd, ctx[NOISE], prio--, above);
+		above = __noise(fd, ahnd[NOISE], ctx[NOISE], prio--, above);
 	}
 
 	gem_context_set_priority(fd, ctx[HI]->id, prio--);
@@ -1598,28 +1751,31 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 			intel_ctx_destroy(fd, ctx[NOISE]);
 			ctx[NOISE] = intel_ctx_create(fd, cfg);
 		}
-		below = __noise(fd, ctx[NOISE], prio--, below);
+		below = __noise(fd, ahnd[NOISE], ctx[NOISE], prio--, below);
 	}
 
 	gem_context_set_priority(fd, ctx[LO]->id, prio--);
 
 	n = 0;
-	store_dword(fd, ctx[LO], primary,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	result_offset = get_offset(ahnd[LO], result, 4096, 0);
+	store_dword(fd, ahnd[LO], ctx[LO], primary,
+		    result, result_offset, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 	n++;
 
 	if (flags & CHAIN) {
 		for_each_ctx_engine(fd, ctx[LO], e) {
-			store_dword(fd, ctx[LO], e->flags,
-				    result, (n + 1)*sizeof(uint32_t), n + 1,
+			store_dword(fd, ahnd[LO], ctx[LO], e->flags,
+				    result, result_offset,
+				     (n + 1)*sizeof(uint32_t), n + 1,
 				    I915_GEM_DOMAIN_RENDER);
 			n++;
 		}
 	}
 
-	store_dword(fd, ctx[HI], target,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	result_offset = get_offset(ahnd[HI], result, 4096, 0);
+	store_dword(fd, ahnd[HI], ctx[HI], target,
+		    result, result_offset, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 
 	igt_debugfs_dump(fd, "i915_engine_info");
@@ -1647,6 +1803,11 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 	intel_ctx_destroy(fd, ctx[HI]);
 
 	gem_close(fd, result);
+	put_offset(ahnd[LO], result);
+	put_offset(ahnd[HI], result);
+	put_ahnd(ahnd[LO]);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[HI]);
 }
 
 static void preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
@@ -1679,6 +1840,7 @@ static void preempt_engines(int i915,
 	IGT_LIST_HEAD(plist);
 	igt_spin_t *spin, *sn;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * A quick test that each engine within a context is an independent
@@ -1694,12 +1856,14 @@ static void preempt_engines(int i915,
 		igt_list_add(&pnode[n].link, &plist);
 	}
 	ctx = intel_ctx_create(i915, &cfg);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	for (int n = -(GEM_MAX_ENGINES - 1); n < GEM_MAX_ENGINES; n++) {
 		unsigned int engine = n & I915_EXEC_RING_MASK;
 
 		gem_context_set_priority(i915, ctx->id, n);
-		spin = igt_spin_new(i915, .ctx = ctx, .engine = engine);
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+				   .engine = engine);
 
 		igt_list_move_tail(&spin->link, &pnode[engine].spinners);
 		igt_list_move(&pnode[engine].link, &plist);
@@ -1713,6 +1877,7 @@ static void preempt_engines(int i915,
 		}
 	}
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
 static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
@@ -1724,6 +1889,7 @@ static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	unsigned int n, i;
 	const intel_ctx_t *ctx[3];
+	uint64_t ahnd[3], result_offset;
 
 	/* On each engine, insert
 	 * [NOISE] spinner,
@@ -1735,21 +1901,26 @@ static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
 
 	ctx[NOISE] = intel_ctx_create(fd, cfg);
 	ctx[HI] = intel_ctx_create(fd, cfg);
+	ahnd[NOISE] = get_reloc_ahnd(fd, ctx[NOISE]->id);
+	ahnd[HI] = get_reloc_ahnd(fd, ctx[HI]->id);
+	result_offset = get_offset(ahnd[HI], result, 4096, 0);
 
 	n = 0;
 	gem_context_set_priority(fd, ctx[HI]->id, MIN_PRIO);
 	for_each_ctx_cfg_engine(fd, cfg, e) {
 		spin[n] = __igt_spin_new(fd,
+					 .ahnd = ahnd[NOISE],
 					 .ctx = ctx[NOISE],
 					 .engine = e->flags);
-		store_dword(fd, ctx[HI], e->flags,
-			    result, (n + 1)*sizeof(uint32_t), n + 1,
+		store_dword(fd, ahnd[HI], ctx[HI], e->flags,
+			    result, result_offset,
+			     (n + 1)*sizeof(uint32_t), n + 1,
 			    I915_GEM_DOMAIN_RENDER);
 		n++;
 	}
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
-	store_dword(fd, ctx[HI], ring,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	store_dword(fd, ahnd[HI], ctx[HI], ring,
+		    result, result_offset, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 
 	gem_set_domain(fd, result, I915_GEM_DOMAIN_GTT, 0);
@@ -1769,6 +1940,9 @@ static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
 	intel_ctx_destroy(fd, ctx[HI]);
 
 	gem_close(fd, result);
+	put_offset(ahnd[HI], result);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[HI]);
 }
 
 static void preemptive_hang(int fd, const intel_ctx_cfg_t *cfg,
@@ -1777,25 +1951,29 @@ static void preemptive_hang(int fd, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	igt_hang_t hang;
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd_hi, ahnd_lo;
 
 	/* Set a fast timeout to speed the test up (if available) */
 	set_preempt_timeout(fd, e, 150);
 
 	ctx[HI] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
+	ahnd_hi = get_reloc_ahnd(fd, ctx[HI]->id);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		ctx[LO] = intel_ctx_create(fd, cfg);
 		gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+		ahnd_lo = get_reloc_ahnd(fd, ctx[LO]->id);
 
 		spin[n] = __igt_spin_new(fd,
+					 .ahnd = ahnd_lo,
 					 .ctx = ctx[LO],
 					 .engine = e->flags);
 
 		intel_ctx_destroy(fd, ctx[LO]);
 	}
 
-	hang = igt_hang_ctx(fd, ctx[HI]->id, e->flags, 0);
+	hang = igt_hang_ctx_with_ahnd(fd, ahnd_hi, ctx[HI]->id, e->flags, 0);
 	igt_post_hang_ring(fd, hang);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
@@ -1803,11 +1981,14 @@ static void preemptive_hang(int fd, const intel_ctx_cfg_t *cfg,
 		 * This is subject to change as the scheduler evolve. The test should
 		 * be updated to reflect such changes.
 		 */
+		ahnd_lo = spin[n]->ahnd;
 		igt_assert(gem_bo_busy(fd, spin[n]->handle));
 		igt_spin_free(fd, spin[n]);
+		put_ahnd(ahnd_lo);
 	}
 
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd_hi);
 }
 
 static void deep(int fd, const intel_ctx_cfg_t *cfg,
@@ -1823,6 +2004,8 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 	uint32_t result, dep[XS];
 	uint32_t read_buf[size / sizeof(uint32_t)];
 	uint32_t expected = 0;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	uint64_t result_offset, dep_offset[XS], plug_offset;
 	const intel_ctx_t **ctx;
 	int dep_nreq;
 	int n;
@@ -1838,6 +2021,7 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 	igt_info("Using %d requests (prio range %d)\n", nreq, max_req);
 
 	result = gem_create(fd, size);
+	result_offset = get_offset(ahnd, result, size, 0);
 	for (int m = 0; m < XS; m ++)
 		dep[m] = gem_create(fd, size);
 
@@ -1848,10 +2032,23 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 		const uint32_t bbe = MI_BATCH_BUFFER_END;
 
 		memset(obj, 0, sizeof(obj));
-		for (n = 0; n < XS; n++)
+		for (n = 0; n < XS; n++) {
 			obj[n].handle = dep[n];
+			if (ahnd) {
+				obj[n].offset = get_offset(ahnd, obj[n].handle,
+							   size, 0);
+				dep_offset[n] = obj[n].offset;
+				obj[n].flags |= EXEC_OBJECT_PINNED;
+			}
+		}
 		obj[XS].handle = result;
+		obj[XS].offset = result_offset;
 		obj[XS+1].handle = gem_create(fd, 4096);
+		obj[XS+1].offset = get_offset(ahnd, obj[XS+1].handle, 4096, 0);
+		if (ahnd) {
+			obj[XS].flags |= EXEC_OBJECT_PINNED;
+			obj[XS+1].flags |= EXEC_OBJECT_PINNED;
+		}
 		gem_write(fd, obj[XS+1].handle, 0, &bbe, sizeof(bbe));
 
 		memset(&execbuf, 0, sizeof(execbuf));
@@ -1867,6 +2064,7 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 	}
 
 	plug = igt_cork_plug(&cork, fd);
+	plug_offset = get_offset(ahnd, plug, 4096, 0);
 
 	/* Create a deep dependency chain, with a few branches */
 	for (n = 0; n < nreq && igt_seconds_elapsed(&tv) < 2; n++) {
@@ -1874,7 +2072,10 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 		gem_context_set_priority(fd, context->id, MAX_PRIO - nreq + n);
 
 		for (int m = 0; m < XS; m++)
-			store_dword_plug(fd, context, ring, dep[m], 4*n, context->id, plug, I915_GEM_DOMAIN_INSTRUCTION);
+			store_dword_plug(fd, ahnd, context, ring,
+					 dep[m], dep_offset[m], 4*n,
+					 context->id, plug, plug_offset,
+					 I915_GEM_DOMAIN_INSTRUCTION);
 	}
 	igt_info("First deptree: %d requests [%.3fs]\n",
 		 n * XS, 1e-9*igt_nsec_elapsed(&tv));
@@ -1886,8 +2087,10 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 
 		expected = context->id;
 		for (int m = 0; m < XS; m++) {
-			store_dword_plug(fd, context, ring, result, 4*n, expected, dep[m], 0);
-			store_dword(fd, context, ring, result, 4*m, expected, I915_GEM_DOMAIN_INSTRUCTION);
+			store_dword_plug(fd, ahnd, context, ring, result, result_offset,
+					 4*n, expected, dep[m], dep_offset[m], 0);
+			store_dword(fd, ahnd, context, ring, result, result_offset,
+				    4*m, expected, I915_GEM_DOMAIN_INSTRUCTION);
 		}
 	}
 	igt_info("Second deptree: %d requests [%.3fs]\n",
@@ -1912,8 +2115,13 @@ static void deep(int fd, const intel_ctx_cfg_t *cfg,
 	gem_close(fd, result);
 
 	/* No reordering due to PI on all contexts because of the common dep */
-	for (int m = 0; m < XS; m++)
+	for (int m = 0; m < XS; m++) {
+		put_offset(ahnd, dep[m]);
 		igt_assert_eq_u32(read_buf[m], expected);
+	}
+	put_offset(ahnd, result);
+	put_offset(ahnd, plug);
+	put_ahnd(ahnd);
 
 	free(ctx);
 #undef XS
@@ -1941,12 +2149,14 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	const intel_ctx_t **ctx;
 	unsigned int count;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), result_offset;
 
 	ctx = malloc(sizeof(*ctx)*MAX_CONTEXTS);
 	for (int n = 0; n < MAX_CONTEXTS; n++)
 		ctx[n] = intel_ctx_create(fd, cfg);
 
 	result = gem_create(fd, 4*MAX_CONTEXTS);
+	result_offset = get_offset(ahnd, result, 4 * MAX_CONTEXTS, 0);
 
 	fence = igt_cork_plug(&cork, fd);
 
@@ -1955,7 +2165,8 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	     igt_seconds_elapsed(&tv) < 5 && count < ring_size;
 	     count++) {
 		for (int n = 0; n < MAX_CONTEXTS; n++) {
-			store_dword_fenced(fd, ctx[n], ring, result, 4*n, ctx[n]->id,
+			store_dword_fenced(fd, ahnd, ctx[n], ring,
+					   result, result_offset, 4*n, ctx[n]->id,
 					   fence, I915_GEM_DOMAIN_INSTRUCTION);
 		}
 	}
@@ -1974,6 +2185,8 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	gem_close(fd, result);
 	free(ctx);
+	put_offset(ahnd, result);
+	put_ahnd(ahnd);
 }
 
 static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
@@ -1989,8 +2202,11 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	IGT_CORK_FENCE(cork);
 	uint32_t *expected;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), result_offset;
+	unsigned int sz = ALIGN(ring_size * 64, 4096);
 
 	result = gem_create(fd, 4096);
+	result_offset = get_offset(ahnd, result, 4096, 0);
 	target = gem_create(fd, 4096);
 	fence = igt_cork_plug(&cork, fd);
 
@@ -1999,11 +2215,13 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = result;
+	obj[0].offset = result_offset;
 	obj[1].relocs_ptr = to_user_pointer(&reloc);
-	obj[1].relocation_count = 1;
+	obj[1].relocation_count = !ahnd ? 1 : 0;
 
 	memset(&reloc, 0, sizeof(reloc));
 	reloc.target_handle = result;
+	reloc.presumed_offset = obj[0].offset;
 	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 	reloc.write_domain = 0; /* lies */
 
@@ -2017,8 +2235,12 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	execbuf.flags |= I915_EXEC_FENCE_IN;
 	execbuf.rsvd2 = fence;
 
+	if (ahnd) {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+	}
+
 	for (int n = 0, x = 1; n < ARRAY_SIZE(priorities); n++, x++) {
-		unsigned int sz = ALIGN(ring_size * 64, 4096);
 		uint32_t *batch;
 		const intel_ctx_t *tmp_ctx;
 
@@ -2027,6 +2249,9 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 		execbuf.rsvd1 = tmp_ctx->id;
 
 		obj[1].handle = gem_create(fd, sz);
+		if (ahnd)
+			obj[1].offset = get_offset(ahnd, obj[1].handle, sz, 0);
+
 		batch = gem_mmap__device_coherent(fd, obj[1].handle, 0, sz, PROT_WRITE);
 		gem_set_domain(fd, obj[1].handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 
@@ -2064,6 +2289,7 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 		munmap(batch, sz);
 		gem_close(fd, obj[1].handle);
+		put_offset(ahnd, obj[1].handle);
 		intel_ctx_destroy(fd, tmp_ctx);
 	}
 
@@ -2078,6 +2304,8 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	gem_close(fd, result);
 	gem_close(fd, target);
+	put_offset(ahnd, result);
+	put_ahnd(ahnd);
 }
 
 static void bind_to_cpu(int cpu)
@@ -2268,6 +2496,10 @@ struct ufd_thread {
 	pthread_mutex_t mutex;
 	pthread_cond_t cond;
 	int count;
+
+	uint64_t ahnd;
+	uint64_t batch_offset;
+	uint64_t scratch_offset;
 };
 
 static uint32_t create_userptr(int i915, void *page)
@@ -2419,9 +2651,9 @@ static void *iova_thread(struct ufd_thread *t, int prio)
 	ctx = intel_ctx_create(t->i915, t->cfg);
 	gem_context_set_priority(t->i915, ctx->id, prio);
 
-	store_dword_plug(t->i915, ctx, t->engine,
-			 t->scratch, 0, prio,
-			 t->batch, 0 /* no write hazard! */);
+	store_dword_plug(t->i915, t->ahnd, ctx, t->engine,
+			 t->scratch, t->scratch_offset, 0, prio,
+			 t->batch, t->batch_offset, 0 /* no write hazard! */);
 
 	pthread_mutex_lock(&t->mutex);
 	if (!--t->count)
@@ -2455,6 +2687,7 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	pthread_t hi, lo;
 	char poison[4096];
 	int ufd;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * In this scenario, we have a pair of contending contexts that
@@ -2485,6 +2718,7 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	t.i915 = i915;
 	t.cfg = &ufd_cfg;
 	t.engine = engine;
+	t.ahnd = ahnd;
 
 	t.count = 2;
 	pthread_cond_init(&t.cond, NULL);
@@ -2494,6 +2728,8 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert(t.page != MAP_FAILED);
 	t.batch = create_userptr(i915, t.page);
 	t.scratch = gem_create(i915, 4096);
+	t.batch_offset = get_offset(ahnd, t.batch, 4096, 0);
+	t.scratch_offset = get_offset(ahnd, t.scratch, 4096, 0);
 
 	/* Register our fault handler for t.page */
 	memset(&reg, 0, sizeof(reg));
@@ -2521,7 +2757,7 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	 * the local tasklet will not run until after all signals have been
 	 * delivered... but another tasklet might).
 	 */
-	spin = igt_spin_new(i915, .engine = engine);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .engine = engine);
 	for (int i = 0; i < MAX_ELSP_QLEN; i++) {
 		const intel_ctx_t *ctx = create_highest_priority(i915, cfg);
 		spin->execbuf.rsvd1 = ctx->id;
@@ -2554,6 +2790,9 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	pthread_mutex_unlock(&t.mutex);
 	igt_debugfs_dump(i915, "i915_engine_info");
 	igt_spin_free(i915, spin);
+	put_offset(ahnd, t.scratch);
+	put_offset(ahnd, t.batch);
+	put_ahnd(ahnd);
 
 	pthread_join(hi, NULL);
 	pthread_join(lo, NULL);
@@ -2574,6 +2813,7 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
 {
 	const struct intel_execution_engine2 *signaler, *e;
 	struct rapl gpu, pkg;
+	uint64_t ahnd = get_simple_l2h_ahnd(i915, ctx->id);
 
 	igt_require(gpu_power_open(&gpu) == 0);
 	pkg_power_open(&pkg);
@@ -2584,12 +2824,14 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
 		} s_spin[2], s_sema[2];
 		double baseline, total;
 		int64_t jiffie = 1;
-		igt_spin_t *spin;
+		igt_spin_t *spin, *sema[GEM_MAX_ENGINES] = {};
+		int i;
 
 		if (!gem_class_can_store_dword(i915, signaler->class))
 			continue;
 
 		spin = __igt_spin_new(i915,
+				      .ahnd = ahnd,
 				      .ctx = ctx,
 				      .engine = signaler->flags,
 				      .flags = IGT_SPIN_POLL_RUN);
@@ -2603,19 +2845,23 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
 		rapl_read(&pkg, &s_spin[1].pkg);
 
 		/* Add a waiter to each engine */
+		i = 0;
 		for_each_ctx_engine(i915, ctx, e) {
-			igt_spin_t *sema;
-
-			if (e->flags == signaler->flags)
+			if (e->flags == signaler->flags) {
+				i++;
 				continue;
+			}
 
-			sema = __igt_spin_new(i915,
-					      .ctx = ctx,
-					      .engine = e->flags,
-					      .dependency = spin->handle);
-
-			igt_spin_free(i915, sema);
+			sema[i] = __igt_spin_new(i915,
+						 .ahnd = ahnd,
+						 .ctx = ctx,
+						 .engine = e->flags,
+						 .dependency = spin->handle);
+			i++;
 		}
+		for (i = 0; i < GEM_MAX_ENGINES; i++)
+			if (sema[i])
+				igt_spin_free(i915, sema[i]);
 		usleep(10); /* just give the tasklets a chance to run */
 
 		rapl_read(&pkg, &s_sema[0].pkg);
@@ -2646,6 +2892,7 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
 
 	rapl_close(&gpu);
 	rapl_close(&pkg);
+	put_ahnd(ahnd);
 }
 
 static int read_timestamp_frequency(int i915)
@@ -2703,9 +2950,16 @@ static uint32_t read_ctx_timestamp(int i915, const intel_ctx_t *ctx,
 #define RUNTIME (base + 0x3a8)
 	uint32_t *map, *cs;
 	uint32_t ts;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	igt_require(base);
 
+	if (ahnd) {
+		obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
+		obj.flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		obj.relocation_count = 0;
+	}
+
 	cs = map = gem_mmap__device_coherent(i915, obj.handle,
 					     0, 4096, PROT_WRITE);
 
@@ -2741,11 +2995,14 @@ static void fairslice(int i915, const intel_ctx_cfg_t *cfg,
 	double threshold;
 	const intel_ctx_t *ctx[3];
 	uint32_t ts[3];
+	uint64_t ahnd;
 
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
 		ctx[i] = intel_ctx_create(i915, cfg);
 		if (spin == NULL) {
+			ahnd = get_reloc_ahnd(i915, ctx[i]->id);
 			spin = __igt_spin_new(i915,
+					      .ahnd = ahnd,
 					      .ctx = ctx[i],
 					      .engine = e->flags,
 					      .flags = flags);
@@ -2770,6 +3027,7 @@ static void fairslice(int i915, const intel_ctx_cfg_t *cfg,
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
 		intel_ctx_destroy(i915, ctx[i]);
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 
 	/*
 	 * If we imagine that the timeslices are randomly distributed to
@@ -2879,6 +3137,9 @@ igt_main
 			test_each_engine("u-fairslice", fd, ctx, e)
 				fairslice(fd, &ctx->cfg, e, IGT_SPIN_USERPTR, 2);
 
+			igt_fixture {
+				intel_allocator_multiprocess_start();
+			}
 			igt_subtest("fairslice-all")  {
 				for_each_ctx_engine(fd, ctx, e) {
 					igt_fork(child, 1)
@@ -2895,6 +3156,9 @@ igt_main
 				}
 				igt_waitchildren();
 			}
+			igt_fixture {
+				intel_allocator_multiprocess_stop();
+			}
 		}
 
 		test_each_engine("submit-early-slice", fd, ctx, e)
-- 
2.26.0

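The change applied throughout the hunks above is the same in every subtest: take an allocator handle for the context's vm with get_reloc_ahnd() (it returns zero on gens that still support relocations, hence the if (ahnd) guards and the "relocation_count = !ahnd ? 1 : 0" lines), reserve an address for each GEM object with get_offset(), pin the object there via EXEC_OBJECT_PINNED, and hand everything back with put_offset()/put_ahnd() once the objects are closed. Below is a minimal sketch of that flow, assuming the IGT helpers used in the diff; the function name store_with_ahnd_sketch() and its exact parameters are hypothetical and stand in for the real subtest bodies, they are not code from the patch.

#include "igt.h"	/* assumed to pull in the allocator and ioctl wrappers */

/*
 * Hypothetical helper: a sketch of the no-reloc (softpin) pattern used in
 * the patch, not the patch's own code. One allocator handle per context,
 * one reserved offset per GEM object, objects pinned at those offsets,
 * everything released afterwards.
 */
static void store_with_ahnd_sketch(int fd, uint32_t ctx_id, unsigned int engine)
{
	const uint32_t bbe = MI_BATCH_BUFFER_END;
	struct drm_i915_gem_exec_object2 obj[2] = {};
	struct drm_i915_gem_execbuffer2 execbuf = {};
	/* ahnd == 0 means this gen still supports relocations */
	uint64_t ahnd = get_reloc_ahnd(fd, ctx_id);
	uint32_t scratch = gem_create(fd, 4096);
	uint32_t batch = gem_create(fd, 4096);

	gem_write(fd, batch, 0, &bbe, sizeof(bbe));

	obj[0].handle = scratch;
	obj[1].handle = batch;

	if (ahnd) {
		/* reserve addresses up front instead of relying on relocs */
		obj[0].offset = get_offset(ahnd, scratch, 4096, 0);
		obj[1].offset = get_offset(ahnd, batch, 4096, 0);
		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
		obj[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
	}

	execbuf.buffers_ptr = to_user_pointer(obj);
	execbuf.buffer_count = ARRAY_SIZE(obj);
	execbuf.flags = engine;
	execbuf.rsvd1 = ctx_id;
	gem_execbuf(fd, &execbuf);
	gem_sync(fd, batch);

	gem_close(fd, batch);
	gem_close(fd, scratch);
	/* offsets go back to the allocator before the handle is closed */
	put_offset(ahnd, batch);
	put_offset(ahnd, scratch);
	put_ahnd(ahnd);
}

For the forked fairslice subtests in the final hunk the allocator state has to be visible to the child processes as well, which is why they are bracketed with intel_allocator_multiprocess_start()/stop() fixtures around the igt_fork()/igt_waitchildren() blocks.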
^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [igt-dev] [PATCH i-g-t 5/5] HAX: remove gttfill for tgl ci
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
                   ` (3 preceding siblings ...)
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: " Zbigniew Kempczyński
@ 2021-08-16 11:56 ` Zbigniew Kempczyński
  2021-08-16 14:19 ` [igt-dev] ✓ Fi.CI.BAT: success for Adopt to use allocator (rev3) Patchwork
  2021-08-16 17:22 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  6 siblings, 0 replies; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-16 11:56 UTC (permalink / raw)
  To: igt-dev; +Cc: Zbigniew Kempczyński

---
 tests/intel-ci/fast-feedback.testlist | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
index fa5006d2e..cac694b61 100644
--- a/tests/intel-ci/fast-feedback.testlist
+++ b/tests/intel-ci/fast-feedback.testlist
@@ -22,7 +22,6 @@ igt@gem_exec_fence@basic-busy
 igt@gem_exec_fence@basic-wait
 igt@gem_exec_fence@basic-await
 igt@gem_exec_fence@nb-await
-igt@gem_exec_gttfill@basic
 igt@gem_exec_parallel@engines
 igt@gem_exec_store@basic
 igt@gem_exec_suspend@basic-s0
-- 
2.26.0

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [igt-dev] ✓ Fi.CI.BAT: success for Adopt to use allocator (rev3)
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
                   ` (4 preceding siblings ...)
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 5/5] HAX: remove gttfill for tgl ci Zbigniew Kempczyński
@ 2021-08-16 14:19 ` Patchwork
  2021-08-16 17:22 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  6 siblings, 0 replies; 15+ messages in thread
From: Patchwork @ 2021-08-16 14:19 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 4563 bytes --]

== Series Details ==

Series: Adopt to use allocator (rev3)
URL   : https://patchwork.freedesktop.org/series/93661/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10489 -> IGTPW_6124
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/index.html

Known issues
------------

  Here are the changes found in IGTPW_6124 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@query-info:
    - fi-bsw-kefka:       NOTRUN -> [SKIP][1] ([fdo#109271]) +17 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-bsw-kefka/igt@amdgpu/amd_basic@query-info.html
    - fi-tgl-1115g4:      NOTRUN -> [SKIP][2] ([fdo#109315])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-tgl-1115g4/igt@amdgpu/amd_basic@query-info.html

  * igt@amdgpu/amd_cs_nop@nop-gfx0:
    - fi-tgl-1115g4:      NOTRUN -> [SKIP][3] ([fdo#109315] / [i915#2575]) +16 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-tgl-1115g4/igt@amdgpu/amd_cs_nop@nop-gfx0.html

  * igt@amdgpu/amd_prime@amd-to-i915:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][4] ([fdo#109271])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-kbl-soraka/igt@amdgpu/amd_prime@amd-to-i915.html

  * igt@i915_pm_backlight@basic-brightness:
    - fi-tgl-1115g4:      NOTRUN -> [SKIP][5] ([i915#1155])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-tgl-1115g4/igt@i915_pm_backlight@basic-brightness.html

  * igt@kms_psr@primary_mmap_gtt:
    - fi-tgl-1115g4:      NOTRUN -> [SKIP][6] ([i915#1072]) +3 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-tgl-1115g4/igt@kms_psr@primary_mmap_gtt.html

  * igt@prime_vgem@basic-userptr:
    - fi-tgl-1115g4:      NOTRUN -> [SKIP][7] ([i915#3301])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-tgl-1115g4/igt@prime_vgem@basic-userptr.html

  
#### Possible fixes ####

  * igt@gem_exec_fence@basic-busy@bcs0:
    - {fi-dg1-1}:         [FAIL][8] ([i915#3717]) -> [PASS][9] +19 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/fi-dg1-1/igt@gem_exec_fence@basic-busy@bcs0.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-dg1-1/igt@gem_exec_fence@basic-busy@bcs0.html

  * igt@i915_selftest@live@execlists:
    - fi-bsw-kefka:       [INCOMPLETE][10] ([i915#2940]) -> [PASS][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/fi-bsw-kefka/igt@i915_selftest@live@execlists.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-bsw-kefka/igt@i915_selftest@live@execlists.html

  * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a:
    - fi-tgl-1115g4:      [DMESG-WARN][12] ([i915#1887]) -> [PASS][13]
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/fi-tgl-1115g4/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/fi-tgl-1115g4/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155
  [i915#1887]: https://gitlab.freedesktop.org/drm/intel/issues/1887
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
  [i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
  [i915#3717]: https://gitlab.freedesktop.org/drm/intel/issues/3717


Participating hosts (36 -> 34)
------------------------------

  Missing    (2): fi-bsw-cyan fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_6177 -> IGTPW_6124

  CI-20190529: 20190529
  CI_DRM_10489: a5e502cef015ed88de65a044cf260c8beb63abc8 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_6124: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/index.html
  IGT_6177: f474644e7226dd319195ca03b3cde82ad10ac54c @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/index.html

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [igt-dev] ✓ Fi.CI.IGT: success for Adopt to use allocator (rev3)
  2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
                   ` (5 preceding siblings ...)
  2021-08-16 14:19 ` [igt-dev] ✓ Fi.CI.BAT: success for Adopt to use allocator (rev3) Patchwork
@ 2021-08-16 17:22 ` Patchwork
  6 siblings, 0 replies; 15+ messages in thread
From: Patchwork @ 2021-08-16 17:22 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 30247 bytes --]

== Series Details ==

Series: Adopt to use allocator (rev3)
URL   : https://patchwork.freedesktop.org/series/93661/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10489_full -> IGTPW_6124_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/index.html

Known issues
------------

  Here are the changes found in IGTPW_6124_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_persistence@legacy-engines-mixed:
    - shard-snb:          NOTRUN -> [SKIP][1] ([fdo#109271] / [i915#1099]) +4 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-snb7/igt@gem_ctx_persistence@legacy-engines-mixed.html

  * igt@gem_ctx_sseu@invalid-sseu:
    - shard-tglb:         NOTRUN -> [SKIP][2] ([i915#280])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb6/igt@gem_ctx_sseu@invalid-sseu.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [PASS][3] -> [TIMEOUT][4] ([i915#2369] / [i915#3063] / [i915#3648])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-tglb5/igt@gem_eio@unwedge-stress.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb1/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-apl:          NOTRUN -> [FAIL][5] ([i915#2846])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl8/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglb:         [PASS][6] -> [FAIL][7] ([i915#2842])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-tglb2/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb6/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-kbl:          NOTRUN -> [FAIL][8] ([i915#2842])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl4/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#2190])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl2/igt@gem_huc_copy@huc-copy.html

  * igt@gem_mmap_gtt@cpuset-basic-small-copy-xy:
    - shard-glk:          [PASS][10] -> [FAIL][11] ([i915#1888] / [i915#307] / [i915#3468])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-glk5/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk9/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html

  * igt@gem_pread@exhaustion:
    - shard-glk:          NOTRUN -> [WARN][12] ([i915#2658])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk7/igt@gem_pread@exhaustion.html
    - shard-apl:          NOTRUN -> [WARN][13] ([i915#2658])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl7/igt@gem_pread@exhaustion.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-snb:          NOTRUN -> [WARN][14] ([i915#2658])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-snb2/igt@gem_pwrite@basic-exhaustion.html
    - shard-tglb:         NOTRUN -> [WARN][15] ([i915#2658])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb1/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_render_copy@linear-to-vebox-yf-tiled:
    - shard-iclb:         NOTRUN -> [SKIP][16] ([i915#768]) +1 similar issue
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb6/igt@gem_render_copy@linear-to-vebox-yf-tiled.html

  * igt@gem_userptr_blits@access-control:
    - shard-tglb:         NOTRUN -> [SKIP][17] ([i915#3297]) +1 similar issue
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb2/igt@gem_userptr_blits@access-control.html
    - shard-iclb:         NOTRUN -> [SKIP][18] ([i915#3297]) +1 similar issue
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb6/igt@gem_userptr_blits@access-control.html

  * igt@gem_userptr_blits@input-checking:
    - shard-apl:          NOTRUN -> [DMESG-WARN][19] ([i915#3002])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl6/igt@gem_userptr_blits@input-checking.html

  * igt@gen3_render_tiledy_blits:
    - shard-tglb:         NOTRUN -> [SKIP][20] ([fdo#109289]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb7/igt@gen3_render_tiledy_blits.html
    - shard-iclb:         NOTRUN -> [SKIP][21] ([fdo#109289]) +1 similar issue
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb5/igt@gen3_render_tiledy_blits.html

  * igt@kms_atomic@plane-primary-overlay-mutable-zpos:
    - shard-tglb:         NOTRUN -> [SKIP][22] ([i915#404])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb6/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html
    - shard-iclb:         NOTRUN -> [SKIP][23] ([i915#404])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb8/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip:
    - shard-apl:          NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#3777]) +3 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl2/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-kbl:          NOTRUN -> [SKIP][25] ([fdo#109271] / [i915#3777]) +1 similar issue
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl4/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
    - shard-glk:          NOTRUN -> [SKIP][26] ([fdo#109271] / [i915#3777]) +1 similar issue
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk8/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-180:
    - shard-tglb:         NOTRUN -> [SKIP][27] ([fdo#111615]) +1 similar issue
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb1/igt@kms_big_fb@yf-tiled-8bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-iclb:         NOTRUN -> [SKIP][28] ([fdo#110723])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb5/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3886]) +13 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl1/igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-b-bad-aux-stride-y_tiled_gen12_rc_ccs_cc:
    - shard-glk:          NOTRUN -> [SKIP][30] ([fdo#109271] / [i915#3886]) +4 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk1/igt@kms_ccs@pipe-b-bad-aux-stride-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-b-bad-rotation-90-yf_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][31] ([i915#3689]) +4 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb6/igt@kms_ccs@pipe-b-bad-rotation-90-yf_tiled_ccs.html

  * igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc:
    - shard-kbl:          NOTRUN -> [SKIP][32] ([fdo#109271] / [i915#3886]) +4 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl7/igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_rc_ccs_cc:
    - shard-iclb:         NOTRUN -> [SKIP][33] ([fdo#109278] / [i915#3886]) +3 similar issues
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb2/igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-c-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][34] ([i915#3689] / [i915#3886]) +1 similar issue
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb7/igt@kms_ccs@pipe-c-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs.html

  * igt@kms_cdclk@mode-transition:
    - shard-apl:          NOTRUN -> [SKIP][35] ([fdo#109271]) +290 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl7/igt@kms_cdclk@mode-transition.html

  * igt@kms_cdclk@plane-scaling:
    - shard-iclb:         NOTRUN -> [SKIP][36] ([i915#3742])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb1/igt@kms_cdclk@plane-scaling.html
    - shard-tglb:         NOTRUN -> [SKIP][37] ([i915#3742])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb5/igt@kms_cdclk@plane-scaling.html

  * igt@kms_color_chamelium@pipe-a-ctm-limited-range:
    - shard-apl:          NOTRUN -> [SKIP][38] ([fdo#109271] / [fdo#111827]) +21 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl7/igt@kms_color_chamelium@pipe-a-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-a-gamma:
    - shard-iclb:         NOTRUN -> [SKIP][39] ([fdo#109284] / [fdo#111827]) +3 similar issues
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb1/igt@kms_color_chamelium@pipe-a-gamma.html

  * igt@kms_color_chamelium@pipe-c-ctm-red-to-blue:
    - shard-snb:          NOTRUN -> [SKIP][40] ([fdo#109271] / [fdo#111827]) +22 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-snb7/igt@kms_color_chamelium@pipe-c-ctm-red-to-blue.html

  * igt@kms_color_chamelium@pipe-d-ctm-0-25:
    - shard-glk:          NOTRUN -> [SKIP][41] ([fdo#109271] / [fdo#111827]) +6 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk5/igt@kms_color_chamelium@pipe-d-ctm-0-25.html
    - shard-tglb:         NOTRUN -> [SKIP][42] ([fdo#109284] / [fdo#111827]) +4 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb7/igt@kms_color_chamelium@pipe-d-ctm-0-25.html
    - shard-iclb:         NOTRUN -> [SKIP][43] ([fdo#109278] / [fdo#109284] / [fdo#111827])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb7/igt@kms_color_chamelium@pipe-d-ctm-0-25.html

  * igt@kms_color_chamelium@pipe-invalid-gamma-lut-sizes:
    - shard-kbl:          NOTRUN -> [SKIP][44] ([fdo#109271] / [fdo#111827]) +4 similar issues
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl3/igt@kms_color_chamelium@pipe-invalid-gamma-lut-sizes.html

  * igt@kms_content_protection@uevent:
    - shard-apl:          NOTRUN -> [FAIL][45] ([i915#2105])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl6/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-a-cursor-32x32-random:
    - shard-tglb:         NOTRUN -> [SKIP][46] ([i915#3319])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb8/igt@kms_cursor_crc@pipe-a-cursor-32x32-random.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x170-random:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([fdo#109279] / [i915#3359]) +1 similar issue
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb6/igt@kms_cursor_crc@pipe-a-cursor-512x170-random.html
    - shard-iclb:         NOTRUN -> [SKIP][48] ([fdo#109278] / [fdo#109279]) +1 similar issue
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb5/igt@kms_cursor_crc@pipe-a-cursor-512x170-random.html

  * igt@kms_cursor_crc@pipe-c-cursor-32x10-sliding:
    - shard-tglb:         NOTRUN -> [SKIP][49] ([i915#3359]) +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb5/igt@kms_cursor_crc@pipe-c-cursor-32x10-sliding.html

  * igt@kms_cursor_crc@pipe-d-cursor-64x64-rapid-movement:
    - shard-iclb:         NOTRUN -> [SKIP][50] ([fdo#109278]) +13 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb5/igt@kms_cursor_crc@pipe-d-cursor-64x64-rapid-movement.html

  * igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge:
    - shard-snb:          NOTRUN -> [SKIP][51] ([fdo#109271]) +449 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-snb5/igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-kbl:          [PASS][52] -> [INCOMPLETE][53] ([i915#155] / [i915#180] / [i915#636])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl3/igt@kms_fbcon_fbt@fbc-suspend.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl7/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@2x-wf_vblank-ts-check:
    - shard-iclb:         NOTRUN -> [SKIP][54] ([fdo#109274]) +1 similar issue
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb6/igt@kms_flip@2x-wf_vblank-ts-check.html

  * igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a1:
    - shard-glk:          [PASS][55] -> [FAIL][56] ([i915#79])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-glk4/igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a1.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk8/igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend@b-dp1:
    - shard-apl:          NOTRUN -> [DMESG-WARN][57] ([i915#180])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl3/igt@kms_flip@flip-vs-suspend@b-dp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@a-hdmi-a1:
    - shard-glk:          [PASS][58] -> [FAIL][59] ([i915#2122])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-glk5/igt@kms_flip@plain-flip-fb-recreate-interruptible@a-hdmi-a1.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk7/igt@kms_flip@plain-flip-fb-recreate-interruptible@a-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
    - shard-apl:          NOTRUN -> [SKIP][60] ([fdo#109271] / [i915#2672])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl6/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-render:
    - shard-glk:          [PASS][61] -> [FAIL][62] ([i915#2546])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-glk5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-render.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-onoff:
    - shard-tglb:         NOTRUN -> [SKIP][63] ([fdo#111825]) +20 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-mmap-wc:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([fdo#109280]) +15 similar issues
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb3/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-cpu:
    - shard-glk:          NOTRUN -> [SKIP][65] ([fdo#109271]) +70 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk1/igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][66] ([i915#180])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_pipe_crc_basic@disable-crc-after-crtc-pipe-d:
    - shard-apl:          NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#533]) +1 similar issue
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl2/igt@kms_pipe_crc_basic@disable-crc-after-crtc-pipe-d.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d:
    - shard-kbl:          NOTRUN -> [SKIP][68] ([fdo#109271] / [i915#533]) +1 similar issue
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl6/igt@kms_pipe_crc_basic@read-crc-pipe-d.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - shard-kbl:          [PASS][69] -> [DMESG-WARN][70] ([i915#180]) +6 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-max:
    - shard-glk:          NOTRUN -> [FAIL][71] ([fdo#108145] / [i915#265]) +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk9/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-max.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-basic:
    - shard-apl:          NOTRUN -> [FAIL][72] ([fdo#108145] / [i915#265]) +1 similar issue
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl6/igt@kms_plane_alpha_blend@pipe-c-alpha-basic.html

  * igt@kms_plane_lowres@pipe-a-tiling-x:
    - shard-iclb:         NOTRUN -> [SKIP][73] ([i915#3536])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb1/igt@kms_plane_lowres@pipe-a-tiling-x.html
    - shard-tglb:         NOTRUN -> [SKIP][74] ([i915#3536])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb3/igt@kms_plane_lowres@pipe-a-tiling-x.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
    - shard-apl:          NOTRUN -> [SKIP][75] ([fdo#109271] / [i915#2733])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl7/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html
    - shard-kbl:          NOTRUN -> [SKIP][76] ([fdo#109271] / [i915#2733])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl3/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html
    - shard-glk:          NOTRUN -> [SKIP][77] ([fdo#109271] / [i915#2733])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk9/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html

  * igt@kms_psr2_sf@cursor-plane-update-sf:
    - shard-tglb:         NOTRUN -> [SKIP][78] ([i915#2920]) +2 similar issues
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb8/igt@kms_psr2_sf@cursor-plane-update-sf.html
    - shard-iclb:         NOTRUN -> [SKIP][79] ([i915#2920])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb2/igt@kms_psr2_sf@cursor-plane-update-sf.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
    - shard-apl:          NOTRUN -> [SKIP][80] ([fdo#109271] / [i915#658]) +3 similar issues
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl7/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
    - shard-iclb:         NOTRUN -> [SKIP][81] ([i915#658]) +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb5/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
    - shard-glk:          NOTRUN -> [SKIP][82] ([fdo#109271] / [i915#658]) +1 similar issue
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk6/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
    - shard-kbl:          NOTRUN -> [SKIP][83] ([fdo#109271] / [i915#658]) +2 similar issues
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl3/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [PASS][84] -> [SKIP][85] ([fdo#109441]) +1 similar issue
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb2/igt@kms_psr@psr2_no_drrs.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb8/igt@kms_psr@psr2_no_drrs.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         NOTRUN -> [SKIP][86] ([fdo#109441])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb7/igt@kms_psr@psr2_primary_mmap_cpu.html
    - shard-tglb:         NOTRUN -> [FAIL][87] ([i915#132] / [i915#3467])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb7/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_sysfs_edid_timing:
    - shard-apl:          NOTRUN -> [FAIL][88] ([IGT#2])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl1/igt@kms_sysfs_edid_timing.html

  * igt@kms_universal_plane@disable-primary-vs-flip-pipe-d:
    - shard-kbl:          NOTRUN -> [SKIP][89] ([fdo#109271]) +79 similar issues
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl7/igt@kms_universal_plane@disable-primary-vs-flip-pipe-d.html

  * igt@kms_vrr@flipline:
    - shard-tglb:         NOTRUN -> [SKIP][90] ([fdo#109502])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb7/igt@kms_vrr@flipline.html
    - shard-iclb:         NOTRUN -> [SKIP][91] ([fdo#109502])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb5/igt@kms_vrr@flipline.html

  * igt@nouveau_crc@pipe-a-ctx-flip-skip-current-frame:
    - shard-tglb:         NOTRUN -> [SKIP][92] ([i915#2530]) +1 similar issue
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb8/igt@nouveau_crc@pipe-a-ctx-flip-skip-current-frame.html
    - shard-iclb:         NOTRUN -> [SKIP][93] ([i915#2530])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb4/igt@nouveau_crc@pipe-a-ctx-flip-skip-current-frame.html

  * igt@nouveau_crc@pipe-d-ctx-flip-skip-current-frame:
    - shard-iclb:         NOTRUN -> [SKIP][94] ([fdo#109278] / [i915#2530])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb4/igt@nouveau_crc@pipe-d-ctx-flip-skip-current-frame.html

  * igt@prime_nv_pcopy@test1_macro:
    - shard-tglb:         NOTRUN -> [SKIP][95] ([fdo#109291])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb6/igt@prime_nv_pcopy@test1_macro.html
    - shard-iclb:         NOTRUN -> [SKIP][96] ([fdo#109291])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb2/igt@prime_nv_pcopy@test1_macro.html

  * igt@sysfs_clients@fair-7:
    - shard-iclb:         NOTRUN -> [SKIP][97] ([i915#2994]) +1 similar issue
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb2/igt@sysfs_clients@fair-7.html
    - shard-tglb:         NOTRUN -> [SKIP][98] ([i915#2994]) +1 similar issue
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb8/igt@sysfs_clients@fair-7.html

  * igt@sysfs_clients@sema-50:
    - shard-kbl:          NOTRUN -> [SKIP][99] ([fdo#109271] / [i915#2994]) +2 similar issues
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl6/igt@sysfs_clients@sema-50.html
    - shard-apl:          NOTRUN -> [SKIP][100] ([fdo#109271] / [i915#2994])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-apl8/igt@sysfs_clients@sema-50.html
    - shard-glk:          NOTRUN -> [SKIP][101] ([fdo#109271] / [i915#2994]) +1 similar issue
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk8/igt@sysfs_clients@sema-50.html

  
#### Possible fixes ####

  * igt@feature_discovery@psr2:
    - shard-iclb:         [SKIP][102] ([i915#658]) -> [PASS][103]
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb8/igt@feature_discovery@psr2.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb2/igt@feature_discovery@psr2.html

  * igt@gem_eio@in-flight-contexts-10ms:
    - shard-tglb:         [TIMEOUT][104] ([i915#3063]) -> [PASS][105]
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-tglb3/igt@gem_eio@in-flight-contexts-10ms.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb7/igt@gem_eio@in-flight-contexts-10ms.html

  * igt@gem_eio@unwedge-stress:
    - shard-iclb:         [TIMEOUT][106] ([i915#2369] / [i915#2481] / [i915#3070]) -> [PASS][107]
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb8/igt@gem_eio@unwedge-stress.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb1/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-kbl:          [FAIL][108] ([i915#2846]) -> [PASS][109]
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl6/igt@gem_exec_fair@basic-deadline.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl6/igt@gem_exec_fair@basic-deadline.html
    - shard-glk:          [FAIL][110] ([i915#2846]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-glk1/igt@gem_exec_fair@basic-deadline.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk8/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-kbl:          [FAIL][112] ([i915#2842]) -> [PASS][113] +2 similar issues
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl3/igt@gem_exec_fair@basic-none@vcs0.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl4/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-iclb:         [FAIL][114] ([i915#2842]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb8/igt@gem_exec_fair@basic-pace-solo@rcs0.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb6/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [FAIL][116] ([i915#454]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb6/igt@i915_pm_dc@dc6-psr.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb1/igt@i915_pm_dc@dc6-psr.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2:
    - shard-glk:          [FAIL][118] ([i915#2122]) -> [PASS][119] +1 similar issue
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-glk3/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-glk7/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-kbl:          [DMESG-WARN][120] ([i915#180]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl1/igt@kms_frontbuffer_tracking@fbc-suspend.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl4/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_hdmi_inject@inject-audio:
    - shard-tglb:         [SKIP][122] ([i915#433]) -> [PASS][123]
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-tglb8/igt@kms_hdmi_inject@inject-audio.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-tglb5/igt@kms_hdmi_inject@inject-audio.html

  * igt@kms_psr@psr2_primary_mmap_gtt:
    - shard-iclb:         [SKIP][124] ([fdo#109441]) -> [PASS][125] +1 similar issue
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb3/igt@kms_psr@psr2_primary_mmap_gtt.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb2/igt@kms_psr@psr2_primary_mmap_gtt.html

  
#### Warnings ####

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-iclb:         [FAIL][126] ([i915#2842]) -> [FAIL][127] ([i915#2849])
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb1/igt@gem_exec_fair@basic-throttle@rcs0.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb3/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@i915_suspend@forcewake:
    - shard-kbl:          [INCOMPLETE][128] ([i915#155] / [i915#636]) -> [DMESG-WARN][129] ([i915#180])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl3/igt@i915_suspend@forcewake.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@i915_suspend@forcewake.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-4:
    - shard-iclb:         [SKIP][130] ([i915#2920]) -> [SKIP][131] ([i915#658])
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-4.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-iclb6/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-4.html

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][132], [FAIL][133], [FAIL][134], [FAIL][135]) ([fdo#109271] / [i915#180] / [i915#1814] / [i915#3002] / [i915#3363]) -> ([FAIL][136], [FAIL][137], [FAIL][138], [FAIL][139], [FAIL][140], [FAIL][141], [FAIL][142], [FAIL][143]) ([i915#1436] / [i915#180] / [i915#1814] / [i915#3002] / [i915#3363] / [i915#602] / [i915#92])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl1/igt@runner@aborted.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl2/igt@runner@aborted.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl7/igt@runner@aborted.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10489/shard-kbl7/igt@runner@aborted.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@runner@aborted.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@runner@aborted.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl1/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/shard-kbl7/igt@runner@aborted.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6124/index.html

[-- Attachment #2: Type: text/html, Size: 34313 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call Zbigniew Kempczyński
@ 2021-08-17  0:25   ` Dixit, Ashutosh
  0 siblings, 0 replies; 15+ messages in thread
From: Dixit, Ashutosh @ 2021-08-17  0:25 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 16 Aug 2021 04:56:22 -0700, Zbigniew Kempczyński wrote:
>
> I've incidentally missed this during review and line calling
> intel_allocator_multiprocess_stop() left before merge.
>
> Remove this as source of confusion - for igt_fork() we can use
> standalone allocator within child for some cases (reopen driver
> or work within new created context).

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of pressumed offset out of no-reloc scope
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of pressumed offset out of no-reloc scope Zbigniew Kempczyński
@ 2021-08-17  0:31   ` Dixit, Ashutosh
  0 siblings, 0 replies; 15+ messages in thread
From: Dixit, Ashutosh @ 2021-08-17  0:31 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Mon, 16 Aug 2021 04:56:23 -0700, Zbigniew Kempczyński wrote:
>
> Missed while addressing the last review - we don't want to perform the check
> of the presumed offset on the no-reloc path. Move it out of the no-reloc scope.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: Adopt to use allocator
  2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: " Zbigniew Kempczyński
@ 2021-08-17  2:20   ` Dixit, Ashutosh
  2021-08-17  4:11     ` Dixit, Ashutosh
  2021-08-17  5:18     ` Zbigniew Kempczyński
  0 siblings, 2 replies; 15+ messages in thread
From: Dixit, Ashutosh @ 2021-08-17  2:20 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 16 Aug 2021 04:56:25 -0700, Zbigniew Kempczyński wrote:
>
> @@ -117,12 +119,23 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
>
> +	if (ahnd) {
> +		obj[0].offset = cork_offset;
> +		obj[0].flags |= EXEC_OBJECT_PINNED;
> +		obj[1].offset = target_offset;
> +		obj[1].flags |= EXEC_OBJECT_PINNED;
> +		if (write_domain)
> +			obj[1].flags |= EXEC_OBJECT_WRITE;
> +		obj[2].offset = get_offset(ahnd, obj[2].handle, 4096, 0);
> +		obj[2].flags |= EXEC_OBJECT_PINNED;
> +	} else {
> +		obj[0].offset = cork << 20;
> +		obj[1].offset = target << 20;
> +		obj[2].offset = 256 << 10;
> +		obj[2].offset += (random() % 128) << 12;
> +	}
>
>	memset(&reloc, 0, sizeof(reloc));
>	reloc.target_handle = obj[1].handle;
> @@ -132,13 +145,13 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
>	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
>	reloc.write_domain = write_domain;
>	obj[2].relocs_ptr = to_user_pointer(&reloc);
> -	obj[2].relocation_count = 1;
> +	obj[2].relocation_count = !ahnd ? 1 : 0;
>
>	i = 0;
>	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
>	if (gen >= 8) {
>		batch[++i] = reloc.presumed_offset + reloc.delta;
> -		batch[++i] = 0;
> +		batch[++i] = (reloc.presumed_offset + reloc.delta) >> 32;
>	} else if (gen >= 4) {
>		batch[++i] = 0;
>		batch[++i] = reloc.presumed_offset + reloc.delta;
> @@ -155,31 +168,38 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,

I think we need this here (or in the callers):

	if (ahnd)
		put_offset(ahnd, obj[2].offset);

> @@ -2584,12 +2824,14 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
>		} s_spin[2], s_sema[2];
>		double baseline, total;
>		int64_t jiffie = 1;
> -		igt_spin_t *spin;
> +		igt_spin_t *spin, *sema[GEM_MAX_ENGINES] = {};
> +		int i;
>
>		if (!gem_class_can_store_dword(i915, signaler->class))
>			continue;
>
>		spin = __igt_spin_new(i915,
> +				      .ahnd = ahnd,
>				      .ctx = ctx,
>				      .engine = signaler->flags,
>				      .flags = IGT_SPIN_POLL_RUN);
> @@ -2603,19 +2845,23 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
>		rapl_read(&pkg, &s_spin[1].pkg);
>
>		/* Add a waiter to each engine */
> +		i = 0;
>		for_each_ctx_engine(i915, ctx, e) {
> -			igt_spin_t *sema;
> -
> -			if (e->flags == signaler->flags)
> +			if (e->flags == signaler->flags) {
> +				i++;
>				continue;
> +			}
>
> -			sema = __igt_spin_new(i915,
> -					      .ctx = ctx,
> -					      .engine = e->flags,
> -					      .dependency = spin->handle);
> -
> -			igt_spin_free(i915, sema);
> +			sema[i] = __igt_spin_new(i915,
> +						 .ahnd = ahnd,
> +						 .ctx = ctx,
> +						 .engine = e->flags,
> +						 .dependency = spin->handle);
> +			i++;
>		}
> +		for (i = 0; i < GEM_MAX_ENGINES; i++)
> +			if (sema[i])
> +				igt_spin_free(i915, sema[i]);

Did we create this array etc. to avoid the stall when the spin is freed and
the offset is reused? Or is there a different reason?

Otherwise this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: Adopt to use allocator
  2021-08-17  2:20   ` Dixit, Ashutosh
@ 2021-08-17  4:11     ` Dixit, Ashutosh
  2021-08-17  5:21       ` Zbigniew Kempczyński
  2021-08-17  5:18     ` Zbigniew Kempczyński
  1 sibling, 1 reply; 15+ messages in thread
From: Dixit, Ashutosh @ 2021-08-17  4:11 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Mon, 16 Aug 2021 19:20:01 -0700, Dixit, Ashutosh wrote:
>
> > +		for (i = 0; i < GEM_MAX_ENGINES; i++)
> > +			if (sema[i])
> > +				igt_spin_free(i915, sema[i]);
>
> Did we create this array etc. to avoid the stall when the spin is freed and
> the offset is reused? Or is there a different reason?

A one line comment about why this was done would be nice prior to merging.

>
> Otherwise this is:
>
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: Adopt to use allocator
  2021-08-17  2:20   ` Dixit, Ashutosh
  2021-08-17  4:11     ` Dixit, Ashutosh
@ 2021-08-17  5:18     ` Zbigniew Kempczyński
  1 sibling, 0 replies; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-17  5:18 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Mon, Aug 16, 2021 at 07:20:01PM -0700, Dixit, Ashutosh wrote:
> On Mon, 16 Aug 2021 04:56:25 -0700, Zbigniew Kempczyński wrote:
> >
> > @@ -117,12 +119,23 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
> >
> > +	if (ahnd) {
> > +		obj[0].offset = cork_offset;
> > +		obj[0].flags |= EXEC_OBJECT_PINNED;
> > +		obj[1].offset = target_offset;
> > +		obj[1].flags |= EXEC_OBJECT_PINNED;
> > +		if (write_domain)
> > +			obj[1].flags |= EXEC_OBJECT_WRITE;
> > +		obj[2].offset = get_offset(ahnd, obj[2].handle, 4096, 0);
> > +		obj[2].flags |= EXEC_OBJECT_PINNED;
> > +	} else {
> > +		obj[0].offset = cork << 20;
> > +		obj[1].offset = target << 20;
> > +		obj[2].offset = 256 << 10;
> > +		obj[2].offset += (random() % 128) << 12;
> > +	}
> >
> >	memset(&reloc, 0, sizeof(reloc));
> >	reloc.target_handle = obj[1].handle;
> > @@ -132,13 +145,13 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
> >	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
> >	reloc.write_domain = write_domain;
> >	obj[2].relocs_ptr = to_user_pointer(&reloc);
> > -	obj[2].relocation_count = 1;
> > +	obj[2].relocation_count = !ahnd ? 1 : 0;
> >
> >	i = 0;
> >	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
> >	if (gen >= 8) {
> >		batch[++i] = reloc.presumed_offset + reloc.delta;
> > -		batch[++i] = 0;
> > +		batch[++i] = (reloc.presumed_offset + reloc.delta) >> 32;
> >	} else if (gen >= 4) {
> >		batch[++i] = 0;
> >		batch[++i] = reloc.presumed_offset + reloc.delta;
> > @@ -155,31 +168,38 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
> 
> I think we need this here (or in the callers):
> 
> 	if (ahnd)
> 		put_offset(ahnd, obj[2].offset);
>

Yes, we need this in the callers (store_dword(), store_dword_plug(),
store_dword_fenced()), especially since I would like to switch to the Simple
allocator in the future. I'm going to capture the gem handle there and free
the offset there.
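
To be concrete, a minimal sketch of that caller-side pattern (the function
name, sizes and the elided execbuf are illustrative only, not the actual
patch; get_offset()/put_offset() are used as in the hunks above):

#include "igt.h"

/* Sketch only: pin the object at an allocator offset and release that
 * offset by gem handle once the caller is done with the object. */
static void caller_cleanup_sketch(int i915, uint64_t ahnd)
{
	struct drm_i915_gem_exec_object2 obj = {
		.handle = gem_create(i915, 4096),
	};

	if (ahnd) {
		obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
		obj.flags |= EXEC_OBJECT_PINNED;
	}

	/* ... build the batch and submit it with gem_execbuf() ... */

	if (ahnd)
		put_offset(ahnd, obj.handle);
	gem_close(i915, obj.handle);
}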
 
> > @@ -2584,12 +2824,14 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
> >		} s_spin[2], s_sema[2];
> >		double baseline, total;
> >		int64_t jiffie = 1;
> > -		igt_spin_t *spin;
> > +		igt_spin_t *spin, *sema[GEM_MAX_ENGINES] = {};
> > +		int i;
> >
> >		if (!gem_class_can_store_dword(i915, signaler->class))
> >			continue;
> >
> >		spin = __igt_spin_new(i915,
> > +				      .ahnd = ahnd,
> >				      .ctx = ctx,
> >				      .engine = signaler->flags,
> >				      .flags = IGT_SPIN_POLL_RUN);
> > @@ -2603,19 +2845,23 @@ static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
> >		rapl_read(&pkg, &s_spin[1].pkg);
> >
> >		/* Add a waiter to each engine */
> > +		i = 0;
> >		for_each_ctx_engine(i915, ctx, e) {
> > -			igt_spin_t *sema;
> > -
> > -			if (e->flags == signaler->flags)
> > +			if (e->flags == signaler->flags) {
> > +				i++;
> >				continue;
> > +			}
> >
> > -			sema = __igt_spin_new(i915,
> > -					      .ctx = ctx,
> > -					      .engine = e->flags,
> > -					      .dependency = spin->handle);
> > -
> > -			igt_spin_free(i915, sema);
> > +			sema[i] = __igt_spin_new(i915,
> > +						 .ahnd = ahnd,
> > +						 .ctx = ctx,
> > +						 .engine = e->flags,
> > +						 .dependency = spin->handle);
> > +			i++;
> >		}
> > +		for (i = 0; i < GEM_MAX_ENGINES; i++)
> > +			if (sema[i])
> > +				igt_spin_free(i915, sema[i]);
> 
> Did we create this array etc. to avoid the stall when the spin is freed and
> the offset is reused? Or is there a different reason?
> 
> Otherwise this is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

Yes, we need the same offset for .dependency = spin->handle in each sema,
so I couldn't use Reloc in this case. But with Simple I got the same
batchbuffer offset for each sema, which is not what we want. Each spin
batchbuffer has to occupy a different offset, so we have to defer freeing
the spinners.
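
A sketch of how I'd annotate that deferred-free loop to make the reason
explicit (the wording is only a suggestion, the loop itself is the hunk
quoted above):

	/*
	 * Defer freeing the sema spinners: every spin batch has to keep its
	 * own distinct offset while the signaler is still running; freeing a
	 * spinner right away would let the allocator hand the same offset to
	 * the next one.
	 */
	for (i = 0; i < GEM_MAX_ENGINES; i++)
		if (sema[i])
			igt_spin_free(i915, sema[i]);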

I'm going to implement the shifting-offset strategy you've proposed in
Simple, but I'm a little afraid of doing it now. Once all tests are ready
for no-reloc I'll change the Simple implementation, and then the
reloc->simple switch in each test will show whether it introduces any
regressions.
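
To sketch roughly how I picture that shifting strategy (purely conceptual -
the names and structure below are hypothetical, not the Simple allocator
code, and it glosses over tracking of busy ranges):

	#include <stdint.h>

	struct shift_alloc {
		uint64_t cursor;	/* next candidate offset */
		uint64_t start, end;	/* managed VM range */
	};

	static uint64_t shift_alloc_get(struct shift_alloc *s,
					uint64_t size, uint64_t align)
	{
		uint64_t offset;

		align = align ?: 4096;
		offset = (s->cursor + align - 1) & ~(align - 1);
		if (offset + size > s->end)
			offset = (s->start + align - 1) & ~(align - 1);

		/*
		 * Advance the cursor past this block so an offset freed a
		 * moment ago is not handed straight back to the next caller.
		 */
		s->cursor = offset + size;

		return offset;
	}

The point is only the reuse policy: a moving cursor makes consecutive
allocations land at different offsets even when something was just freed.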

Thanks for review and r-b.

--
Zbigniew

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: Adopt to use allocator
  2021-08-17  4:11     ` Dixit, Ashutosh
@ 2021-08-17  5:21       ` Zbigniew Kempczyński
  0 siblings, 0 replies; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-17  5:21 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Mon, Aug 16, 2021 at 09:11:29PM -0700, Dixit, Ashutosh wrote:
> On Mon, 16 Aug 2021 19:20:01 -0700, Dixit, Ashutosh wrote:
> >
> > > +		for (i = 0; i < GEM_MAX_ENGINES; i++)
> > > +			if (sema[i])
> > > +				igt_spin_free(i915, sema[i]);
> >
> > Did we create this array etc. to avoid the stall when the spin is freed and
> > the offset is reused? Or is there a different reason?
> 
> A one line comment about why this was done would be nice prior to merging.

Ok, I'm going to add this before merge.

--
Zbigniew

> 
> >
> > Otherwise this is:
> >
> > Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of pressumed offset out of no-reloc scope
  2021-08-17  6:31 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-17  6:31 ` Zbigniew Kempczyński
  0 siblings, 0 replies; 15+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-17  6:31 UTC (permalink / raw)
  To: igt-dev

Missed while addressing the last review - we don't want to perform the check
of the presumed offset on the no-reloc path. Move it out of the no-reloc
scope.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_big.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_exec_big.c b/tests/i915/gem_exec_big.c
index 90230fc33..2f47de398 100644
--- a/tests/i915/gem_exec_big.c
+++ b/tests/i915/gem_exec_big.c
@@ -98,12 +98,13 @@ static void exec1(int fd, uint32_t handle, uint64_t reloc_ofs, unsigned flags, c
 
 	gem_execbuf(fd, &execbuf);
 
-	igt_warn_on(gem_reloc[0].presumed_offset == -1);
 	gem_set_domain(fd, gem_exec[0].handle, I915_GEM_DOMAIN_WC, 0);
 
 	if (!has_relocs)
 		return;
 
+	igt_warn_on(gem_reloc[0].presumed_offset == -1);
+
 	if (use_64bit_relocs) {
 		uint64_t tmp;
 		if (ptr)
-- 
2.26.0

^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-08-17  6:31 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-16 11:56 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 1/5] tests/gem_exec_capture: Remove unnecessary multiprocess stop() call Zbigniew Kempczyński
2021-08-17  0:25   ` Dixit, Ashutosh
2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of pressumed offset out of no-reloc scope Zbigniew Kempczyński
2021-08-17  0:31   ` Dixit, Ashutosh
2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 3/5] tests/gem_exec_fence: Adopt to use allocator Zbigniew Kempczyński
2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 4/5] tests/gem_exec_schedule: " Zbigniew Kempczyński
2021-08-17  2:20   ` Dixit, Ashutosh
2021-08-17  4:11     ` Dixit, Ashutosh
2021-08-17  5:21       ` Zbigniew Kempczyński
2021-08-17  5:18     ` Zbigniew Kempczyński
2021-08-16 11:56 ` [igt-dev] [PATCH i-g-t 5/5] HAX: remove gttfill for tgl ci Zbigniew Kempczyński
2021-08-16 14:19 ` [igt-dev] ✓ Fi.CI.BAT: success for Adopt to use allocator (rev3) Patchwork
2021-08-16 17:22 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
2021-08-17  6:31 [igt-dev] [PATCH i-g-t 0/5] Adopt to use allocator Zbigniew Kempczyński
2021-08-17  6:31 ` [igt-dev] [PATCH i-g-t 2/5] tests/gem_exec_big: Move check of pressumed offset out of no-reloc scope Zbigniew Kempczyński
