* [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT
@ 2021-07-26 19:59 Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner Zbigniew Kempczyński
                   ` (53 more replies)
  0 siblings, 54 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

This is the first batch, which should decrease the gap on platforms
that have been switched to softpin-only mode (RKL, ADL and newer).

Be aware that current drm-tip contains a core-for-CI patch which
temporarily re-enables relocations for ADL, so the CI run may not
expose bugs in the series code. I'm going to disable relocations for
ADL in a separate patch for the intel-gfx trybot to verify the results.

v2: - add NO_RELOC in intel_batchbuffer blitting functions
    - switch back to gem_mmap__cpu() in gem_tiled_fence_blits.c

v3: - fix gem_spin_batch ctx fail (occurs in rebase on top of new
      intel_ctx_t)

Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>

Andrzej Turko (5):
  tests/gem_exec_big: Require relocation support
  tests/gem_exec_capture: Support gens without relocations
  tests/gem_exec_gttfill: Require relocation support
  tests/gem_exec_store: Support gens without relocations
  tests/gem_softpin: Exercise eviction with softpinning

Bhanuprakash Modem (3):
  lib/intel_batchbuffer: Add allocator support in blitter fast copy
  tests/kms_flip: Adopt to use allocator
  tests/kms_vblank: Adopt to use allocator

Ch Sai Gowtham (1):
  tests/gem_mmap_gtt: Add allocator support

Sai Gowtham (4):
  tests/gem_exec_params: Support gens without relocations
  tests/gem_mmap: Add allocator support
  tests/gem_mmap_offset: Add allocator support
  tests/gem_request_retire: Add allocator support

Zbigniew Kempczyński (39):
  lib/igt_dummyload: Add support of using allocator in igt spinner
  lib/intel_allocator: Add few helper functions for common use
  lib/igt_gt: Add passing ahnd as an argument to igt_hang
  lib/intel_batchbuffer: Ensure relocation code will be called
  lib/intel_batchbuffer: Add allocator support in blitter src copy
  lib/intel_batchbuffer: Try to avoid relocations in blitting
  lib/huc_copy: Extend huc copy prototype to pass allocator handle
  tests/gem_bad_reloc: Skip on gens where relocations are not supported
  tests/gem_busy: Adopt to use allocator
  tests/gem_create: Adopt to use allocator
  tests/gem_ctx_engines: Adopt to use allocator
  tests/gem_ctx_exec: Adopt to use allocator
  tests/gem_ctx_freq: Adopt to use allocator
  tests/gem_ctx_isolation: Adopt to use allocator
  tests/gem_ctx_param: Adopt to use allocator
  tests/gem_eio: Adopt to use allocator
  tests/gem_exec_async: Adopt to use allocator
  tests/gem_exec_suspend: Adopt to use allocator
  tests/gem_exec_parallel: Adopt to use allocator
  tests/gem_mmap_wc: Adopt to use allocator
  tests/gem_ringfill: Adopt to use allocator
  tests/gem_spin_batch: Adopt to use allocator
  tests/gem_tiled_fence_blits: Adopt to use allocator
  tests/gem_unfence_active_buffers: Adopt to use allocator
  tests/gem_unref_active_buffers: Adopt to use allocator
  tests/gem_wait: Adopt to use allocator
  tests/gem_watchdog: Adopt to use no-reloc
  tests/gem_workarounds: Adopt to use allocator
  tests/i915_hangman: Adopt to use allocator
  tests/i915_module_load: Adopt to use allocator
  tests/i915_pm_rc6_residency: Adopt to use allocator
  tests/i915_pm_rpm: Adopt to use no-reloc
  tests/i915_pm_rps: Alter to use no-reloc
  tests/kms_busy: Adopt to use allocator
  tests/kms_cursor_legacy: Adopt to use allocator
  tests/perf_pmu: Adopt to use allocator
  tests/sysfs_heartbeat_interval: Adopt to use allocator
  tests/sysfs_preempt_timeout: Adopt to use allocator
  tests/sysfs_timeslice_duration: Adopt to use allocator

 lib/huc_copy.c                          |  27 ++-
 lib/huc_copy.h                          |   4 +-
 lib/igt_dummyload.c                     |  74 ++++++--
 lib/igt_dummyload.h                     |   3 +
 lib/igt_fb.c                            |  22 ++-
 lib/igt_gt.c                            |  21 ++-
 lib/igt_gt.h                            |   4 +
 lib/intel_allocator.h                   |  55 ++++++
 lib/intel_batchbuffer.c                 | 118 +++++++++----
 lib/intel_batchbuffer.h                 |  18 +-
 tests/i915/gem_bad_reloc.c              |   1 +
 tests/i915/gem_busy.c                   |  35 +++-
 tests/i915/gem_create.c                 |  11 +-
 tests/i915/gem_ctx_engines.c            |  25 ++-
 tests/i915/gem_ctx_exec.c               |  45 ++++-
 tests/i915/gem_ctx_freq.c               |   4 +-
 tests/i915/gem_ctx_isolation.c          |  97 ++++++++---
 tests/i915/gem_ctx_param.c              |   7 +-
 tests/i915/gem_eio.c                    |  60 +++++--
 tests/i915/gem_exec_async.c             |  57 +++++--
 tests/i915/gem_exec_big.c               |   1 +
 tests/i915/gem_exec_capture.c           | 131 +++++++++++----
 tests/i915/gem_exec_gttfill.c           |   1 +
 tests/i915/gem_exec_parallel.c          |  33 +++-
 tests/i915/gem_exec_params.c            |  74 +++++---
 tests/i915/gem_exec_store.c             | 134 +++++++++++----
 tests/i915/gem_exec_suspend.c           |  44 +++--
 tests/i915/gem_huc_copy.c               |  12 +-
 tests/i915/gem_mmap.c                   |   4 +-
 tests/i915/gem_mmap_gtt.c               |  15 +-
 tests/i915/gem_mmap_offset.c            |   4 +-
 tests/i915/gem_mmap_wc.c                |   4 +-
 tests/i915/gem_request_retire.c         |  14 +-
 tests/i915/gem_ringfill.c               |  36 +++-
 tests/i915/gem_softpin.c                | 213 +++++++++++++++++++++++-
 tests/i915/gem_spin_batch.c             |  37 ++--
 tests/i915/gem_tiled_fence_blits.c      |  65 ++++++--
 tests/i915/gem_unfence_active_buffers.c |   5 +-
 tests/i915/gem_unref_active_buffers.c   |   5 +-
 tests/i915/gem_wait.c                   |   3 +
 tests/i915/gem_watchdog.c               |  11 +-
 tests/i915/gem_workarounds.c            |  17 +-
 tests/i915/i915_hangman.c               |  15 +-
 tests/i915/i915_module_load.c           |  23 ++-
 tests/i915/i915_pm_rc6_residency.c      |   8 +-
 tests/i915/i915_pm_rpm.c                |  27 ++-
 tests/i915/i915_pm_rps.c                |  19 ++-
 tests/i915/perf_pmu.c                   | 147 +++++++++++-----
 tests/i915/sysfs_heartbeat_interval.c   |  24 ++-
 tests/i915/sysfs_preempt_timeout.c      |  21 ++-
 tests/i915/sysfs_timeslice_duration.c   |  21 ++-
 tests/kms_busy.c                        |   9 +
 tests/kms_cursor_legacy.c               |   6 +
 tests/kms_flip.c                        |  14 +-
 tests/kms_vblank.c                      |  11 +-
 tests/prime_vgem.c                      | 120 +++++++++----
 56 files changed, 1630 insertions(+), 386 deletions(-)

-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-03 23:07   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use Zbigniew Kempczyński
                   ` (52 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala, Chris Wilson

For gens without relocations we need to use softpin with valid offsets
which do not overlap other execbuf objects. As the spinner knows
nothing at creation time about the vm it is going to run in, an
allocator handle must be passed so that offsets can be properly
acquired from the allocator instance.
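
For illustration, a minimal caller-side sketch (not part of the diff
below) of how a test could hand an allocator handle to the spinner;
'fd', 'ctx' and 'ring' are assumed to exist in the caller:

  uint64_t ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
  igt_spin_t *spin;

  /* The spinner acquires its offsets from ahnd instead of relying on
   * relocations. */
  spin = igt_spin_new(fd,
                      .ahnd = ahnd,
                      .ctx_id = ctx,
                      .engine = ring);

  /* ... keep the engine busy, run the actual test ... */

  igt_spin_free(fd, spin);
  intel_allocator_close(ahnd);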

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_dummyload.c | 74 ++++++++++++++++++++++++++++++++++++---------
 lib/igt_dummyload.h |  3 ++
 2 files changed, 63 insertions(+), 14 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 8a5ad5ee3..645db9222 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -35,6 +35,7 @@
 #include "i915/gem_engine_topology.h"
 #include "i915/gem_mman.h"
 #include "i915/gem_submission.h"
+#include "igt_aux.h"
 #include "igt_core.h"
 #include "igt_device.h"
 #include "igt_dummyload.h"
@@ -101,7 +102,7 @@ emit_recursive_batch(igt_spin_t *spin,
 	unsigned int flags[GEM_MAX_ENGINES];
 	unsigned int nengine;
 	int fence_fd = -1;
-	uint64_t addr;
+	uint64_t addr, addr_scratch, ahnd = opts->ahnd, objflags = 0;
 	uint32_t *cs;
 	int i;
 
@@ -119,11 +120,17 @@ emit_recursive_batch(igt_spin_t *spin,
 	 * are not allowed in the first 256KiB, for fear of negative relocations
 	 * that wrap.
 	 */
-	addr = gem_aperture_size(fd) / 2;
-	if (addr >> 31)
-		addr = 1u << 31;
-	addr += random() % addr / 2;
-	addr &= -4096;
+
+	if (!ahnd) {
+		addr = gem_aperture_size(fd) / 2;
+		if (addr >> 31)
+			addr = 1u << 31;
+		addr += random() % addr / 2;
+		addr &= -4096;
+	} else {
+		spin->ahnd = ahnd;
+		objflags |= EXEC_OBJECT_PINNED;
+	}
 
 	igt_assert(!(opts->ctx && opts->ctx_id));
 
@@ -164,16 +171,34 @@ emit_recursive_batch(igt_spin_t *spin,
 	execbuf->buffer_count++;
 	cs = spin->batch;
 
-	obj[BATCH].offset = addr;
+	if (ahnd)
+		addr = intel_allocator_alloc_with_strategy(ahnd, obj[BATCH].handle,
+							   BATCH_SIZE, 0,
+							   ALLOC_STRATEGY_LOW_TO_HIGH);
+	obj[BATCH].offset = CANONICAL(addr);
+	obj[BATCH].flags |= objflags;
+	if (obj[BATCH].offset >= (1ull << 32))
+		obj[BATCH].flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
 	addr += BATCH_SIZE;
 
 	if (opts->dependency) {
 		igt_assert(!(opts->flags & IGT_SPIN_POLL_RUN));
+		if (ahnd)
+			addr_scratch = intel_allocator_alloc_with_strategy(ahnd, opts->dependency,
+									   BATCH_SIZE, 0,
+									   ALLOC_STRATEGY_LOW_TO_HIGH);
+		else
+			addr_scratch = addr;
 
 		obj[SCRATCH].handle = opts->dependency;
-		obj[SCRATCH].offset = addr;
+		obj[SCRATCH].offset = CANONICAL(addr_scratch);
+		obj[SCRATCH].flags |= objflags;
+		if (obj[SCRATCH].offset >= (1ull << 32))
+			obj[SCRATCH].flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
 		if (!(opts->flags & IGT_SPIN_SOFTDEP)) {
-			obj[SCRATCH].flags = EXEC_OBJECT_WRITE;
+			obj[SCRATCH].flags |= EXEC_OBJECT_WRITE;
 
 			/* dummy write to dependency */
 			r = &relocs[obj[BATCH].relocation_count++];
@@ -212,8 +237,14 @@ emit_recursive_batch(igt_spin_t *spin,
 								       0, 4096,
 								       PROT_READ | PROT_WRITE);
 		}
+
+		if (ahnd)
+			addr = intel_allocator_alloc_with_strategy(ahnd,
+								   spin->poll_handle,
+								   BATCH_SIZE * 3, 0,
+								   ALLOC_STRATEGY_LOW_TO_HIGH);
 		addr += 4096; /* guard page */
-		obj[SCRATCH].offset = addr;
+		obj[SCRATCH].offset = CANONICAL(addr);
 		addr += 4096;
 
 		igt_assert_eq(spin->poll[SPIN_POLL_START_IDX], 0);
@@ -223,11 +254,15 @@ emit_recursive_batch(igt_spin_t *spin,
 		r->offset = sizeof(uint32_t) * 1;
 		r->delta = sizeof(uint32_t) * SPIN_POLL_START_IDX;
 
+		obj[SCRATCH].flags |= objflags;
+		if (obj[SCRATCH].offset >= (1ull << 32))
+			obj[SCRATCH].flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
 		*cs++ = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 
 		if (gen >= 8) {
 			*cs++ = r->presumed_offset + r->delta;
-			*cs++ = 0;
+			*cs++ = (r->presumed_offset + r->delta) >> 32;
 		} else if (gen >= 4) {
 			*cs++ = 0;
 			*cs++ = r->presumed_offset + r->delta;
@@ -314,10 +349,11 @@ emit_recursive_batch(igt_spin_t *spin,
 	r->offset = (cs + 1 - spin->batch) * sizeof(*cs);
 	r->read_domains = I915_GEM_DOMAIN_COMMAND;
 	r->delta = LOOP_START_OFFSET;
+
 	if (gen >= 8) {
 		*cs++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
 		*cs++ = r->presumed_offset + r->delta;
-		*cs++ = 0;
+		*cs++ = (r->presumed_offset + r->delta) >> 32;
 	} else if (gen >= 6) {
 		*cs++ = MI_BATCH_BUFFER_START | 1 << 8;
 		*cs++ = r->presumed_offset + r->delta;
@@ -351,6 +387,10 @@ emit_recursive_batch(igt_spin_t *spin,
 		execbuf->flags &= ~ENGINE_MASK;
 		execbuf->flags |= flags[i];
 
+		/* For allocator we have to get rid of relocation_count */
+		for (int j = 0; j < ARRAY_SIZE(spin->obj) && ahnd; j++)
+			spin->obj[j].relocation_count = 0;
+
 		gem_execbuf_wr(fd, execbuf);
 
 		if (opts->flags & IGT_SPIN_FENCE_OUT) {
@@ -569,11 +609,17 @@ static void __igt_spin_free(int fd, igt_spin_t *spin)
 	if (spin->batch)
 		gem_munmap(spin->batch, BATCH_SIZE);
 
-	if (spin->poll_handle)
+	if (spin->poll_handle) {
 		gem_close(fd, spin->poll_handle);
+		if (spin->ahnd)
+			intel_allocator_free(spin->ahnd, spin->poll_handle);
+	}
 
-	if (spin->handle)
+	if (spin->handle) {
 		gem_close(fd, spin->handle);
+		if (spin->ahnd)
+			intel_allocator_free(spin->ahnd, spin->handle);
+	}
 
 	if (spin->out_fence >= 0)
 		close(spin->out_fence);
diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
index 67e1a08ed..f02052614 100644
--- a/lib/igt_dummyload.h
+++ b/lib/igt_dummyload.h
@@ -58,6 +58,8 @@ typedef struct igt_spin {
 
 	unsigned int flags;
 #define SPIN_CLFLUSH (1 << 0)
+
+	uint64_t ahnd;
 } igt_spin_t;
 
 /**
@@ -78,6 +80,7 @@ typedef struct igt_spin_factory {
 	unsigned int engine;
 	unsigned int flags;
 	int fence;
+	uint64_t ahnd;
 } igt_spin_factory_t;
 
 #define IGT_SPIN_FENCE_IN      (1 << 0)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-03 21:01   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 03/52] lib/igt_gt: Add passing ahnd as an argument to igt_hang Zbigniew Kempczyński
                   ` (51 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala, Chris Wilson

Add a few helper functions which can be used in reloc/no-reloc tests.

The common naming scheme is get_<ALLOCATOR_TYPE>_ahnd(i915, ctx), like
get_reloc_ahnd() and get_simple_ahnd(). As the simple allocator allows
acquiring offsets starting from the top or the bottom of the vm, two
additional helpers were added: get_simple_l2h_ahnd() and
get_simple_h2l_ahnd(). put_ahnd() closes the allocator handle (if it
is valid).

To acquire / release an offset, get_offset() and put_offset() were
added. When the allocator handle is invalid (equal to zero),
get_offset() just returns 0 and put_offset() does nothing. We can
therefore call them regardless of reloc/no-reloc mode, keeping the
conditional code inside these functions.

Be aware that each get_..._ahnd() call checks the kernel's relocation
capabilities. This generates an extra execbuf ioctl() call (but without
queueing a job to the gpu). If that is a problem and we want to avoid
additional execbuf calls, the relocation capability check should be
done at the beginning of the test and the allocator handle (open())
acquired conditionally according to the result of this check.
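
A minimal usage sketch (illustration only; 'fd', 'ctx' and 'handle' are
assumed to exist in the caller) showing how a test stays agnostic of
reloc vs. no-reloc with these helpers:

  uint64_t ahnd = get_reloc_ahnd(fd, ctx);   /* 0 when relocations work */
  struct drm_i915_gem_exec_object2 obj = {
          .handle = handle,
          /* returns 0 for an invalid (zero) allocator handle */
          .offset = get_offset(ahnd, handle, 4096, 0),
          /* pin only when the allocator provides the offset */
          .flags = ahnd ? EXEC_OBJECT_PINNED : 0,
  };

  /* ... gem_execbuf() with &obj ... */

  put_offset(ahnd, handle);
  put_ahnd(ahnd);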

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator.h | 55 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
index c14f57b4d..f6511ffbf 100644
--- a/lib/intel_allocator.h
+++ b/lib/intel_allocator.h
@@ -11,6 +11,7 @@
 #include <pthread.h>
 #include <stdint.h>
 #include <stdatomic.h>
+#include "i915/gem_submission.h"
 
 /**
  * SECTION:intel_allocator
@@ -228,4 +229,58 @@ static inline uint64_t CANONICAL(uint64_t offset)
 
 #define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
 
+static inline uint64_t get_simple_ahnd(int fd, uint32_t ctx)
+{
+	bool do_relocs = gem_has_relocations(fd);
+
+	return do_relocs ? 0 : intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
+}
+
+static inline uint64_t get_simple_l2h_ahnd(int fd, uint32_t ctx)
+{
+	bool do_relocs = gem_has_relocations(fd);
+
+	return do_relocs ? 0 : intel_allocator_open_full(fd, ctx, 0, 0,
+							 INTEL_ALLOCATOR_SIMPLE,
+							 ALLOC_STRATEGY_LOW_TO_HIGH);
+}
+
+static inline uint64_t get_simple_h2l_ahnd(int fd, uint32_t ctx)
+{
+	bool do_relocs = gem_has_relocations(fd);
+
+	return do_relocs ? 0 : intel_allocator_open_full(fd, ctx, 0, 0,
+							 INTEL_ALLOCATOR_SIMPLE,
+							 ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+static inline uint64_t get_reloc_ahnd(int fd, uint32_t ctx)
+{
+	bool do_relocs = gem_has_relocations(fd);
+
+	return do_relocs ? 0 : intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_RELOC);
+}
+
+static inline bool put_ahnd(uint64_t ahnd)
+{
+	return !ahnd || intel_allocator_close(ahnd);
+}
+
+static inline uint64_t get_offset(uint64_t ahnd, uint32_t handle,
+				  uint64_t size, uint64_t alignment)
+{
+	if (!ahnd)
+		return 0;
+
+	return intel_allocator_alloc(ahnd, handle, size, alignment);
+}
+
+static inline bool put_offset(uint64_t ahnd, uint32_t handle)
+{
+	if (!ahnd)
+		return 0;
+
+	return intel_allocator_free(ahnd, handle);
+}
+
 #endif
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 03/52] lib/igt_gt: Add passing ahnd as an argument to igt_hang
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-03 23:15   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 04/52] lib/intel_batchbuffer: Ensure relocation code will be called Zbigniew Kempczyński
                   ` (50 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala, Chris Wilson

Required because igt_hang uses a spinner underneath; see gem_ringfill.c
for an example user.
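
A short sketch (not from this patch) of the intended call pattern in a
test that owns an allocator handle; 'fd' and 'ring' are assumed:

  uint64_t ahnd = get_reloc_ahnd(fd, 0);
  igt_hang_t hang;

  /* Pass the test's allocator handle down to the spinner used for
   * hanging the ring. */
  hang = igt_hang_ring_with_ahnd(fd, ring, ahnd);
  /* ... submit the work expected to survive the reset ... */
  igt_post_hang_ring(fd, hang);
  put_ahnd(ahnd);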

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_gt.c | 21 ++++++++++++++++++++-
 lib/igt_gt.h |  4 ++++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/lib/igt_gt.c b/lib/igt_gt.c
index c049477db..a0ba04cc1 100644
--- a/lib/igt_gt.c
+++ b/lib/igt_gt.c
@@ -269,7 +269,8 @@ static bool has_ctx_exec(int fd, unsigned ring, uint32_t ctx)
  * Returns:
  * Structure with helper internal state for igt_post_hang_ring().
  */
-igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags)
+static igt_hang_t __igt_hang_ctx(int fd, uint64_t ahnd, uint32_t ctx, int ring,
+				 unsigned flags)
 {
 	struct drm_i915_gem_context_param param;
 	igt_spin_t *spin;
@@ -298,6 +299,7 @@ igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags)
 		context_set_ban(fd, ctx, 0);
 
 	spin = __igt_spin_new(fd,
+			      .ahnd = ahnd,
 			      .ctx_id = ctx,
 			      .engine = ring,
 			      .flags = IGT_SPIN_NO_PREEMPTION);
@@ -305,6 +307,17 @@ igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags)
 	return (igt_hang_t){ spin, ctx, ban, flags };
 }
 
+igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags)
+{
+	return __igt_hang_ctx(fd, 0, ctx, ring, flags);
+}
+
+igt_hang_t igt_hang_ctx_with_ahnd(int fd, uint64_t ahnd, uint32_t ctx, int ring,
+				  unsigned flags)
+{
+	return __igt_hang_ctx(fd, ahnd, ctx, ring, flags);
+}
+
 /**
  * igt_hang_ring:
  * @fd: open i915 drm file descriptor
@@ -322,6 +335,12 @@ igt_hang_t igt_hang_ring(int fd, int ring)
 	return igt_hang_ctx(fd, 0, ring, 0);
 }
 
+igt_hang_t igt_hang_ring_with_ahnd(int fd, int ring, uint64_t ahnd)
+{
+	return igt_hang_ctx_with_ahnd(fd, ahnd, 0, ring, 0);
+}
+
+
 /**
  * igt_post_hang_ring:
  * @fd: open i915 drm file descriptor
diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index 2ea360cc4..fabb89cde 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -45,10 +45,14 @@ void igt_disallow_hang(int fd, igt_hang_t arg);
 #define HANG_POISON 0xc5c5c5c5
 
 igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags);
+igt_hang_t igt_hang_ctx_with_ahnd(int fd, uint64_t ahnd, uint32_t ctx, int ring,
+				  unsigned flags);
+
 #define HANG_ALLOW_BAN 1
 #define HANG_ALLOW_CAPTURE 2
 
 igt_hang_t igt_hang_ring(int fd, int ring);
+igt_hang_t igt_hang_ring_with_ahnd(int fd, int ring, uint64_t ahnd);
 void igt_post_hang_ring(int fd, igt_hang_t arg);
 
 void igt_force_gpu_reset(int fd);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 04/52] lib/intel_batchbuffer: Ensure relocation code will be called
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (2 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 03/52] lib/igt_gt: Add passing ahnd as an argument to igt_hang Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-03 23:34   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 05/52] lib/intel_batchbuffer: Add allocator support in blitter fast copy Zbigniew Kempczyński
                   ` (49 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala, Chris Wilson

Currently we're not sure the relocation code will be called
(presumed_offset == offset == 0), so enforce it. Passing presumed_offset
and offset to the auxiliary functions also prepares the code for the
switch to no-reloc mode.
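
To make that concrete, a sketch of the idea (an assumption about i915
relocation handling, not code from this patch); 'dst_handle' is
hypothetical:

  struct drm_i915_gem_relocation_entry reloc = {
          .target_handle = dst_handle,
          .offset = 4 * sizeof(uint32_t), /* address location in the batch */
          .presumed_offset = -1,          /* never matches a real offset */
          .read_domains = I915_GEM_DOMAIN_RENDER,
          .write_domain = I915_GEM_DOMAIN_RENDER,
  };
  /* Because the object's actual offset can never equal -1, the kernel
   * cannot treat the batch as already patched and must perform the
   * relocation. */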

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 2b8b903e2..3747019a5 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -679,24 +679,27 @@ static uint32_t fast_copy_dword1(unsigned int src_tiling,
 
 static void
 fill_relocation(struct drm_i915_gem_relocation_entry *reloc,
-		uint32_t gem_handle, uint32_t delta, /* in bytes */
+		uint32_t gem_handle, uint64_t presumed_offset,
+		uint32_t delta, /* in bytes */
 		uint32_t offset, /* in dwords */
 		uint32_t read_domains, uint32_t write_domains)
 {
 	reloc->target_handle = gem_handle;
 	reloc->delta = delta;
 	reloc->offset = offset * sizeof(uint32_t);
-	reloc->presumed_offset = 0;
+	reloc->presumed_offset = presumed_offset;
 	reloc->read_domains = read_domains;
 	reloc->write_domain = write_domains;
 }
 
 static void
-fill_object(struct drm_i915_gem_exec_object2 *obj, uint32_t gem_handle,
+fill_object(struct drm_i915_gem_exec_object2 *obj,
+	    uint32_t gem_handle, uint64_t gem_offset,
 	    struct drm_i915_gem_relocation_entry *relocs, uint32_t count)
 {
 	memset(obj, 0, sizeof(*obj));
 	obj->handle = gem_handle;
+	obj->offset = gem_offset;
 	obj->relocation_count = count;
 	obj->relocs_ptr = to_user_pointer(relocs);
 }
@@ -881,14 +884,14 @@ void igt_blitter_src_copy(int fd,
 	batch_handle = gem_create(fd, 4096);
 	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
 
-	fill_relocation(&relocs[0], dst_handle, dst_delta, dst_reloc_offset,
+	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, dst_reloc_offset,
 			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
-	fill_relocation(&relocs[1], src_handle, src_delta, src_reloc_offset,
+	fill_relocation(&relocs[1], src_handle, -1, src_delta, src_reloc_offset,
 			I915_GEM_DOMAIN_RENDER, 0);
 
-	fill_object(&objs[0], dst_handle, NULL, 0);
-	fill_object(&objs[1], src_handle, NULL, 0);
-	fill_object(&objs[2], batch_handle, relocs, 2);
+	fill_object(&objs[0], dst_handle, 0, NULL, 0);
+	fill_object(&objs[1], src_handle, 0, NULL, 0);
+	fill_object(&objs[2], batch_handle, 0, relocs, 2);
 
 	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
 	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
@@ -978,13 +981,14 @@ void igt_blitter_fast_copy__raw(int fd,
 	batch_handle = gem_create(fd, 4096);
 	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
 
-	fill_relocation(&relocs[0], dst_handle, dst_delta, 4,
+	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, 4,
 			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
-	fill_relocation(&relocs[1], src_handle, src_delta, 8, I915_GEM_DOMAIN_RENDER, 0);
+	fill_relocation(&relocs[1], src_handle, -1, src_delta, 8,
+			I915_GEM_DOMAIN_RENDER, 0);
 
-	fill_object(&objs[0], dst_handle, NULL, 0);
-	fill_object(&objs[1], src_handle, NULL, 0);
-	fill_object(&objs[2], batch_handle, relocs, 2);
+	fill_object(&objs[0], dst_handle, 0, NULL, 0);
+	fill_object(&objs[1], src_handle, 0, NULL, 0);
+	fill_object(&objs[2], batch_handle, 0, relocs, 2);
 
 	exec_blit(fd, objs, 3, intel_gen(intel_get_drm_devid(fd)));
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 05/52] lib/intel_batchbuffer: Add allocator support in blitter fast copy
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (3 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 04/52] lib/intel_batchbuffer: Ensure relocation code will be called Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy Zbigniew Kempczyński
                   ` (48 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Chris Wilson, Petri Latvala

From: Bhanuprakash Modem <bhanuprakash.modem@intel.com>

On newer gens the kernel rejects relocations with -EINVAL, so we should
support the allocator and acquire offsets for the blit.
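
For reference, a hedged caller-side sketch of the extended prototype
(variable names are illustrative; the real call-site update is in the
igt_fb.c hunk below):

  uint64_t ahnd = get_reloc_ahnd(fd, ctx);  /* 0 => relocation path */

  igt_blitter_fast_copy__raw(fd, ahnd, ctx,
                             src_handle, 0, src_stride, I915_TILING_NONE,
                             0, 0, /* src_x, src_y */
                             src_size,
                             width, height, bpp,
                             dst_handle, 0, dst_stride, I915_TILING_NONE,
                             0, 0, /* dst_x, dst_y */
                             dst_size);

  put_ahnd(ahnd);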

Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_fb.c            | 17 ++++++++++++-
 lib/intel_batchbuffer.c | 55 ++++++++++++++++++++++++++++++-----------
 lib/intel_batchbuffer.h |  6 ++++-
 3 files changed, 62 insertions(+), 16 deletions(-)

diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index 75ab217b8..1bb32cd8a 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2417,6 +2417,8 @@ static void blitcopy(const struct igt_fb *dst_fb,
 		     const struct igt_fb *src_fb)
 {
 	uint32_t src_tiling, dst_tiling;
+	uint32_t ctx = 0;
+	uint64_t ahnd = 0;
 
 	igt_assert_eq(dst_fb->fd, src_fb->fd);
 	igt_assert_eq(dst_fb->num_planes, src_fb->num_planes);
@@ -2424,6 +2426,12 @@ static void blitcopy(const struct igt_fb *dst_fb,
 	src_tiling = igt_fb_mod_to_tiling(src_fb->modifier);
 	dst_tiling = igt_fb_mod_to_tiling(dst_fb->modifier);
 
+	if (!gem_has_relocations(dst_fb->fd)) {
+		igt_require(gem_has_contexts(dst_fb->fd));
+		ctx = gem_context_create(dst_fb->fd);
+		ahnd = get_reloc_ahnd(dst_fb->fd, ctx);
+	}
+
 	for (int i = 0; i < dst_fb->num_planes; i++) {
 		igt_assert_eq(dst_fb->plane_bpp[i], src_fb->plane_bpp[i]);
 		igt_assert_eq(dst_fb->plane_width[i], src_fb->plane_width[i]);
@@ -2435,11 +2443,13 @@ static void blitcopy(const struct igt_fb *dst_fb,
 		 */
 		if (fast_blit_ok(src_fb) && fast_blit_ok(dst_fb)) {
 			igt_blitter_fast_copy__raw(dst_fb->fd,
+						   ahnd, ctx,
 						   src_fb->gem_handle,
 						   src_fb->offsets[i],
 						   src_fb->strides[i],
 						   src_tiling,
 						   0, 0, /* src_x, src_y */
+						   src_fb->size,
 						   dst_fb->plane_width[i],
 						   dst_fb->plane_height[i],
 						   dst_fb->plane_bpp[i],
@@ -2447,7 +2457,8 @@ static void blitcopy(const struct igt_fb *dst_fb,
 						   dst_fb->offsets[i],
 						   dst_fb->strides[i],
 						   dst_tiling,
-						   0, 0 /* dst_x, dst_y */);
+						   0, 0 /* dst_x, dst_y */,
+						   dst_fb->size);
 		} else {
 			igt_blitter_src_copy(dst_fb->fd,
 					     src_fb->gem_handle,
@@ -2465,6 +2476,10 @@ static void blitcopy(const struct igt_fb *dst_fb,
 					     0, 0 /* dst_x, dst_y */);
 		}
 	}
+
+	if (ctx)
+		gem_context_destroy(dst_fb->fd, ctx);
+	put_ahnd(ahnd);
 }
 
 static void free_linear_mapping(struct fb_blit_upload *blit)
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 3747019a5..d9cc4d89c 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -706,12 +706,13 @@ fill_object(struct drm_i915_gem_exec_object2 *obj,
 
 static void exec_blit(int fd,
 		      struct drm_i915_gem_exec_object2 *objs, uint32_t count,
-		      unsigned int gen)
+		      unsigned int gen, uint32_t ctx)
 {
 	struct drm_i915_gem_execbuffer2 exec = {
 		.buffers_ptr = to_user_pointer(objs),
 		.buffer_count = count,
 		.flags = gen >= 6 ? I915_EXEC_BLT : 0,
+		.rsvd1 = ctx,
 	};
 
 	gem_execbuf(fd, &exec);
@@ -896,7 +897,7 @@ void igt_blitter_src_copy(int fd,
 	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
 	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
 
-	exec_blit(fd, objs, 3, gen);
+	exec_blit(fd, objs, 3, gen, 0);
 
 	gem_close(fd, batch_handle);
 }
@@ -904,12 +905,15 @@ void igt_blitter_src_copy(int fd,
 /**
  * igt_blitter_fast_copy__raw:
  * @fd: file descriptor of the i915 driver
+ * @ahnd: handle to an allocator
+ * @ctx: context within which the copy blit is executed
  * @src_handle: GEM handle of the source buffer
  * @src_delta: offset into the source GEM bo, in bytes
  * @src_stride: Stride (in bytes) of the source buffer
  * @src_tiling: Tiling mode of the source buffer
  * @src_x: X coordinate of the source region to copy
  * @src_y: Y coordinate of the source region to copy
+ * @src_size: size of the src bo required for allocator and softpin
  * @width: Width of the region to copy
  * @height: Height of the region to copy
  * @bpp: source and destination bits per pixel
@@ -919,16 +923,20 @@ void igt_blitter_src_copy(int fd,
  * @dst_tiling: Tiling mode of the destination buffer
  * @dst_x: X coordinate of destination
  * @dst_y: Y coordinate of destination
+ * @dst_size: size of the dst bo required for allocator and softpin
  *
  * Like igt_blitter_fast_copy(), but talking to the kernel directly.
  */
 void igt_blitter_fast_copy__raw(int fd,
+				uint64_t ahnd,
+				uint32_t ctx,
 				/* src */
 				uint32_t src_handle,
 				unsigned int src_delta,
 				unsigned int src_stride,
 				unsigned int src_tiling,
 				unsigned int src_x, unsigned src_y,
+				uint64_t src_size,
 
 				/* size */
 				unsigned int width, unsigned int height,
@@ -941,7 +949,8 @@ void igt_blitter_fast_copy__raw(int fd,
 				unsigned dst_delta,
 				unsigned int dst_stride,
 				unsigned int dst_tiling,
-				unsigned int dst_x, unsigned dst_y)
+				unsigned int dst_x, unsigned dst_y,
+				uint64_t dst_size)
 {
 	uint32_t batch[12];
 	struct drm_i915_gem_exec_object2 objs[3];
@@ -949,8 +958,20 @@ void igt_blitter_fast_copy__raw(int fd,
 	uint32_t batch_handle;
 	uint32_t dword0, dword1;
 	uint32_t src_pitch, dst_pitch;
+	uint64_t batch_offset, src_offset, dst_offset;
 	int i = 0;
 
+	batch_handle = gem_create(fd, 4096);
+	if (ahnd) {
+		src_offset = get_offset(ahnd, src_handle, src_size, 0);
+		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
+		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
+	} else {
+		src_offset = 16 << 20;
+		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
+		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
+	}
+
 	src_pitch = fast_copy_pitch(src_stride, src_tiling);
 	dst_pitch = fast_copy_pitch(dst_stride, dst_tiling);
 	dword0 = fast_copy_dword0(src_tiling, dst_tiling);
@@ -967,30 +988,36 @@ void igt_blitter_fast_copy__raw(int fd,
 	batch[i++] = dword1 | dst_pitch;
 	batch[i++] = (dst_y << 16) | dst_x; /* dst x1,y1 */
 	batch[i++] = ((dst_y + height) << 16) | (dst_x + width); /* dst x2,y2 */
-	batch[i++] = dst_delta; /* dst address lower bits */
-	batch[i++] = 0;	/* dst address upper bits */
+	batch[i++] = dst_offset + dst_delta; /* dst address lower bits */
+	batch[i++] = (dst_offset + dst_delta) >> 32; /* dst address upper bits */
 	batch[i++] = (src_y << 16) | src_x; /* src x1,y1 */
 	batch[i++] = src_pitch;
-	batch[i++] = src_delta; /* src address lower bits */
-	batch[i++] = 0;	/* src address upper bits */
+	batch[i++] = src_offset + src_delta; /* src address lower bits */
+	batch[i++] = (src_offset + src_delta) >> 32; /* src address upper bits */
 	batch[i++] = MI_BATCH_BUFFER_END;
 	batch[i++] = MI_NOOP;
 
 	igt_assert(i == ARRAY_SIZE(batch));
 
-	batch_handle = gem_create(fd, 4096);
 	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
 
-	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, 4,
+	fill_relocation(&relocs[0], dst_handle, dst_offset, dst_delta, 4,
 			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
-	fill_relocation(&relocs[1], src_handle, -1, src_delta, 8,
+	fill_relocation(&relocs[1], src_handle, src_offset, src_delta, 8,
 			I915_GEM_DOMAIN_RENDER, 0);
 
-	fill_object(&objs[0], dst_handle, 0, NULL, 0);
-	fill_object(&objs[1], src_handle, 0, NULL, 0);
-	fill_object(&objs[2], batch_handle, 0, relocs, 2);
+	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
+	objs[0].flags |= EXEC_OBJECT_WRITE;
+	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
+	fill_object(&objs[2], batch_handle, batch_offset, relocs, !ahnd ? 2 : 0);
+
+	if (ahnd) {
+		objs[0].flags |= EXEC_OBJECT_PINNED;
+		objs[1].flags |= EXEC_OBJECT_PINNED;
+		objs[2].flags |= EXEC_OBJECT_PINNED;
+	}
 
-	exec_blit(fd, objs, 3, intel_gen(intel_get_drm_devid(fd)));
+	exec_blit(fd, objs, 3, intel_gen(intel_get_drm_devid(fd)), ctx);
 
 	gem_close(fd, batch_handle);
 }
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index bd417e998..74c21c40e 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -300,12 +300,15 @@ void igt_blitter_fast_copy(struct intel_batchbuffer *batch,
 			   unsigned dst_x, unsigned dst_y);
 
 void igt_blitter_fast_copy__raw(int fd,
+				uint64_t ahnd,
+				uint32_t ctx,
 				/* src */
 				uint32_t src_handle,
 				unsigned int src_delta,
 				unsigned int src_stride,
 				unsigned int src_tiling,
 				unsigned int src_x, unsigned src_y,
+				uint64_t src_size,
 
 				/* size */
 				unsigned int width, unsigned int height,
@@ -318,7 +321,8 @@ void igt_blitter_fast_copy__raw(int fd,
 				unsigned int dst_delta,
 				unsigned int dst_stride,
 				unsigned int dst_tiling,
-				unsigned int dst_x, unsigned dst_y);
+				unsigned int dst_x, unsigned dst_y,
+				uint64_t dst_size);
 
 /**
  * igt_render_copyfunc_t:
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (4 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 05/52] lib/intel_batchbuffer: Add allocator support in blitter fast copy Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-04 23:26   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting Zbigniew Kempczyński
                   ` (47 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala, Chris Wilson

Adjust the igt_fb library and the prime_vgem test, as they are users of
the blitter src copy.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_fb.c            |   5 +-
 lib/intel_batchbuffer.c |  53 +++++++++++++-----
 lib/intel_batchbuffer.h |   6 +-
 tests/prime_vgem.c      | 120 +++++++++++++++++++++++++++++-----------
 4 files changed, 138 insertions(+), 46 deletions(-)

diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index 1bb32cd8a..c7dcadcf6 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2461,11 +2461,13 @@ static void blitcopy(const struct igt_fb *dst_fb,
 						   dst_fb->size);
 		} else {
 			igt_blitter_src_copy(dst_fb->fd,
+					     ahnd, ctx,
 					     src_fb->gem_handle,
 					     src_fb->offsets[i],
 					     src_fb->strides[i],
 					     src_tiling,
 					     0, 0, /* src_x, src_y */
+					     src_fb->size,
 					     dst_fb->plane_width[i],
 					     dst_fb->plane_height[i],
 					     dst_fb->plane_bpp[i],
@@ -2473,7 +2475,8 @@ static void blitcopy(const struct igt_fb *dst_fb,
 					     dst_fb->offsets[i],
 					     dst_fb->strides[i],
 					     dst_tiling,
-					     0, 0 /* dst_x, dst_y */);
+					     0, 0 /* dst_x, dst_y */,
+					     dst_fb->size);
 		}
 	}
 
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index d9cc4d89c..d4a59e508 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -762,12 +762,15 @@ static uint32_t src_copy_dword1(uint32_t dst_pitch, uint32_t bpp)
 /**
  * igt_blitter_src_copy:
  * @fd: file descriptor of the i915 driver
+ * @ahnd: handle to an allocator
+ * @ctx: context within which the copy blit is executed
  * @src_handle: GEM handle of the source buffer
  * @src_delta: offset into the source GEM bo, in bytes
  * @src_stride: Stride (in bytes) of the source buffer
  * @src_tiling: Tiling mode of the source buffer
  * @src_x: X coordinate of the source region to copy
  * @src_y: Y coordinate of the source region to copy
+ * @src_size: size of the src bo required for allocator and softpin
  * @width: Width of the region to copy
  * @height: Height of the region to copy
  * @bpp: source and destination bits per pixel
@@ -777,16 +780,20 @@ static uint32_t src_copy_dword1(uint32_t dst_pitch, uint32_t bpp)
  * @dst_tiling: Tiling mode of the destination buffer
  * @dst_x: X coordinate of destination
  * @dst_y: Y coordinate of destination
+ * @dst_size: size of the dst bo required for allocator and softpin
  *
  * Copy @src into @dst using the XY_SRC blit command.
  */
 void igt_blitter_src_copy(int fd,
+			  uint64_t ahnd,
+			  uint32_t ctx,
 			  /* src */
 			  uint32_t src_handle,
 			  uint32_t src_delta,
 			  uint32_t src_stride,
 			  uint32_t src_tiling,
 			  uint32_t src_x, uint32_t src_y,
+			  uint64_t src_size,
 
 			  /* size */
 			  uint32_t width, uint32_t height,
@@ -799,7 +806,8 @@ void igt_blitter_src_copy(int fd,
 			  uint32_t dst_delta,
 			  uint32_t dst_stride,
 			  uint32_t dst_tiling,
-			  uint32_t dst_x, uint32_t dst_y)
+			  uint32_t dst_x, uint32_t dst_y,
+			  uint64_t dst_size)
 {
 	uint32_t batch[32];
 	struct drm_i915_gem_exec_object2 objs[3];
@@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
 	uint32_t src_pitch, dst_pitch;
 	uint32_t dst_reloc_offset, src_reloc_offset;
 	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
+	uint64_t batch_offset, src_offset, dst_offset;
 	const bool has_64b_reloc = gen >= 8;
 	int i = 0;
 
+	batch_handle = gem_create(fd, 4096);
+	if (ahnd) {
+		src_offset = get_offset(ahnd, src_handle, src_size, 0);
+		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
+		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
+	} else {
+		src_offset = 16 << 20;
+		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
+		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
+	}
+
 	memset(batch, 0, sizeof(batch));
 
 	igt_assert((src_tiling == I915_TILING_NONE) ||
@@ -855,15 +875,15 @@ void igt_blitter_src_copy(int fd,
 	batch[i++] = (dst_y << 16) | dst_x; /* dst x1,y1 */
 	batch[i++] = ((dst_y + height) << 16) | (dst_x + width); /* dst x2,y2 */
 	dst_reloc_offset = i;
-	batch[i++] = dst_delta; /* dst address lower bits */
+	batch[i++] = dst_offset + dst_delta; /* dst address lower bits */
 	if (has_64b_reloc)
-		batch[i++] = 0;	/* dst address upper bits */
+		batch[i++] = (dst_offset + dst_delta) >> 32; /* dst address upper bits */
 	batch[i++] = (src_y << 16) | src_x; /* src x1,y1 */
 	batch[i++] = src_pitch;
 	src_reloc_offset = i;
-	batch[i++] = src_delta; /* src address lower bits */
+	batch[i++] = src_offset + src_delta; /* src address lower bits */
 	if (has_64b_reloc)
-		batch[i++] = 0;	/* src address upper bits */
+		batch[i++] = (src_offset + src_delta) >> 32; /* src address upper bits */
 
 	if ((src_tiling | dst_tiling) >= I915_TILING_Y) {
 		igt_assert(gen >= 6);
@@ -882,22 +902,29 @@ void igt_blitter_src_copy(int fd,
 
 	igt_assert(i <= ARRAY_SIZE(batch));
 
-	batch_handle = gem_create(fd, 4096);
 	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
 
-	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, dst_reloc_offset,
+	fill_relocation(&relocs[0], dst_handle, dst_offset,
+			dst_delta, dst_reloc_offset,
 			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
-	fill_relocation(&relocs[1], src_handle, -1, src_delta, src_reloc_offset,
+	fill_relocation(&relocs[1], src_handle, src_offset,
+			src_delta, src_reloc_offset,
 			I915_GEM_DOMAIN_RENDER, 0);
 
-	fill_object(&objs[0], dst_handle, 0, NULL, 0);
-	fill_object(&objs[1], src_handle, 0, NULL, 0);
-	fill_object(&objs[2], batch_handle, 0, relocs, 2);
+	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
+	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
+	fill_object(&objs[2], batch_handle, batch_offset, relocs, 2);
 
-	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
+	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE | EXEC_OBJECT_WRITE;
 	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
 
-	exec_blit(fd, objs, 3, gen, 0);
+	if (ahnd) {
+		objs[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		objs[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		objs[2].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	}
+
+	exec_blit(fd, objs, 3, gen, ctx);
 
 	gem_close(fd, batch_handle);
 }
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 74c21c40e..c1974fe73 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -271,12 +271,15 @@ unsigned int igt_buf_intel_ccs_height(unsigned int gen,
 				      const struct igt_buf *buf);
 
 void igt_blitter_src_copy(int fd,
+			  uint64_t ahnd,
+			  uint32_t ctx,
 			  /* src */
 			  uint32_t src_handle,
 			  uint32_t src_delta,
 			  uint32_t src_stride,
 			  uint32_t src_tiling,
 			  uint32_t src_x, uint32_t src_y,
+			  uint64_t src_size,
 
 			  /* size */
 			  uint32_t width, uint32_t height,
@@ -289,7 +292,8 @@ void igt_blitter_src_copy(int fd,
 			  uint32_t dst_delta,
 			  uint32_t dst_stride,
 			  uint32_t dst_tiling,
-			  uint32_t dst_x, uint32_t dst_y);
+			  uint32_t dst_x, uint32_t dst_y,
+			  uint64_t dst_size);
 
 void igt_blitter_fast_copy(struct intel_batchbuffer *batch,
 			   const struct igt_buf *src, unsigned src_delta,
diff --git a/tests/prime_vgem.c b/tests/prime_vgem.c
index 25c5f42f5..b837f2bfa 100644
--- a/tests/prime_vgem.c
+++ b/tests/prime_vgem.c
@@ -207,10 +207,14 @@ static void test_fence_blt(int i915, int vgem)
 
 	igt_fork(child, 1) {
 		uint32_t native;
+		uint64_t ahnd;
 
 		close(master[0]);
 		close(slave[1]);
 
+		intel_allocator_init();
+		ahnd = get_reloc_ahnd(i915, 0);
+
 		native = gem_create(i915, scratch.size);
 
 		ptr = gem_mmap__wc(i915, native, 0, scratch.size, PROT_READ);
@@ -221,10 +225,11 @@ static void test_fence_blt(int i915, int vgem)
 		write(master[1], &child, sizeof(child));
 		read(slave[0], &child, sizeof(child));
 
-		igt_blitter_src_copy(i915, prime, 0, scratch.pitch,
-				     I915_TILING_NONE, 0, 0, scratch.width,
-				     scratch.height, scratch.bpp, native, 0,
-				     scratch.pitch, I915_TILING_NONE, 0, 0);
+		igt_blitter_src_copy(i915, ahnd, 0, prime, 0, scratch.pitch,
+				     I915_TILING_NONE, 0, 0, scratch.size,
+				     scratch.width, scratch.height, scratch.bpp,
+				     native, 0, scratch.pitch,
+				     I915_TILING_NONE, 0, 0, scratch.size);
 		gem_sync(i915, native);
 
 		for (i = 0; i < scratch.height; i++)
@@ -234,6 +239,7 @@ static void test_fence_blt(int i915, int vgem)
 		munmap(ptr, scratch.size);
 		gem_close(i915, native);
 		gem_close(i915, prime);
+		put_ahnd(ahnd);
 	}
 
 	close(master[1]);
@@ -375,6 +381,7 @@ static void test_blt(int vgem, int i915)
 	uint32_t prime, native;
 	uint32_t *ptr;
 	int dmabuf, i;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	scratch.width = 1024;
 	scratch.height = 1024;
@@ -391,9 +398,11 @@ static void test_blt(int vgem, int i915)
 		ptr[scratch.pitch * i / sizeof(*ptr)] = i;
 	munmap(ptr, scratch.size);
 
-	igt_blitter_src_copy(i915, native, 0, scratch.pitch, I915_TILING_NONE,
-			     0, 0, scratch.width, scratch.height, scratch.bpp,
-			     prime, 0, scratch.pitch, I915_TILING_NONE, 0, 0);
+	igt_blitter_src_copy(i915, ahnd, 0, native, 0, scratch.pitch,
+			     I915_TILING_NONE, 0, 0, scratch.size,
+			     scratch.width, scratch.height, scratch.bpp,
+			     prime, 0, scratch.pitch, I915_TILING_NONE, 0, 0,
+			     scratch.size);
 	prime_sync_start(dmabuf, true);
 	prime_sync_end(dmabuf, true);
 	close(dmabuf);
@@ -405,9 +414,11 @@ static void test_blt(int vgem, int i915)
 	}
 	munmap(ptr, scratch.size);
 
-	igt_blitter_src_copy(i915, prime, 0, scratch.pitch, I915_TILING_NONE,
-			     0, 0, scratch.width, scratch.height, scratch.bpp,
-			     native, 0, scratch.pitch, I915_TILING_NONE, 0, 0);
+	igt_blitter_src_copy(i915, ahnd, 0, prime, 0, scratch.pitch,
+			     I915_TILING_NONE, 0, 0, scratch.size,
+			     scratch.width, scratch.height, scratch.bpp,
+			     native, 0, scratch.pitch, I915_TILING_NONE, 0, 0,
+			     scratch.size);
 	gem_sync(i915, native);
 
 	ptr = gem_mmap__wc(i915, native, 0, scratch.size, PROT_READ);
@@ -418,6 +429,7 @@ static void test_blt(int vgem, int i915)
 	gem_close(i915, native);
 	gem_close(i915, prime);
 	gem_close(vgem, scratch.handle);
+	put_ahnd(ahnd);
 }
 
 static void test_shrink(int vgem, int i915)
@@ -509,6 +521,7 @@ static void test_blt_interleaved(int vgem, int i915)
 	uint32_t prime, native;
 	uint32_t *foreign, *local;
 	int dmabuf, i;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	scratch.width = 1024;
 	scratch.height = 1024;
@@ -525,20 +538,22 @@ static void test_blt_interleaved(int vgem, int i915)
 
 	for (i = 0; i < scratch.height; i++) {
 		local[scratch.pitch * i / sizeof(*local)] = i;
-		igt_blitter_src_copy(i915, native, 0, scratch.pitch,
-				     I915_TILING_NONE, 0, i, scratch.width, 1,
+		igt_blitter_src_copy(i915, ahnd, 0, native, 0, scratch.pitch,
+				     I915_TILING_NONE, 0, i, scratch.size,
+				     scratch.width, 1,
 				     scratch.bpp, prime, 0, scratch.pitch,
-				     I915_TILING_NONE, 0, i);
+				     I915_TILING_NONE, 0, i, scratch.size);
 		prime_sync_start(dmabuf, true);
 		igt_assert_eq_u32(foreign[scratch.pitch * i / sizeof(*foreign)],
 				  i);
 		prime_sync_end(dmabuf, true);
 
 		foreign[scratch.pitch * i / sizeof(*foreign)] = ~i;
-		igt_blitter_src_copy(i915, prime, 0, scratch.pitch,
-				     I915_TILING_NONE, 0, i, scratch.width, 1,
+		igt_blitter_src_copy(i915, ahnd, 0, prime, 0, scratch.pitch,
+				     I915_TILING_NONE, 0, i, scratch.size,
+				     scratch.width, 1,
 				     scratch.bpp, native, 0, scratch.pitch,
-				     I915_TILING_NONE, 0, i);
+				     I915_TILING_NONE, 0, i, scratch.size);
 		gem_sync(i915, native);
 		igt_assert_eq_u32(local[scratch.pitch * i / sizeof(*local)],
 				  ~i);
@@ -551,6 +566,7 @@ static void test_blt_interleaved(int vgem, int i915)
 	gem_close(i915, native);
 	gem_close(i915, prime);
 	gem_close(vgem, scratch.handle);
+	put_ahnd(ahnd);
 }
 
 static bool prime_busy(int fd, bool excl)
@@ -559,7 +575,8 @@ static bool prime_busy(int fd, bool excl)
 	return poll(&pfd, 1, 0) == 0;
 }
 
-static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
+static void work(int i915, uint64_t ahnd, uint64_t scratch_offset, int dmabuf,
+		 const intel_ctx_t *ctx, unsigned ring)
 {
 	const int SCRATCH = 0;
 	const int BATCH = 1;
@@ -584,10 +601,17 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
 	obj[SCRATCH].handle = prime_fd_to_handle(i915, dmabuf);
 
 	obj[BATCH].handle = gem_create(i915, size);
+	obj[BATCH].offset = get_offset(ahnd, obj[BATCH].handle, size, 0);
 	obj[BATCH].relocs_ptr = (uintptr_t)store;
-	obj[BATCH].relocation_count = ARRAY_SIZE(store);
+	obj[BATCH].relocation_count = !ahnd ? ARRAY_SIZE(store) : 0;
 	memset(store, 0, sizeof(store));
 
+	if (ahnd) {
+		obj[SCRATCH].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[SCRATCH].offset = scratch_offset;
+		obj[BATCH].flags = EXEC_OBJECT_PINNED;
+	}
+
 	batch = gem_mmap__wc(i915, obj[BATCH].handle, 0, size, PROT_WRITE);
 	gem_set_domain(i915, obj[BATCH].handle,
 		       I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
@@ -602,8 +626,8 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
 		store[count].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
 		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 		if (gen >= 8) {
-			batch[++i] = 0;
-			batch[++i] = 0;
+			batch[++i] = scratch_offset + store[count].delta;
+			batch[++i] = (scratch_offset + store[count].delta) >> 32;
 		} else if (gen >= 4) {
 			batch[++i] = 0;
 			batch[++i] = 0;
@@ -626,8 +650,8 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
 	batch[i] = MI_BATCH_BUFFER_START;
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[BATCH].offset;
+		batch[++i] = obj[BATCH].offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
 		batch[++i] = 0;
@@ -662,14 +686,18 @@ static void test_busy(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 	uint32_t *ptr;
 	int dmabuf;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), scratch_offset;
 
 	scratch.width = 1024;
 	scratch.height = 1;
 	scratch.bpp = 32;
 	vgem_create(vgem, &scratch);
+	scratch_offset = get_offset(ahnd, scratch.handle, scratch.size, 0);
 	dmabuf = prime_handle_to_fd(vgem, scratch.handle);
 
-	work(i915, dmabuf, ctx, ring);
+	work(i915, ahnd, scratch_offset, dmabuf, ctx, ring);
+
+	put_ahnd(ahnd);
 
 	/* Calling busy in a loop should be enough to flush the rendering */
 	memset(&tv, 0, sizeof(tv));
@@ -691,14 +719,18 @@ static void test_wait(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 	struct pollfd pfd;
 	uint32_t *ptr;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), scratch_offset;
 
 	scratch.width = 1024;
 	scratch.height = 1;
 	scratch.bpp = 32;
 	vgem_create(vgem, &scratch);
+	scratch_offset = get_offset(ahnd, scratch.handle, scratch.size, 0);
 	pfd.fd = prime_handle_to_fd(vgem, scratch.handle);
 
-	work(i915, pfd.fd, ctx, ring);
+	work(i915, ahnd, scratch_offset, pfd.fd, ctx, ring);
+
+	put_ahnd(ahnd);
 
 	pfd.events = POLLIN;
 	igt_assert_eq(poll(&pfd, 1, 10000), 1);
@@ -718,18 +750,22 @@ static void test_sync(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 	uint32_t *ptr;
 	int dmabuf;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), scratch_offset;
 
 	scratch.width = 1024;
 	scratch.height = 1;
 	scratch.bpp = 32;
 	vgem_create(vgem, &scratch);
+	scratch_offset = get_offset(ahnd, scratch.handle, scratch.size, 0);
 	dmabuf = prime_handle_to_fd(vgem, scratch.handle);
 
 	ptr = mmap(NULL, scratch.size, PROT_READ, MAP_SHARED, dmabuf, 0);
 	igt_assert(ptr != MAP_FAILED);
 	gem_close(vgem, scratch.handle);
 
-	work(i915, dmabuf, ctx, ring);
+	work(i915, ahnd, scratch_offset, dmabuf, ctx, ring);
+
+	put_ahnd(ahnd);
 
 	prime_sync_start(dmabuf, false);
 	for (i = 0; i < 1024; i++)
@@ -746,12 +782,13 @@ static void test_fence_wait(int i915, int vgem, const intel_ctx_t *ctx, unsigned
 	uint32_t fence;
 	uint32_t *ptr;
 	int dmabuf;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), scratch_offset;
 
 	scratch.width = 1024;
 	scratch.height = 1;
 	scratch.bpp = 32;
 	vgem_create(vgem, &scratch);
-
+	scratch_offset = get_offset(ahnd, scratch.handle, scratch.size, 0);
 	dmabuf = prime_handle_to_fd(vgem, scratch.handle);
 	fence = vgem_fence_attach(vgem, &scratch, VGEM_FENCE_WRITE);
 	igt_assert(prime_busy(dmabuf, false));
@@ -760,10 +797,14 @@ static void test_fence_wait(int i915, int vgem, const intel_ctx_t *ctx, unsigned
 	ptr = mmap(NULL, scratch.size, PROT_READ, MAP_SHARED, dmabuf, 0);
 	igt_assert(ptr != MAP_FAILED);
 
-	igt_fork(child, 1)
-		work(i915, dmabuf, ctx, ring);
+	igt_fork(child, 1) {
+		ahnd = get_reloc_ahnd(i915, ctx->id);
+		work(i915, ahnd, scratch_offset, dmabuf, ctx, ring);
+		put_ahnd(ahnd);
+	}
 
 	sleep(1);
+	put_ahnd(ahnd);
 
 	/* Check for invalidly completing the task early */
 	for (int i = 0; i < 1024; i++)
@@ -789,11 +830,13 @@ static void test_fence_hang(int i915, int vgem, unsigned flags)
 	uint32_t *ptr;
 	int dmabuf;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0), scratch_offset;
 
 	scratch.width = 1024;
 	scratch.height = 1;
 	scratch.bpp = 32;
 	vgem_create(vgem, &scratch);
+	scratch_offset = get_offset(ahnd, scratch.handle, scratch.size, 0);
 	dmabuf = prime_handle_to_fd(vgem, scratch.handle);
 	vgem_fence_attach(vgem, &scratch, flags | WIP_VGEM_FENCE_NOTIMEOUT);
 
@@ -801,7 +844,9 @@ static void test_fence_hang(int i915, int vgem, unsigned flags)
 	igt_assert(ptr != MAP_FAILED);
 	gem_close(vgem, scratch.handle);
 
-	work(i915, dmabuf, intel_ctx_0(i915), 0);
+	work(i915, ahnd, scratch_offset, dmabuf, intel_ctx_0(i915), 0);
+
+	put_ahnd(ahnd);
 
 	/* The work should have been cancelled */
 
@@ -1146,8 +1191,6 @@ igt_main
 		igt_subtest("basic-fence-blt")
 			test_fence_blt(i915, vgem);
 
-		test_each_engine("fence-wait", vgem, i915, test_fence_wait);
-
 		igt_subtest("basic-fence-flip")
 			test_flip(i915, vgem, 0);
 
@@ -1166,6 +1209,21 @@ igt_main
 		}
 	}
 
+	/* Fence testing, requires multiprocess allocator */
+	igt_subtest_group {
+		igt_fixture {
+			igt_require(vgem_has_fences(vgem));
+			intel_allocator_multiprocess_start();
+		}
+
+		test_each_engine("fence-wait", vgem, i915, test_fence_wait);
+
+		igt_fixture {
+			intel_allocator_multiprocess_stop();
+		}
+	}
+
+
 	igt_fixture {
 		close(i915);
 		close(vgem);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (5 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-04 23:42   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle Zbigniew Kempczyński
                   ` (46 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala, Chris Wilson

We now propose non-overlapping offsets to the kernel in both blitter copy
functions, so we can try to skip relocations.
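
A quick illustration of the flags computation (fragment, not part of the
diff; note that '|' binds tighter than '?:' in C, hence the parentheses):

	/* Wanted: pick the ring per gen, always request no-reloc. */
	exec.flags = (gen >= 6 ? I915_EXEC_BLT : 0) | I915_EXEC_NO_RELOC;

	/*
	 * Without the parentheses this would parse as
	 * gen >= 6 ? I915_EXEC_BLT : (0 | I915_EXEC_NO_RELOC),
	 * i.e. NO_RELOC would only be requested on gen < 6.
	 */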

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index d4a59e508..bbf8e0da2 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -711,7 +711,7 @@ static void exec_blit(int fd,
 	struct drm_i915_gem_execbuffer2 exec = {
 		.buffers_ptr = to_user_pointer(objs),
 		.buffer_count = count,
-		.flags = gen >= 6 ? I915_EXEC_BLT : 0,
+		.flags = (gen >= 6 ? I915_EXEC_BLT : 0) | I915_EXEC_NO_RELOC,
 		.rsvd1 = ctx,
 	};
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (6 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  0:31   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported Zbigniew Kempczyński
                   ` (45 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

To run gem_huc_copy on no-reloc platforms we need to pass the allocator
handle and the object sizes so that offsets can be properly acquired
from the allocator.
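
The caller side then looks roughly like this (abridged sketch of the
gem_huc_copy change further down; a zero handle keeps the relocation
path inside gen9_huc_copyfunc()):

	uint64_t ahnd = get_reloc_ahnd(drm_fd, 0);	/* 0 on reloc-capable gens */
	uint64_t objsize[3] = { HUC_COPY_DATA_BUF_SIZE,
				HUC_COPY_DATA_BUF_SIZE,
				4096 };

	huc_copy(drm_fd, ahnd, obj, objsize);		/* obj: src, dst, batch */
	put_ahnd(ahnd);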

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 lib/huc_copy.c            | 27 +++++++++++++++++++++++----
 lib/huc_copy.h            |  4 ++--
 lib/intel_batchbuffer.h   |  6 ++++--
 tests/i915/gem_huc_copy.c | 12 ++++++++++--
 4 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/lib/huc_copy.c b/lib/huc_copy.c
index bc98b1f9f..6ec68864b 100644
--- a/lib/huc_copy.c
+++ b/lib/huc_copy.c
@@ -23,7 +23,9 @@
  */
 
 #include <i915_drm.h>
+#include "drmtest.h"
 #include "huc_copy.h"
+#include "intel_allocator.h"
 
 static void
 gen9_emit_huc_virtual_addr_state(struct drm_i915_gem_exec_object2 *src,
@@ -40,6 +42,7 @@ gen9_emit_huc_virtual_addr_state(struct drm_i915_gem_exec_object2 *src,
 			buf[(*i)++] = src->offset;
 
 			reloc_src->target_handle = src->handle;
+			reloc_src->presumed_offset = src->offset;
 			reloc_src->delta = 0;
 			reloc_src->offset = (*i - 1) * sizeof(buf[0]);
 			reloc_src->read_domains = 0;
@@ -48,6 +51,7 @@ gen9_emit_huc_virtual_addr_state(struct drm_i915_gem_exec_object2 *src,
 			buf[(*i)++] = dst->offset;
 
 			reloc_dst->target_handle = dst->handle;
+			reloc_dst->presumed_offset = dst->offset;
 			reloc_dst->delta = 0;
 			reloc_dst->offset = (*i - 1) * sizeof(buf[0]);
 			reloc_dst->read_domains = 0;
@@ -61,8 +65,8 @@ gen9_emit_huc_virtual_addr_state(struct drm_i915_gem_exec_object2 *src,
 }
 
 void
-gen9_huc_copyfunc(int fd,
-		struct drm_i915_gem_exec_object2 *obj)
+gen9_huc_copyfunc(int fd, uint64_t ahnd,
+		  struct drm_i915_gem_exec_object2 *obj, uint64_t *objsize)
 {
 	struct drm_i915_gem_relocation_entry reloc[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -86,6 +90,21 @@ gen9_huc_copyfunc(int fd,
 	buf[i++] = MFX_WAIT;
 
 	memset(reloc, 0, sizeof(reloc));
+
+	if (ahnd) {
+		obj[0].flags = EXEC_OBJECT_PINNED;
+		obj[1].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[2].flags = EXEC_OBJECT_PINNED;
+		obj[0].offset = get_offset(ahnd, obj[0].handle, objsize[0], 0);
+		obj[1].offset = get_offset(ahnd, obj[1].handle, objsize[1], 0);
+		obj[2].offset = get_offset(ahnd, obj[2].handle, objsize[2], 0);
+	} else {
+		obj[0].offset = 1 << 20;
+		obj[1].offset = ALIGN(obj[0].offset + objsize[0], 1 << 20);
+		obj[2].offset = ALIGN(obj[1].offset + objsize[1], 1 << 20);
+		obj[1].flags = EXEC_OBJECT_WRITE;
+	}
+
 	gen9_emit_huc_virtual_addr_state(&obj[0], &obj[1], &reloc[0], &reloc[1], buf, &i);
 
 	buf[i++] = HUC_START;
@@ -94,13 +113,13 @@ gen9_huc_copyfunc(int fd,
 	buf[i++] = MI_BATCH_BUFFER_END;
 
 	gem_write(fd, obj[2].handle, 0, buf, sizeof(buf));
-	obj[2].relocation_count = 2;
+	obj[2].relocation_count = !ahnd ? 2 : 0;
 	obj[2].relocs_ptr = to_user_pointer(reloc);
 
 	memset(&execbuf, 0, sizeof(execbuf));
 	execbuf.buffers_ptr = to_user_pointer(obj);
 	execbuf.buffer_count = 3;
-	execbuf.flags = I915_EXEC_BSD;
+	execbuf.flags = I915_EXEC_BSD | I915_EXEC_NO_RELOC;
 
 	gem_execbuf(fd, &execbuf);
 }
diff --git a/lib/huc_copy.h b/lib/huc_copy.h
index ac31d8009..69d140933 100644
--- a/lib/huc_copy.h
+++ b/lib/huc_copy.h
@@ -43,7 +43,7 @@
 #define HUC_VIRTUAL_ADDR_REGION_DST	14
 
 void
-gen9_huc_copyfunc(int fd,
-		struct drm_i915_gem_exec_object2 *obj);
+gen9_huc_copyfunc(int fd, uint64_t ahnd,
+		  struct drm_i915_gem_exec_object2 *obj, uint64_t *objsize);
 
 #endif /* HUC_COPY_H */
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index c1974fe73..0839d7612 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -679,10 +679,12 @@ void intel_bb_copy_intel_buf(struct intel_bb *ibb,
 /**
  * igt_huc_copyfunc_t:
  * @fd: drm fd
+ * @ahnd: allocator handle; if it is 0, relocations are used
  * @obj: drm_i915_gem_exec_object2 buffer array
  *       obj[0] is source buffer
  *       obj[1] is destination buffer
  *       obj[2] is execution buffer
+ * @objsize: buffer sizes corresponding to the @obj entries
  *
  * This is the type of the per-platform huc copy functions.
  *
@@ -690,8 +692,8 @@ void intel_bb_copy_intel_buf(struct intel_bb *ibb,
  * invoke the HuC Copy kernel to copy 4K bytes from the source buffer
  * to the destination buffer.
  */
-typedef void (*igt_huc_copyfunc_t)(int fd,
-		struct drm_i915_gem_exec_object2 *obj);
+typedef void (*igt_huc_copyfunc_t)(int fd, uint64_t ahnd,
+		struct drm_i915_gem_exec_object2 *obj, uint64_t *objsize);
 
 igt_huc_copyfunc_t	igt_get_huc_copyfunc(int devid);
 #endif
diff --git a/tests/i915/gem_huc_copy.c b/tests/i915/gem_huc_copy.c
index 9a32893ea..ea32b705a 100644
--- a/tests/i915/gem_huc_copy.c
+++ b/tests/i915/gem_huc_copy.c
@@ -89,6 +89,7 @@ igt_main
 	int drm_fd = -1;
 	uint32_t devid;
 	igt_huc_copyfunc_t huc_copy;
+	uint64_t ahnd;
 
 	igt_fixture {
 		drm_fd = drm_open_driver(DRIVER_INTEL);
@@ -97,6 +98,8 @@ igt_main
 		huc_copy = igt_get_huc_copyfunc(devid);
 
 		igt_require_f(huc_copy, "no huc_copy function\n");
+
+		ahnd = get_reloc_ahnd(drm_fd, 0);
 	}
 
 	igt_describe("Make sure that Huc firmware works"
@@ -106,6 +109,9 @@ igt_main
 	igt_subtest("huc-copy") {
 		char inputs[HUC_COPY_DATA_BUF_SIZE];
 		struct drm_i915_gem_exec_object2 obj[3];
+		uint64_t objsize[3] = { HUC_COPY_DATA_BUF_SIZE,
+					HUC_COPY_DATA_BUF_SIZE,
+					4096 };
 
 		test_huc_load(drm_fd);
 		/* Initialize src buffer randomly */
@@ -123,7 +129,7 @@ igt_main
 
 		gem_write(drm_fd, obj[0].handle, 0, inputs, HUC_COPY_DATA_BUF_SIZE);
 
-		huc_copy(drm_fd, obj);
+		huc_copy(drm_fd, ahnd, obj, objsize);
 		compare_huc_copy_result(drm_fd, obj[0].handle, obj[1].handle);
 
 		gem_close(drm_fd, obj[0].handle);
@@ -131,6 +137,8 @@ igt_main
 		gem_close(drm_fd, obj[2].handle);
 	}
 
-	igt_fixture
+	igt_fixture {
+		put_ahnd(ahnd);
 		close(drm_fd);
+	}
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (7 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  0:33   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator Zbigniew Kempczyński
                   ` (44 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_bad_reloc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/i915/gem_bad_reloc.c b/tests/i915/gem_bad_reloc.c
index 3ca0f3452..0e9c4c79c 100644
--- a/tests/i915/gem_bad_reloc.c
+++ b/tests/i915/gem_bad_reloc.c
@@ -195,6 +195,7 @@ igt_main
 		/* Check if relocations supported by platform */
 		igt_require(gem_has_relocations(fd));
 		gem_require_blitter(fd);
+		igt_require(gem_has_relocations(fd));
 	}
 
 	igt_subtest("negative-reloc")
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (8 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  2:07   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 11/52] tests/gem_create: " Zbigniew Kempczyński
                   ` (43 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
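
The conversion follows the pattern used throughout the series (sketch,
names as in the diff below):

	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);	/* 0 => keep using relocs */
	igt_spin_t *spin = igt_spin_new(fd,
					.ahnd = ahnd,
					.ctx = ctx,
					.engine = e->flags);

	/* ... exercise the busy ioctl against the spinner ... */

	igt_spin_free(fd, spin);
	put_ahnd(ahnd);		/* release offsets reserved under this handle */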

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_busy.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
index f0fca0e8a..51ec5ad04 100644
--- a/tests/i915/gem_busy.c
+++ b/tests/i915/gem_busy.c
@@ -108,6 +108,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
 	uint32_t handle[3];
 	uint32_t read, write;
 	uint32_t active;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	unsigned i;
 
 	handle[TEST] = gem_create(fd, 4096);
@@ -117,6 +118,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
 	/* Create a long running batch which we can use to hog the GPU */
 	handle[BUSY] = gem_create(fd, 4096);
 	spin = igt_spin_new(fd,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .dependency = handle[BUSY]);
@@ -171,8 +173,10 @@ static void one(int fd, const intel_ctx_t *ctx,
 	struct timespec tv;
 	igt_spin_t *spin;
 	int timeout;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 
 	spin = igt_spin_new(fd,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .dependency = scratch,
@@ -225,6 +229,7 @@ static void one(int fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(fd, spin);
 	gem_close(fd, scratch);
+	put_ahnd(ahnd);
 }
 
 static void xchg_u32(void *array, unsigned i, unsigned j)
@@ -298,11 +303,13 @@ static void close_race(int fd, const intel_ctx_t *ctx)
 		struct sched_param rt = {.sched_priority = 99 };
 		igt_spin_t *spin[nhandles];
 		unsigned long count = 0;
+		uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 
 		igt_assert(sched_setscheduler(getpid(), SCHED_RR, &rt) == 0);
 
 		for (i = 0; i < nhandles; i++) {
 			spin[i] = __igt_spin_new(fd,
+						 .ahnd = ahnd,
 						 .ctx = ctx,
 						 .engine = engines[rand() % nengine]);
 			handles[i] = spin[i]->handle;
@@ -312,6 +319,7 @@ static void close_race(int fd, const intel_ctx_t *ctx)
 			for (i = 0; i < nhandles; i++) {
 				igt_spin_free(fd, spin[i]);
 				spin[i] = __igt_spin_new(fd,
+							 .ahnd = ahnd,
 							 .ctx = ctx,
 							 .engine = engines[rand() % nengine]);
 				handles[i] = spin[i]->handle;
@@ -324,6 +332,7 @@ static void close_race(int fd, const intel_ctx_t *ctx)
 
 		for (i = 0; i < nhandles; i++)
 			igt_spin_free(fd, spin[i]);
+		put_ahnd(ahnd);
 	}
 	igt_waitchildren();
 
@@ -355,11 +364,13 @@ static bool has_semaphores(int fd)
 
 static bool has_extended_busy_ioctl(int fd)
 {
-	igt_spin_t *spin = igt_spin_new(fd, .engine = I915_EXEC_DEFAULT);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd, .engine = I915_EXEC_DEFAULT);
 	uint32_t read, write;
 
 	__gem_busy(fd, spin->handle, &read, &write);
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 
 	return read != 0;
 }
@@ -367,8 +378,10 @@ static bool has_extended_busy_ioctl(int fd)
 static void basic(int fd, const intel_ctx_t *ctx,
 		  const struct intel_execution_engine2 *e, unsigned flags)
 {
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	igt_spin_t *spin =
 		igt_spin_new(fd,
+			     .ahnd = ahnd,
 			     .ctx = ctx,
 			     .engine = e->flags,
 			     .flags = flags & HANG ?
@@ -394,6 +407,7 @@ static void basic(int fd, const intel_ctx_t *ctx,
 	}
 
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static void all(int i915, const intel_ctx_t *ctx)
@@ -428,6 +442,7 @@ igt_main
 
 	igt_subtest_group {
 		igt_fixture {
+			intel_allocator_multiprocess_start();
 			igt_fork_hang_detector(fd);
 		}
 
@@ -445,6 +460,21 @@ igt_main
 			}
 		}
 
+		igt_subtest("close-race")
+			close_race(fd, ctx);
+
+		igt_fixture {
+			igt_stop_hang_detector();
+			intel_allocator_multiprocess_stop();
+		}
+	}
+
+
+	igt_subtest_group {
+		igt_fixture {
+			igt_fork_hang_detector(fd);
+		}
+
 		igt_subtest_group {
 			igt_fixture {
 				igt_require(has_extended_busy_ioctl(fd));
@@ -477,9 +507,6 @@ igt_main
 			}
 		}
 
-		igt_subtest("close-race")
-			close_race(fd, ctx);
-
 		igt_fixture {
 			igt_stop_hang_detector();
 		}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 11/52] tests/gem_create: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (9 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  2:14   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 12/52] tests/gem_ctx_engines: " Zbigniew Kempczyński
                   ` (42 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_create.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tests/i915/gem_create.c b/tests/i915/gem_create.c
index 1acf8ee6a..45804cde0 100644
--- a/tests/i915/gem_create.c
+++ b/tests/i915/gem_create.c
@@ -249,12 +249,16 @@ static void busy_create(int i915, int timeout)
 	const intel_ctx_t *ctx;
 	igt_spin_t *spin[I915_EXEC_RING_MASK + 1];
 	unsigned long count = 0;
+	uint64_t ahnd;
 
 	ctx = intel_ctx_create_all_physical(i915);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	igt_fork_hang_detector(i915);
 	for_each_ctx_engine(i915, ctx, e)
-		spin[e->flags] = igt_spin_new(i915, .ctx = ctx,
+		spin[e->flags] = igt_spin_new(i915,
+					      .ahnd = ahnd,
+					      .ctx = ctx,
 					      .engine = e->flags);
 
 	igt_until_timeout(timeout) {
@@ -263,7 +267,9 @@ static void busy_create(int i915, int timeout)
 			igt_spin_t *next;
 
 			handle = gem_create(i915, 4096);
-			next = igt_spin_new(i915, .ctx = ctx,
+			next = igt_spin_new(i915,
+					    .ahnd = ahnd,
+					    .ctx = ctx,
 					    .engine = e->flags,
 					    .dependency = handle,
 					    .flags = IGT_SPIN_SOFTDEP);
@@ -277,6 +283,7 @@ static void busy_create(int i915, int timeout)
 	}
 
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 
 	igt_info("Created %ld objects while busy\n", count);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 12/52] tests/gem_ctx_engines: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (10 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 11/52] tests/gem_create: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  2:40   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: " Zbigniew Kempczyński
                   ` (41 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_ctx_engines.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/tests/i915/gem_ctx_engines.c b/tests/i915/gem_ctx_engines.c
index bfa83f7e5..003dd1713 100644
--- a/tests/i915/gem_ctx_engines.c
+++ b/tests/i915/gem_ctx_engines.c
@@ -69,6 +69,7 @@ static void invalid_engines(int i915)
 	uint32_t handle;
 	igt_spin_t *spin;
 	void *ptr;
+	uint64_t ahnd;
 
 	param.size = 0;
 	igt_assert_eq(__set_param_fresh_context(i915, param), -EINVAL);
@@ -180,8 +181,10 @@ static void invalid_engines(int i915)
 
 	/* Test that we can't set engines after we've done an execbuf */
 	param.ctx_id = gem_context_create(i915);
-	spin = igt_spin_new(i915, .ctx_id = param.ctx_id);
+	ahnd = get_reloc_ahnd(i915, param.ctx_id);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx_id = param.ctx_id);
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 	igt_assert_eq(__gem_context_set_param(i915, &param), -EINVAL);
 	gem_context_destroy(i915, param.ctx_id);
 
@@ -283,14 +286,18 @@ static void execute_one(int i915)
 		for (int i = -1; i <= I915_EXEC_RING_MASK; i++) {
 			intel_ctx_cfg_t cfg = {};
 			const intel_ctx_t *ctx;
+			uint64_t ahnd;
 			igt_spin_t *spin;
 
 			cfg.num_engines = 1;
 			cfg.engines[0].engine_class = e->class;
 			cfg.engines[0].engine_instance = e->instance;
 			ctx = intel_ctx_create(i915, &cfg);
+			ahnd = get_reloc_ahnd(i915, ctx->id);
 
-			spin = igt_spin_new(i915, .ctx = ctx,
+			spin = igt_spin_new(i915,
+					    .ahnd = ahnd,
+					    .ctx = ctx,
 					    .flags = (IGT_SPIN_NO_PREEMPTION |
 						      IGT_SPIN_POLL_RUN));
 
@@ -324,6 +331,7 @@ static void execute_one(int i915)
 				      i != -1 ? 1 << e->class : 0);
 
 			igt_spin_free(i915, spin);
+			put_ahnd(ahnd);
 
 			gem_sync(i915, obj.handle);
 			intel_ctx_destroy(i915, ctx);
@@ -344,9 +352,11 @@ static void execute_oneforall(int i915)
 		.size = sizeof(engines),
 	};
 	const struct intel_execution_engine2 *e;
+	uint64_t ahnd;
 
 	for_each_physical_engine(i915, e) {
 		param.ctx_id = gem_context_create(i915);
+		ahnd = get_reloc_ahnd(i915, param.ctx_id);
 
 		memset(&engines, 0, sizeof(engines));
 		for (int i = 0; i <= I915_EXEC_RING_MASK; i++) {
@@ -360,6 +370,7 @@ static void execute_oneforall(int i915)
 			igt_spin_t *spin;
 
 			spin = __igt_spin_new(i915,
+					      .ahnd = ahnd,
 					      .ctx_id = param.ctx_id,
 					      .engine = i);
 
@@ -371,6 +382,7 @@ static void execute_oneforall(int i915)
 		}
 
 		gem_context_destroy(i915, param.ctx_id);
+		put_ahnd(ahnd);
 	}
 }
 
@@ -384,6 +396,7 @@ static void execute_allforone(int i915)
 	};
 	const struct intel_execution_engine2 *e;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(i915, param.ctx_id);
 
 	i = 0;
 	memset(&engines, 0, sizeof(engines));
@@ -401,6 +414,7 @@ static void execute_allforone(int i915)
 		igt_spin_t *spin;
 
 		spin = __igt_spin_new(i915,
+				      .ahnd = ahnd,
 				      .ctx_id = param.ctx_id,
 				      .engine = i++);
 
@@ -412,6 +426,7 @@ static void execute_allforone(int i915)
 	}
 
 	gem_context_destroy(i915, param.ctx_id);
+	put_ahnd(ahnd);
 }
 
 static uint32_t read_result(int timeline, uint32_t *map, int idx)
@@ -539,6 +554,7 @@ static void independent_all(int i915, const intel_ctx_t *ctx)
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin = NULL;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	for_each_ctx_engine(i915, ctx, e) {
 		if (spin) {
@@ -546,7 +562,9 @@ static void independent_all(int i915, const intel_ctx_t *ctx)
 			spin->execbuf.flags |= e->flags;
 			gem_execbuf(i915, &spin->execbuf);
 		} else {
-			spin = igt_spin_new(i915, .ctx = ctx,
+			spin = igt_spin_new(i915,
+					    .ahnd = ahnd,
+					    .ctx = ctx,
 					    .engine = e->flags,
 					    .flags = (IGT_SPIN_NO_PREEMPTION |
 						      IGT_SPIN_POLL_RUN));
@@ -567,6 +585,7 @@ static void independent_all(int i915, const intel_ctx_t *ctx)
 	}
 	sched_yield();
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 	igt_waitchildren();
 }
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (11 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 12/52] tests/gem_ctx_engines: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  3:06   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 14/52] tests/gem_ctx_freq: " Zbigniew Kempczyński
                   ` (40 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
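
Because basic-close-race forks, the allocator has to run in multiprocess
mode and every child opens its own handle; the shape of the change is
(abridged from the diff below):

	igt_fixture
		intel_allocator_multiprocess_start();	/* serve forked children */

	igt_subtest("basic-close-race")
		close_race(fd);		/* children call get_reloc_ahnd() themselves */

	igt_fixture
		intel_allocator_multiprocess_stop();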

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_ctx_exec.c | 45 +++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ctx_exec.c
index 4d3d1c12f..5691546c9 100644
--- a/tests/i915/gem_ctx_exec.c
+++ b/tests/i915/gem_ctx_exec.c
@@ -178,6 +178,7 @@ static void norecovery(int i915)
 		};
 		int expect = pass == 0 ? -EIO : 0;
 		igt_spin_t *spin;
+		uint64_t ahnd = get_reloc_ahnd(i915, param.ctx_id);
 
 		gem_context_set_param(i915, &param);
 
@@ -185,7 +186,9 @@ static void norecovery(int i915)
 		gem_context_get_param(i915, &param);
 		igt_assert_eq(param.value, pass);
 
-		spin = __igt_spin_new(i915, .ctx = ctx,
+		spin = __igt_spin_new(i915,
+				      .ahnd = ahnd,
+				      .ctx = ctx,
 				      .flags = IGT_SPIN_POLL_RUN);
 		igt_spin_busywait_until_started(spin);
 
@@ -196,6 +199,7 @@ static void norecovery(int i915)
 		igt_spin_free(i915, spin);
 
 		intel_ctx_destroy(i915, ctx);
+		put_ahnd(ahnd);
 	}
 
 	 igt_disallow_hang(i915, hang);
@@ -271,6 +275,7 @@ static void nohangcheck_hostile(int i915)
 	const intel_ctx_t *ctx;
 	int err = 0;
 	int dir;
+	uint64_t ahnd;
 
 	/*
 	 * Even if the user disables hangcheck during their context,
@@ -284,6 +289,7 @@ static void nohangcheck_hostile(int i915)
 
 	ctx = intel_ctx_create_all_physical(i915);
 	hang = igt_allow_hang(i915, ctx->id, 0);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	igt_require(__enable_hangcheck(dir, false));
 
@@ -295,7 +301,9 @@ static void nohangcheck_hostile(int i915)
 		gem_engine_property_printf(i915, e->name,
 					   "preempt_timeout_ms", "%d", 50);
 
-		spin = __igt_spin_new(i915, .ctx = ctx,
+		spin = __igt_spin_new(i915,
+				      .ahnd = ahnd,
+				      .ctx = ctx,
 				      .engine = e->flags,
 				      .flags = (IGT_SPIN_NO_PREEMPTION |
 						IGT_SPIN_FENCE_OUT));
@@ -333,6 +341,7 @@ static void nohangcheck_hostile(int i915)
 
 	igt_assert_eq(sync_fence_status(fence), -EIO);
 	close(fence);
+	put_ahnd(ahnd);
 
 	close(dir);
 	close(i915);
@@ -345,10 +354,14 @@ static void close_race(int i915)
 	const intel_ctx_t **ctx;
 	uint32_t *ctx_id;
 	igt_spin_t *spin;
+	uint64_t ahnd;
 
 	/* Check we can execute a polling spinner */
 	base_ctx = intel_ctx_create(i915, NULL);
-	igt_spin_free(i915, igt_spin_new(i915, .ctx = base_ctx,
+	ahnd = get_reloc_ahnd(i915, base_ctx->id);
+	igt_spin_free(i915, igt_spin_new(i915,
+					 .ahnd = ahnd,
+					 .ctx = base_ctx,
 					 .flags = IGT_SPIN_POLL_RUN));
 
 	ctx = calloc(ncpus, sizeof(*ctx));
@@ -361,7 +374,10 @@ static void close_race(int i915)
 	}
 
 	igt_fork(child, ncpus) {
-		spin = __igt_spin_new(i915, .ctx = base_ctx,
+		ahnd = get_reloc_ahnd(i915, base_ctx->id);
+		spin = __igt_spin_new(i915,
+				      .ahnd = ahnd,
+				      .ctx = base_ctx,
 				      .flags = IGT_SPIN_POLL_RUN);
 		igt_spin_end(spin);
 		gem_sync(i915, spin->handle);
@@ -403,6 +419,7 @@ static void close_race(int i915)
 		}
 
 		igt_spin_free(i915, spin);
+		put_ahnd(ahnd);
 	}
 
 	igt_until_timeout(5) {
@@ -474,11 +491,22 @@ igt_main
 	igt_subtest("basic-nohangcheck")
 		nohangcheck_hostile(fd);
 
-	igt_subtest("basic-close-race")
-		close_race(fd);
+	igt_subtest_group {
+		igt_fixture {
+			intel_allocator_multiprocess_start();
+		}
+
+		igt_subtest("basic-close-race")
+			close_race(fd);
+
+		igt_fixture {
+			intel_allocator_multiprocess_stop();
+		}
+	}
 
 	igt_subtest("reset-pin-leak") {
 		int i;
+		uint64_t ahnd;
 
 		/*
 		 * Use an explicit context to isolate the test from
@@ -486,6 +514,7 @@ igt_main
 		 * default context (eg. if they would be eliminated).
 		 */
 		ctx_id = gem_context_create(fd);
+		ahnd = get_reloc_ahnd(fd, ctx_id);
 
 		/*
 		 * Iterate enough times that the kernel will
@@ -493,7 +522,8 @@ igt_main
 		 * the last context is leaked at every reset.
 		 */
 		for (i = 0; i < 20; i++) {
-			igt_hang_t hang = igt_hang_ring(fd, 0);
+
+			igt_hang_t hang = igt_hang_ring_with_ahnd(fd, 0, ahnd);
 
 			igt_assert_eq(exec(fd, handle, 0, 0), 0);
 			igt_assert_eq(exec(fd, handle, 0, ctx_id), 0);
@@ -501,5 +531,6 @@ igt_main
 		}
 
 		gem_context_destroy(fd, ctx_id);
+		put_ahnd(ahnd);
 	}
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 14/52] tests/gem_ctx_freq: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (12 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  6:07   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 15/52] tests/gem_ctx_isolation: " Zbigniew Kempczyński
                   ` (39 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_ctx_freq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_ctx_freq.c b/tests/i915/gem_ctx_freq.c
index a34472de5..a29fe68b7 100644
--- a/tests/i915/gem_ctx_freq.c
+++ b/tests/i915/gem_ctx_freq.c
@@ -124,6 +124,7 @@ static void sysfs_range(int i915)
 	igt_spin_t *spin;
 	double measured;
 	int pmu;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * The sysfs interface sets the global limits and overrides the
@@ -145,7 +146,7 @@ static void sysfs_range(int i915)
 		uint32_t cur, discard;
 
 		gem_quiescent_gpu(i915);
-		spin = igt_spin_new(i915);
+		spin = igt_spin_new(i915, .ahnd = ahnd);
 		usleep(10000);
 
 		set_sysfs_freq(sys_freq, sys_freq);
@@ -164,6 +165,7 @@ static void sysfs_range(int i915)
 	gem_quiescent_gpu(i915);
 
 	close(pmu);
+	put_ahnd(ahnd);
 
 #undef N_STEPS
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 15/52] tests/gem_ctx_isolation: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (13 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 14/52] tests/gem_ctx_freq: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: " Zbigniew Kempczyński
                   ` (38 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
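
Since the register read/write batches are assembled by hand here, the
no-reloc path pins both objects at allocator-provided offsets and emits
the full address instead of a zero placeholder (abridged from the
read_regs() change below):

	if (ahnd) {
		obj[0].offset = get_offset(ahnd, obj[0].handle, regs_size, 0);
		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
		obj[1].offset = get_offset(ahnd, obj[1].handle, batch_size, 0);
		obj[1].flags |= EXEC_OBJECT_PINNED;
	}
	...
	*b++ = obj[0].offset + offset;
	if (r64b)
		*b++ = (obj[0].offset + offset) >> 32;
	...
	obj[1].relocation_count = !ahnd ? n : 0;	/* no relocs when pinned */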

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_ctx_isolation.c | 97 +++++++++++++++++++++++++---------
 1 file changed, 73 insertions(+), 24 deletions(-)

diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c
index 24ddde0bc..c57e05079 100644
--- a/tests/i915/gem_ctx_isolation.c
+++ b/tests/i915/gem_ctx_isolation.c
@@ -277,6 +277,7 @@ static void tmpl_regs(int fd,
 }
 
 static uint32_t read_regs(int fd,
+			  uint64_t ahnd,
 			  const intel_ctx_t *ctx,
 			  const struct intel_execution_engine2 *e,
 			  unsigned int flags)
@@ -305,6 +306,12 @@ static uint32_t read_regs(int fd,
 	obj[0].handle = gem_create(fd, regs_size);
 	obj[1].handle = gem_create(fd, batch_size);
 	obj[1].relocs_ptr = to_user_pointer(reloc);
+	if (ahnd) {
+		obj[0].offset = get_offset(ahnd, obj[0].handle, regs_size, 0);
+		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[1].offset = get_offset(ahnd, obj[1].handle, batch_size, 0);
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	b = batch = gem_mmap__cpu(fd, obj[1].handle, 0, batch_size, PROT_WRITE);
 	gem_set_domain(fd, obj[1].handle,
@@ -334,14 +341,14 @@ static uint32_t read_regs(int fd,
 			reloc[n].delta = offset;
 			reloc[n].read_domains = I915_GEM_DOMAIN_RENDER;
 			reloc[n].write_domain = I915_GEM_DOMAIN_RENDER;
-			*b++ = offset;
+			*b++ = obj[0].offset + offset;
 			if (r64b)
-				*b++ = 0;
+				*b++ = (obj[0].offset + offset) >> 32;
 			n++;
 		}
 	}
 
-	obj[1].relocation_count = n;
+	obj[1].relocation_count = !ahnd ? n : 0;
 	*b++ = MI_BATCH_BUFFER_END;
 	munmap(batch, batch_size);
 
@@ -353,6 +360,7 @@ static uint32_t read_regs(int fd,
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[1].handle);
 	free(reloc);
+	put_offset(ahnd, obj[1].handle);
 
 	return obj[0].handle;
 }
@@ -419,6 +427,7 @@ static void write_regs(int fd,
 }
 
 static void restore_regs(int fd,
+			 uint64_t ahnd,
 			 const intel_ctx_t *ctx,
 			 const struct intel_execution_engine2 *e,
 			 unsigned int flags,
@@ -434,6 +443,7 @@ static void restore_regs(int fd,
 	struct drm_i915_gem_relocation_entry *reloc;
 	unsigned int batch_size, n;
 	uint32_t *batch, *b;
+	uint32_t regs_size = NUM_REGS * sizeof(uint32_t);
 
 	if (gen < 7) /* no LRM */
 		return;
@@ -448,6 +458,12 @@ static void restore_regs(int fd,
 	obj[0].handle = regs;
 	obj[1].handle = gem_create(fd, batch_size);
 	obj[1].relocs_ptr = to_user_pointer(reloc);
+	if (ahnd) {
+		obj[0].offset = get_offset(ahnd, obj[0].handle, regs_size, 0);
+		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[1].offset = get_offset(ahnd, obj[1].handle, batch_size, 0);
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	b = batch = gem_mmap__cpu(fd, obj[1].handle, 0, batch_size, PROT_WRITE);
 	gem_set_domain(fd, obj[1].handle,
@@ -477,13 +493,13 @@ static void restore_regs(int fd,
 			reloc[n].delta = offset;
 			reloc[n].read_domains = I915_GEM_DOMAIN_RENDER;
 			reloc[n].write_domain = 0;
-			*b++ = offset;
+			*b++ = obj[0].offset + offset;
 			if (r64b)
-				*b++ = 0;
+				*b++ = (obj[0].offset + offset) >> 32;
 			n++;
 		}
 	}
-	obj[1].relocation_count = n;
+	obj[1].relocation_count = !ahnd ? n : 0;
 	*b++ = MI_BATCH_BUFFER_END;
 	munmap(batch, batch_size);
 
@@ -494,6 +510,7 @@ static void restore_regs(int fd,
 	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[1].handle);
+	put_offset(ahnd, obj[1].handle);
 }
 
 __attribute__((unused))
@@ -622,15 +639,20 @@ static void nonpriv(int fd, const intel_ctx_cfg_t *cfg,
 		igt_spin_t *spin = NULL;
 		const intel_ctx_t *ctx;
 		uint32_t regs[2], tmpl;
+		uint64_t ahnd;
 
 		ctx = intel_ctx_create(fd, cfg);
+		ahnd = get_reloc_ahnd(fd, ctx->id);
 
-		tmpl = read_regs(fd, ctx, e, flags);
-		regs[0] = read_regs(fd, ctx, e, flags);
+		tmpl = read_regs(fd, ahnd, ctx, e, flags);
+		regs[0] = read_regs(fd, ahnd, ctx, e, flags);
 
 		tmpl_regs(fd, e, tmpl, values[v]);
 
-		spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
+		spin = igt_spin_new(fd,
+				    .ahnd = ahnd,
+				    .ctx = ctx,
+				    .engine = e->flags);
 
 		igt_debug("%s[%d]: Setting all registers to 0x%08x\n",
 			  __func__, v, values[v]);
@@ -638,15 +660,18 @@ static void nonpriv(int fd, const intel_ctx_cfg_t *cfg,
 
 		if (flags & DIRTY2) {
 			const intel_ctx_t *sw = intel_ctx_create(fd, &ctx->cfg);
+			uint64_t ahnd_sw = get_reloc_ahnd(fd, sw->id);
 			igt_spin_t *syncpt, *dirt;
 
 			/* Explicit sync to keep the switch between write/read */
 			syncpt = igt_spin_new(fd,
+					      .ahnd = ahnd,
 					      .ctx = ctx,
 					      .engine = e->flags,
 					      .flags = IGT_SPIN_FENCE_OUT);
 
 			dirt = igt_spin_new(fd,
+					    .ahnd = ahnd_sw,
 					    .ctx = sw,
 					    .engine = e->flags,
 					    .fence = syncpt->out_fence,
@@ -655,6 +680,7 @@ static void nonpriv(int fd, const intel_ctx_cfg_t *cfg,
 			igt_spin_free(fd, syncpt);
 
 			syncpt = igt_spin_new(fd,
+					      .ahnd = ahnd,
 					      .ctx = ctx,
 					      .engine = e->flags,
 					      .fence = dirt->out_fence,
@@ -663,15 +689,16 @@ static void nonpriv(int fd, const intel_ctx_cfg_t *cfg,
 
 			igt_spin_free(fd, syncpt);
 			intel_ctx_destroy(fd, sw);
+			put_ahnd(ahnd_sw);
 		}
 
-		regs[1] = read_regs(fd, ctx, e, flags);
+		regs[1] = read_regs(fd, ahnd, ctx, e, flags);
 
 		/*
 		 * Restore the original register values before the HW idles.
 		 * Or else it may never restart!
 		 */
-		restore_regs(fd, ctx, e, flags, regs[0]);
+		restore_regs(fd, ahnd, ctx, e, flags, regs[0]);
 
 		igt_spin_free(fd, spin);
 
@@ -681,6 +708,7 @@ static void nonpriv(int fd, const intel_ctx_cfg_t *cfg,
 			gem_close(fd, regs[n]);
 		intel_ctx_destroy(fd, ctx);
 		gem_close(fd, tmpl);
+		put_ahnd(ahnd);
 	}
 }
 
@@ -706,11 +734,16 @@ static void isolation(int fd, const intel_ctx_cfg_t *cfg,
 		igt_spin_t *spin = NULL;
 		const intel_ctx_t *ctx[2];
 		uint32_t regs[2], tmp;
+		uint64_t ahnd[2];
 
 		ctx[0] = intel_ctx_create(fd, cfg);
-		regs[0] = read_regs(fd, ctx[0], e, flags);
+		ahnd[0] = get_reloc_ahnd(fd, ctx[0]->id);
+		regs[0] = read_regs(fd, ahnd[0], ctx[0], e, flags);
 
-		spin = igt_spin_new(fd, .ctx = ctx[0], .engine = e->flags);
+		spin = igt_spin_new(fd,
+				    .ahnd = ahnd[0],
+				    .ctx = ctx[0],
+				    .engine = e->flags);
 
 		if (flags & DIRTY1) {
 			igt_debug("%s[%d]: Setting all registers of ctx 0 to 0x%08x\n",
@@ -727,7 +760,8 @@ static void isolation(int fd, const intel_ctx_cfg_t *cfg,
 		 * see the corruption from the previous context instead!
 		 */
 		ctx[1] = intel_ctx_create(fd, cfg);
-		regs[1] = read_regs(fd, ctx[1], e, flags);
+		ahnd[1] = get_reloc_ahnd(fd, ctx[1]->id);
+		regs[1] = read_regs(fd, ahnd[1], ctx[1], e, flags);
 
 		if (flags & DIRTY2) {
 			igt_debug("%s[%d]: Setting all registers of ctx 1 to 0x%08x\n",
@@ -739,8 +773,8 @@ static void isolation(int fd, const intel_ctx_cfg_t *cfg,
 		 * Restore the original register values before the HW idles.
 		 * Or else it may never restart!
 		 */
-		tmp = read_regs(fd, ctx[0], e, flags);
-		restore_regs(fd, ctx[0], e, flags, regs[0]);
+		tmp = read_regs(fd, ahnd[0], ctx[0], e, flags);
+		restore_regs(fd, ahnd[0], ctx[0], e, flags, regs[0]);
 
 		igt_spin_free(fd, spin);
 
@@ -752,6 +786,7 @@ static void isolation(int fd, const intel_ctx_cfg_t *cfg,
 		for (int n = 0; n < ARRAY_SIZE(ctx); n++) {
 			gem_close(fd, regs[n]);
 			intel_ctx_destroy(fd, ctx[n]);
+			put_ahnd(ahnd[n]);
 		}
 		gem_close(fd, tmp);
 	}
@@ -781,7 +816,9 @@ static void inject_reset_context(int fd, const intel_ctx_cfg_t *cfg,
 				 const struct intel_execution_engine2 *e)
 {
 	const intel_ctx_t *ctx = create_reset_context(fd, cfg);
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	struct igt_spin_factory opts = {
+		.ahnd = ahnd,
 		.ctx = ctx,
 		.engine = e->flags,
 		.flags = IGT_SPIN_FAST,
@@ -826,21 +863,27 @@ static void preservation(int fd, const intel_ctx_cfg_t *cfg,
 	const unsigned int num_values = ARRAY_SIZE(values);
 	const intel_ctx_t *ctx[num_values + 1];
 	uint32_t regs[num_values + 1][2];
+	uint64_t ahnd[num_values + 1];
 	igt_spin_t *spin;
 
 	gem_quiescent_gpu(fd);
 
 	ctx[num_values] = intel_ctx_create(fd, cfg);
-	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = e->flags);
-	regs[num_values][0] = read_regs(fd, ctx[num_values], e, flags);
+	ahnd[num_values] = get_reloc_ahnd(fd, ctx[num_values]->id);
+	spin = igt_spin_new(fd,
+			    .ahnd = ahnd[num_values],
+			    .ctx = ctx[num_values],
+			    .engine = e->flags);
+	regs[num_values][0] = read_regs(fd, ahnd[num_values], ctx[num_values],
+					e, flags);
 	for (int v = 0; v < num_values; v++) {
 		ctx[v] = intel_ctx_create(fd, cfg);
+		ahnd[v] = get_reloc_ahnd(fd, ctx[v]->id);
 		write_regs(fd, ctx[v], e, flags, values[v]);
 
-		regs[v][0] = read_regs(fd, ctx[v], e, flags);
-
+		regs[v][0] = read_regs(fd, ahnd[v], ctx[v], e, flags);
 	}
-	gem_close(fd, read_regs(fd, ctx[num_values], e, flags));
+	gem_close(fd, read_regs(fd, ahnd[num_values], ctx[num_values], e, flags));
 	igt_spin_free(fd, spin);
 
 	if (flags & RESET)
@@ -871,10 +914,14 @@ static void preservation(int fd, const intel_ctx_cfg_t *cfg,
 		break;
 	}
 
-	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = e->flags);
+	spin = igt_spin_new(fd,
+			    .ahnd = ahnd[num_values],
+			    .ctx = ctx[num_values],
+			    .engine = e->flags);
 	for (int v = 0; v < num_values; v++)
-		regs[v][1] = read_regs(fd, ctx[v], e, flags);
-	regs[num_values][1] = read_regs(fd, ctx[num_values], e, flags);
+		regs[v][1] = read_regs(fd, ahnd[v], ctx[v], e, flags);
+	regs[num_values][1] = read_regs(fd, ahnd[num_values], ctx[num_values],
+					e, flags);
 	igt_spin_free(fd, spin);
 
 	for (int v = 0; v < num_values; v++) {
@@ -886,9 +933,11 @@ static void preservation(int fd, const intel_ctx_cfg_t *cfg,
 		gem_close(fd, regs[v][0]);
 		gem_close(fd, regs[v][1]);
 		intel_ctx_destroy(fd, ctx[v]);
+		put_ahnd(ahnd[v]);
 	}
 	compare_regs(fd, e, regs[num_values][0], regs[num_values][1], "clean");
 	intel_ctx_destroy(fd, ctx[num_values]);
+	put_ahnd(ahnd[num_values]);
 }
 
 static unsigned int __has_context_isolation(int fd)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (14 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 15/52] tests/gem_ctx_isolation: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05  7:18   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: " Zbigniew Kempczyński
                   ` (37 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_ctx_param.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/tests/i915/gem_ctx_param.c b/tests/i915/gem_ctx_param.c
index c795f1b45..11bc08e36 100644
--- a/tests/i915/gem_ctx_param.c
+++ b/tests/i915/gem_ctx_param.c
@@ -165,6 +165,7 @@ static void test_vm(int i915)
 	int err;
 	uint32_t parent, child;
 	igt_spin_t *spin;
+	uint64_t ahnd;
 
 	/*
 	 * Proving 2 contexts share the same GTT is quite tricky as we have no
@@ -190,7 +191,8 @@ static void test_vm(int i915)
 
 	/* Test that we can't set the VM after we've done an execbuf */
 	arg.ctx_id = gem_context_create(i915);
-	spin = igt_spin_new(i915, .ctx_id = arg.ctx_id);
+	ahnd = get_reloc_ahnd(i915, arg.ctx_id);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx_id = arg.ctx_id);
 	igt_spin_free(i915, spin);
 	arg.value = gem_vm_create(i915);
 	err = __gem_context_set_param(i915, &arg);
@@ -202,7 +204,7 @@ static void test_vm(int i915)
 	child = gem_context_create(i915);
 
 	/* Create a background spinner to keep the engines busy */
-	spin = igt_spin_new(i915);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 	for (int i = 0; i < 16; i++) {
 		spin->execbuf.rsvd1 = gem_context_create(i915);
 		__gem_context_set_priority(i915, spin->execbuf.rsvd1, 1023);
@@ -259,6 +261,7 @@ static void test_vm(int i915)
 	igt_spin_free(i915, spin);
 	gem_sync(i915, batch.handle);
 	gem_close(i915, batch.handle);
+	put_ahnd(ahnd);
 }
 
 static void test_set_invalid_param(int fd, uint64_t param, uint64_t value)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (15 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-05 21:44   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: " Zbigniew Kempczyński
                   ` (36 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
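
The spinner helpers simply grow an ahnd parameter which is forwarded
through the spin factory (sketch of the helper body; a zero handle makes
the spinner fall back to relocations):

	struct igt_spin_factory opts = {
		.ahnd = ahnd,
		.ctx = ctx,
		.engine = flags,
		.flags = IGT_SPIN_NO_PREEMPTION | IGT_SPIN_FENCE_OUT,
	};

	return __igt_spin_factory(fd, &opts);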

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_eio.c | 60 +++++++++++++++++++++++++++++++-------------
 1 file changed, 43 insertions(+), 17 deletions(-)

diff --git a/tests/i915/gem_eio.c b/tests/i915/gem_eio.c
index 76a15274e..d9ff1981a 100644
--- a/tests/i915/gem_eio.c
+++ b/tests/i915/gem_eio.c
@@ -174,10 +174,11 @@ static int __gem_wait(int fd, uint32_t handle, int64_t timeout)
 	return err;
 }
 
-static igt_spin_t * __spin_poll(int fd, const intel_ctx_t *ctx,
-				unsigned long flags)
+static igt_spin_t *__spin_poll(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			       unsigned long flags)
 {
 	struct igt_spin_factory opts = {
+		.ahnd = ahnd,
 		.ctx = ctx,
 		.engine = flags,
 		.flags = IGT_SPIN_NO_PREEMPTION | IGT_SPIN_FENCE_OUT,
@@ -206,10 +207,10 @@ static void __spin_wait(int fd, igt_spin_t *spin)
 	}
 }
 
-static igt_spin_t * spin_sync(int fd, const intel_ctx_t *ctx,
-			      unsigned long flags)
+static igt_spin_t *spin_sync(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			     unsigned long flags)
 {
-	igt_spin_t *spin = __spin_poll(fd, ctx, flags);
+	igt_spin_t *spin = __spin_poll(fd, ahnd, ctx, flags);
 
 	__spin_wait(fd, spin);
 
@@ -346,6 +347,7 @@ static void __test_banned(int fd)
 
 	igt_until_timeout(5) {
 		igt_spin_t *hang;
+		uint64_t ahnd;
 
 		if (__gem_execbuf(fd, &execbuf) == -EIO) {
 			uint32_t ctx = 0;
@@ -366,9 +368,11 @@ static void __test_banned(int fd)
 		}
 
 		/* Trigger a reset, making sure we are detected as guilty */
-		hang = spin_sync(fd, intel_ctx_0(fd), 0);
+		ahnd = get_reloc_ahnd(fd, 0);
+		hang = spin_sync(fd, ahnd, intel_ctx_0(fd), 0);
 		trigger_reset(fd);
 		igt_spin_free(fd, hang);
+		put_ahnd(ahnd);
 
 		count++;
 	}
@@ -428,6 +432,7 @@ static void test_banned(int fd)
 static void test_wait(int fd, unsigned int flags, unsigned int wait)
 {
 	igt_spin_t *hang;
+	uint64_t ahnd;
 
 	fd = reopen_device(fd);
 
@@ -441,12 +446,14 @@ static void test_wait(int fd, unsigned int flags, unsigned int wait)
 	else
 		igt_require(i915_reset_control(fd, true));
 
-	hang = spin_sync(fd, intel_ctx_0(fd), I915_EXEC_DEFAULT);
+	ahnd = get_reloc_ahnd(fd, 0);
+	hang = spin_sync(fd, ahnd, intel_ctx_0(fd), I915_EXEC_DEFAULT);
 
 	igt_debugfs_dump(fd, "i915_engine_info");
 	check_wait(fd, hang->handle, wait, NULL);
 
 	igt_spin_free(fd, hang);
+	put_ahnd(ahnd);
 
 	igt_require(i915_reset_control(fd, true));
 
@@ -490,8 +497,10 @@ static void test_inflight(int fd, unsigned int wait)
 		struct drm_i915_gem_exec_object2 obj[2];
 		struct drm_i915_gem_execbuffer2 execbuf;
 		igt_spin_t *hang;
+		uint64_t ahnd;
 
 		fd = reopen_device(parent_fd);
+		ahnd = get_reloc_ahnd(fd, 0);
 
 		memset(obj, 0, sizeof(obj));
 		obj[0].flags = EXEC_OBJECT_WRITE;
@@ -502,7 +511,7 @@ static void test_inflight(int fd, unsigned int wait)
 		igt_debug("Starting %s on engine '%s'\n", __func__, e->name);
 		igt_require(i915_reset_control(fd, false));
 
-		hang = spin_sync(fd, intel_ctx_0(fd), eb_ring(e));
+		hang = spin_sync(fd, ahnd, intel_ctx_0(fd), eb_ring(e));
 		obj[0].handle = hang->handle;
 
 		memset(&execbuf, 0, sizeof(execbuf));
@@ -524,6 +533,7 @@ static void test_inflight(int fd, unsigned int wait)
 			close(fence[n]);
 		}
 		igt_spin_free(fd, hang);
+		put_ahnd(ahnd);
 
 		igt_assert(i915_reset_control(fd, true));
 		trigger_reset(fd);
@@ -541,6 +551,7 @@ static void test_inflight_suspend(int fd)
 	int fence[64]; /* mostly conservative estimate of ring size */
 	igt_spin_t *hang;
 	int max;
+	uint64_t ahnd;
 
 	/* Do a suspend first so that we don't skip inside the test */
 	igt_system_suspend_autoresume(SUSPEND_STATE_MEM, SUSPEND_TEST_DEVICES);
@@ -553,13 +564,14 @@ static void test_inflight_suspend(int fd)
 	fd = reopen_device(fd);
 	igt_require(gem_has_exec_fence(fd));
 	igt_require(i915_reset_control(fd, false));
+	ahnd = get_reloc_ahnd(fd, 0);
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].flags = EXEC_OBJECT_WRITE;
 	obj[1].handle = gem_create(fd, 4096);
 	gem_write(fd, obj[1].handle, 0, &bbe, sizeof(bbe));
 
-	hang = spin_sync(fd, intel_ctx_0(fd), 0);
+	hang = spin_sync(fd, ahnd, intel_ctx_0(fd), 0);
 	obj[0].handle = hang->handle;
 
 	memset(&execbuf, 0, sizeof(execbuf));
@@ -584,6 +596,7 @@ static void test_inflight_suspend(int fd)
 		close(fence[n]);
 	}
 	igt_spin_free(fd, hang);
+	put_ahnd(ahnd);
 
 	igt_assert(i915_reset_control(fd, true));
 	trigger_reset(fd);
@@ -624,6 +637,7 @@ static void test_inflight_contexts(int fd, unsigned int wait)
 		igt_spin_t *hang;
 		const intel_ctx_t *ctx[64];
 		int fence[64];
+		uint64_t ahnd;
 
 		fd = reopen_device(parent_fd);
 
@@ -640,7 +654,8 @@ static void test_inflight_contexts(int fd, unsigned int wait)
 		obj[1].handle = gem_create(fd, 4096);
 		gem_write(fd, obj[1].handle, 0, &bbe, sizeof(bbe));
 
-		hang = spin_sync(fd, intel_ctx_0(fd), eb_ring(e));
+		ahnd = get_reloc_ahnd(fd, 0);
+		hang = spin_sync(fd, ahnd, intel_ctx_0(fd), eb_ring(e));
 		obj[0].handle = hang->handle;
 
 		memset(&execbuf, 0, sizeof(execbuf));
@@ -667,6 +682,7 @@ static void test_inflight_contexts(int fd, unsigned int wait)
 		}
 		igt_spin_free(fd, hang);
 		gem_close(fd, obj[1].handle);
+		put_ahnd(ahnd);
 
 		igt_assert(i915_reset_control(fd, true));
 		trigger_reset(fd);
@@ -685,6 +701,7 @@ static void test_inflight_external(int fd)
 	struct drm_i915_gem_exec_object2 obj;
 	igt_spin_t *hang;
 	uint32_t fence;
+	uint64_t ahnd;
 	IGT_CORK_FENCE(cork);
 
 	fd = reopen_device(fd);
@@ -694,7 +711,8 @@ static void test_inflight_external(int fd)
 	fence = igt_cork_plug(&cork, fd);
 
 	igt_require(i915_reset_control(fd, false));
-	hang = __spin_poll(fd, intel_ctx_0(fd), 0);
+	ahnd = get_reloc_ahnd(fd, 0);
+	hang = __spin_poll(fd, ahnd, intel_ctx_0(fd), 0);
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = gem_create(fd, 4096);
@@ -725,6 +743,7 @@ static void test_inflight_external(int fd)
 	close(fence);
 
 	igt_spin_free(fd, hang);
+	put_ahnd(ahnd);
 	igt_assert(i915_reset_control(fd, true));
 	trigger_reset(fd);
 	close(fd);
@@ -738,11 +757,13 @@ static void test_inflight_internal(int fd, unsigned int wait)
 	int fences[I915_EXEC_RING_MASK + 1];
 	unsigned nfence = 0;
 	igt_spin_t *hang;
+	uint64_t ahnd;
 
 	fd = reopen_device(fd);
 	igt_require(gem_has_exec_fence(fd));
 	igt_require(i915_reset_control(fd, false));
-	hang = spin_sync(fd, intel_ctx_0(fd), 0);
+	ahnd = get_reloc_ahnd(fd, 0);
+	hang = spin_sync(fd, ahnd, intel_ctx_0(fd), 0);
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = hang->handle;
@@ -771,13 +792,14 @@ static void test_inflight_internal(int fd, unsigned int wait)
 		close(fences[nfence]);
 	}
 	igt_spin_free(fd, hang);
+	put_ahnd(ahnd);
 
 	igt_assert(i915_reset_control(fd, true));
 	trigger_reset(fd);
 	close(fd);
 }
 
-static void reset_stress(int fd, const intel_ctx_t *ctx0,
+static void reset_stress(int fd, uint64_t ahnd, const intel_ctx_t *ctx0,
 			 const char *name, unsigned int engine,
 			 unsigned int flags)
 {
@@ -815,7 +837,7 @@ static void reset_stress(int fd, const intel_ctx_t *ctx0,
 		 * Start executing a spin batch with some queued batches
 		 * against a different context after it.
 		 */
-		hang = spin_sync(fd, ctx0, engine);
+		hang = spin_sync(fd, ahnd, ctx0, engine);
 
 		execbuf.rsvd1 = ctx->id;
 		for (i = 0; i < max; i++)
@@ -863,11 +885,13 @@ static void reset_stress(int fd, const intel_ctx_t *ctx0,
 static void test_reset_stress(int fd, unsigned int flags)
 {
 	const intel_ctx_t *ctx0 = context_create_safe(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx0->id);
 
 	for_each_ring(e, fd)
-		reset_stress(fd, ctx0, e->name, eb_ring(e), flags);
+		reset_stress(fd, ahnd, ctx0, e->name, eb_ring(e), flags);
 
 	intel_ctx_destroy(fd, ctx0);
+	put_ahnd(ahnd);
 }
 
 /*
@@ -928,13 +952,15 @@ static void test_kms(int i915, igt_display_t *dpy)
 	test_inflight(i915, 0);
 	if (gem_has_contexts(i915)) {
 		const intel_ctx_t *ctx = context_create_safe(i915);
+		uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
-		reset_stress(i915, ctx,
+		reset_stress(i915, ahnd, ctx,
 			     "default", I915_EXEC_DEFAULT, 0);
-		reset_stress(i915, ctx,
+		reset_stress(i915, ahnd, ctx,
 			     "default", I915_EXEC_DEFAULT, TEST_WEDGE);
 
 		intel_ctx_destroy(i915, ctx);
+		put_ahnd(ahnd);
 	}
 
 	*shared = 1;
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (16 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-06  1:43   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 19/52] tests/gem_exec_big: Require relocation support Zbigniew Kempczyński
                   ` (35 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
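
With softpin each concurrent writer gets a fixed 1 MiB slot for its
batch while all writers target the same pinned scratch offset handed out
by the allocator (abridged from the store_dword() change below; id 0
keeps the relocation path):

	if (id) {
		obj[0].offset = target_offset;		/* shared scratch */
		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE |
				EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
		obj[1].offset = (id + 1) * SZ_1M;	/* per-writer batch */
		obj[1].flags |= EXEC_OBJECT_PINNED |
				EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
	}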

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_async.c | 57 ++++++++++++++++++++++++++++++-------
 1 file changed, 46 insertions(+), 11 deletions(-)

diff --git a/tests/i915/gem_exec_async.c b/tests/i915/gem_exec_async.c
index a3be6b3ee..41f3b752b 100644
--- a/tests/i915/gem_exec_async.c
+++ b/tests/i915/gem_exec_async.c
@@ -27,8 +27,11 @@
 
 IGT_TEST_DESCRIPTION("Check that we can issue concurrent writes across the engines.");
 
-static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
-			uint32_t target, uint32_t offset, uint32_t value)
+#define SZ_1M (1024 * 1024)
+
+static void store_dword(int fd, int id, const intel_ctx_t *ctx,
+			 unsigned ring, uint32_t target, uint64_t target_offset,
+			 uint32_t offset, uint32_t value)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
@@ -50,6 +53,15 @@ static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 	obj[0].flags = EXEC_OBJECT_ASYNC;
 	obj[1].handle = gem_create(fd, 4096);
 
+	if (id) {
+		obj[0].offset = target_offset;
+		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE |
+				EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		obj[1].offset = (id + 1) * SZ_1M;
+		obj[1].flags |= EXEC_OBJECT_PINNED |
+				EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	}
+
 	memset(&reloc, 0, sizeof(reloc));
 	reloc.target_handle = obj[0].handle;
 	reloc.presumed_offset = 0;
@@ -58,13 +70,13 @@ static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 	reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
 	obj[1].relocs_ptr = to_user_pointer(&reloc);
-	obj[1].relocation_count = 1;
+	obj[1].relocation_count = !id ? 1 : 0;
 
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = offset;
-		batch[++i] = 0;
+		batch[++i] = target_offset + offset;
+		batch[++i] = (target_offset + offset) >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = offset;
@@ -89,6 +101,8 @@ static void one(int fd, const intel_ctx_t *ctx,
 	uint32_t scratch = gem_create(fd, 4096);
 	igt_spin_t *spin;
 	uint32_t *result;
+	uint64_t ahnd = get_simple_l2h_ahnd(fd, ctx->id);
+	uint64_t scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 	int i;
 
 	/*
@@ -96,11 +110,16 @@ static void one(int fd, const intel_ctx_t *ctx,
 	 * the scratch for write. Then on the other rings try and
 	 * write into that target. If it blocks we hang the GPU...
 	 */
-	spin = igt_spin_new(fd, .ctx = ctx, .engine = engine,
+	spin = igt_spin_new(fd,
+			    .ahnd = ahnd,
+			    .ctx = ctx,
+			    .engine = engine,
 			    .dependency = scratch);
 
 	i = 0;
 	for_each_ctx_engine(fd, ctx, e) {
+		int id = ahnd ? (i + 1) : 0;
+
 		if (e->flags == engine)
 			continue;
 
@@ -108,10 +127,15 @@ static void one(int fd, const intel_ctx_t *ctx,
 			continue;
 
 		if (flags & FORKED) {
-			igt_fork(child, 1)
-				store_dword(fd, ctx, e->flags, scratch, 4*i, ~i);
+			igt_fork(child, 1) {
+				store_dword(fd, id, ctx, e->flags,
+					    scratch, scratch_offset,
+					    4*i, ~i);
+			}
 		} else {
-			store_dword(fd, ctx, e->flags, scratch, 4*i, ~i);
+			store_dword(fd, id, ctx, e->flags,
+				    scratch, scratch_offset,
+				    4*i, ~i);
 		}
 		i++;
 	}
@@ -124,6 +148,7 @@ static void one(int fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(fd, spin);
 	gem_close(fd, scratch);
+	put_ahnd(ahnd);
 }
 
 static bool has_async_execbuf(int fd)
@@ -162,8 +187,18 @@ igt_main
 	test_each_engine("concurrent-writes", fd, ctx, e)
 		one(fd, ctx, e->flags, 0);
 
-	test_each_engine("forked-writes", fd, ctx, e)
-		one(fd, ctx, e->flags, FORKED);
+	igt_subtest_group {
+		igt_fixture {
+			intel_allocator_multiprocess_start();
+		}
+
+		test_each_engine("forked-writes", fd, ctx, e)
+			one(fd, ctx, e->flags, FORKED);
+
+		igt_fixture {
+			intel_allocator_multiprocess_stop();
+		}
+	}
 
 	igt_fixture {
 		igt_stop_hang_detector();
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 19/52] tests/gem_exec_big: Require relocation support
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (17 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: " Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 20/52] tests/gem_exec_capture: Support gens without relocations Zbigniew Kempczyński
                   ` (34 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

From: Andrzej Turko <andrzej.turko@linux.intel.com>

This test only verifies the correctness of relocations, so it should be
skipped when running on a platform which does not support them.

Signed-off-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_big.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/i915/gem_exec_big.c b/tests/i915/gem_exec_big.c
index 1f8c720b6..9ea49eec1 100644
--- a/tests/i915/gem_exec_big.c
+++ b/tests/i915/gem_exec_big.c
@@ -303,6 +303,7 @@ igt_main
 	igt_fixture {
 		i915 = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(i915);
+		igt_require(gem_has_relocations(i915));
 
 		use_64bit_relocs = intel_gen(intel_get_drm_devid(i915)) >= 8;
 	}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 20/52] tests/gem_exec_capture: Support gens without relocations
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (18 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 19/52] tests/gem_exec_big: Require relocation support Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 21/52] tests/gem_exec_gttfill: Require relocation support Zbigniew Kempczyński
                   ` (33 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev

From: Andrzej Turko <andrzej.turko@linux.intel.com>

With relocations disabled on newer generations, tests must assign
addresses to objects themselves instead of relying on the driver.
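
As a rough sketch of the flow the diff below applies per object (using
only calls visible in the patch: intel_allocator_open()/alloc()/free()/
close(), CANONICAL()/DECANONICAL() and gem_has_relocations(); obj, size
and ALIGNMENT stand in for the test's own variables):

    uint64_t ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
    bool do_relocs = gem_has_relocations(fd);

    obj.offset = intel_allocator_alloc(ahnd, obj.handle, size, ALIGNMENT);
    obj.offset = CANONICAL(obj.offset);  /* execbuf expects canonical addresses */
    obj.flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS |
                 (do_relocs ? 0 : EXEC_OBJECT_PINNED);

    /* ... submit, hang, then compare the captured address ... */
    igt_assert_eq_u64(addr, DECANONICAL(obj.offset));

    intel_allocator_free(ahnd, obj.handle);
    intel_allocator_close(ahnd);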

Signed-off-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 tests/i915/gem_exec_capture.c | 131 ++++++++++++++++++++++++++--------
 1 file changed, 101 insertions(+), 30 deletions(-)

diff --git a/tests/i915/gem_exec_capture.c b/tests/i915/gem_exec_capture.c
index f59cb09da..6e817c46c 100644
--- a/tests/i915/gem_exec_capture.c
+++ b/tests/i915/gem_exec_capture.c
@@ -33,6 +33,8 @@
 
 IGT_TEST_DESCRIPTION("Check that we capture the user specified objects on a hang");
 
+#define ALIGNMENT (1 << 12)
+
 static void check_error_state(int dir, struct drm_i915_gem_exec_object2 *obj)
 {
 	char *error, *str;
@@ -53,7 +55,7 @@ static void check_error_state(int dir, struct drm_i915_gem_exec_object2 *obj)
 		addr = hi;
 		addr <<= 32;
 		addr |= lo;
-		igt_assert_eq_u64(addr, obj->offset);
+		igt_assert_eq_u64(addr, DECANONICAL(obj->offset));
 		found = true;
 	}
 
@@ -61,8 +63,8 @@ static void check_error_state(int dir, struct drm_i915_gem_exec_object2 *obj)
 	igt_assert(found);
 }
 
-static void __capture1(int fd, int dir, const intel_ctx_t *ctx,
-		       unsigned ring, uint32_t target)
+static void __capture1(int fd, int dir, uint64_t ahnd, const intel_ctx_t *ctx,
+		       unsigned ring, uint32_t target, uint64_t target_size)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[4];
@@ -74,32 +76,46 @@ static void __capture1(int fd, int dir, const intel_ctx_t *ctx,
 	struct drm_i915_gem_execbuffer2 execbuf;
 	uint32_t *batch, *seqno;
 	int i;
+	bool do_relocs = gem_has_relocations(fd);
 
 	memset(obj, 0, sizeof(obj));
 	obj[SCRATCH].handle = gem_create(fd, 4096);
+	obj[SCRATCH].flags = EXEC_OBJECT_WRITE;
 	obj[CAPTURE].handle = target;
 	obj[CAPTURE].flags = EXEC_OBJECT_CAPTURE;
 	obj[NOCAPTURE].handle = gem_create(fd, 4096);
 
 	obj[BATCH].handle = gem_create(fd, 4096);
-	obj[BATCH].relocs_ptr = (uintptr_t)reloc;
-	obj[BATCH].relocation_count = ARRAY_SIZE(reloc);
+
+	for (i = 0; i < 4; i++) {
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      i == CAPTURE ? target_size : 4096,
+						      ALIGNMENT);
+		obj[i].offset = CANONICAL(obj[i].offset);
+		obj[i].flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS |
+				(do_relocs ? 0 : EXEC_OBJECT_PINNED);
+	}
 
 	memset(reloc, 0, sizeof(reloc));
 	reloc[0].target_handle = obj[BATCH].handle; /* recurse */
-	reloc[0].presumed_offset = 0;
+	reloc[0].presumed_offset = obj[BATCH].offset;
 	reloc[0].offset = 5*sizeof(uint32_t);
 	reloc[0].delta = 0;
 	reloc[0].read_domains = I915_GEM_DOMAIN_COMMAND;
 	reloc[0].write_domain = 0;
 
 	reloc[1].target_handle = obj[SCRATCH].handle; /* breadcrumb */
-	reloc[1].presumed_offset = 0;
+	reloc[1].presumed_offset = obj[SCRATCH].offset;
 	reloc[1].offset = sizeof(uint32_t);
 	reloc[1].delta = 0;
 	reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[1].write_domain = I915_GEM_DOMAIN_RENDER;
 
+	if (do_relocs) {
+		obj[BATCH].relocs_ptr = (uintptr_t)reloc;
+		obj[BATCH].relocation_count = ARRAY_SIZE(reloc);
+	}
+
 	seqno = gem_mmap__wc(fd, obj[SCRATCH].handle, 0, 4096, PROT_READ);
 	gem_set_domain(fd, obj[SCRATCH].handle,
 			I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
@@ -111,8 +127,8 @@ static void __capture1(int fd, int dir, const intel_ctx_t *ctx,
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[SCRATCH].offset;
+		batch[++i] = obj[SCRATCH].offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = 0;
@@ -128,8 +144,8 @@ static void __capture1(int fd, int dir, const intel_ctx_t *ctx,
 	batch[++i] = MI_BATCH_BUFFER_START; /* not crashed? try again! */
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[BATCH].offset;
+		batch[++i] = obj[BATCH].offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
 		batch[++i] = 0;
@@ -165,6 +181,9 @@ static void __capture1(int fd, int dir, const intel_ctx_t *ctx,
 
 	gem_sync(fd, obj[BATCH].handle);
 
+	intel_allocator_free(ahnd, obj[BATCH].handle);
+	intel_allocator_free(ahnd, obj[NOCAPTURE].handle);
+	intel_allocator_free(ahnd, obj[SCRATCH].handle);
 	gem_close(fd, obj[BATCH].handle);
 	gem_close(fd, obj[NOCAPTURE].handle);
 	gem_close(fd, obj[SCRATCH].handle);
@@ -173,10 +192,16 @@ static void __capture1(int fd, int dir, const intel_ctx_t *ctx,
 static void capture(int fd, int dir, const intel_ctx_t *ctx, unsigned ring)
 {
 	uint32_t handle;
+	uint64_t ahnd;
 
 	handle = gem_create(fd, 4096);
-	__capture1(fd, dir, ctx, ring, handle);
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	__capture1(fd, dir, ahnd, ctx, ring, handle, 4096);
+
 	gem_close(fd, handle);
+	intel_allocator_free(ahnd, handle);
+	intel_allocator_close(ahnd);
 }
 
 static int cmp(const void *A, const void *B)
@@ -195,7 +220,7 @@ static int cmp(const void *A, const void *B)
 static struct offset {
 	uint64_t addr;
 	unsigned long idx;
-} *__captureN(int fd, int dir, unsigned ring,
+} *__captureN(int fd, int dir, uint64_t ahnd, unsigned ring,
 	      unsigned int size, int count,
 	      unsigned int flags)
 #define INCREMENTAL 0x1
@@ -208,18 +233,30 @@ static struct offset {
 	uint32_t *batch, *seqno;
 	struct offset *offsets;
 	int i;
+	bool do_relocs = gem_has_relocations(fd);
 
-	offsets = calloc(count , sizeof(*offsets));
+	offsets = calloc(count, sizeof(*offsets));
 	igt_assert(offsets);
 
 	obj = calloc(count + 2, sizeof(*obj));
 	igt_assert(obj);
 
 	obj[0].handle = gem_create(fd, 4096);
+	obj[0].offset = intel_allocator_alloc(ahnd, obj[0].handle,
+					      4096, ALIGNMENT);
+	obj[0].offset = CANONICAL(obj[0].offset);
+	obj[0].flags = EXEC_OBJECT_WRITE | EXEC_OBJECT_SUPPORTS_48B_ADDRESS |
+		       (do_relocs ? 0 : EXEC_OBJECT_PINNED);
+
 	for (i = 0; i < count; i++) {
 		obj[i + 1].handle = gem_create(fd, size);
-		obj[i + 1].flags =
-			EXEC_OBJECT_CAPTURE | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		obj[i + 1].offset = intel_allocator_alloc(ahnd, obj[i + 1].handle,
+							  size, ALIGNMENT);
+		obj[i + 1].offset = CANONICAL(obj[i + 1].offset);
+		obj[i + 1].flags = EXEC_OBJECT_CAPTURE |
+				   EXEC_OBJECT_SUPPORTS_48B_ADDRESS |
+				   (do_relocs ? 0 : EXEC_OBJECT_PINNED);
+
 		if (flags & INCREMENTAL) {
 			uint32_t *ptr;
 
@@ -232,23 +269,32 @@ static struct offset {
 	}
 
 	obj[count + 1].handle = gem_create(fd, 4096);
-	obj[count + 1].relocs_ptr = (uintptr_t)reloc;
-	obj[count + 1].relocation_count = ARRAY_SIZE(reloc);
+	obj[count + 1].offset = intel_allocator_alloc(ahnd, obj[count + 1].handle,
+						      4096, ALIGNMENT);
+	obj[count + 1].offset = CANONICAL(obj[count + 1].offset);
+	obj[count + 1].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS |
+			       (do_relocs ? 0 : EXEC_OBJECT_PINNED);
 
 	memset(reloc, 0, sizeof(reloc));
 	reloc[0].target_handle = obj[count + 1].handle; /* recurse */
-	reloc[0].presumed_offset = 0;
+	reloc[0].presumed_offset = obj[count + 1].offset;
 	reloc[0].offset = 5*sizeof(uint32_t);
 	reloc[0].delta = 0;
 	reloc[0].read_domains = I915_GEM_DOMAIN_COMMAND;
 	reloc[0].write_domain = 0;
 
 	reloc[1].target_handle = obj[0].handle; /* breadcrumb */
-	reloc[1].presumed_offset = 0;
+	reloc[1].presumed_offset = obj[0].offset;
 	reloc[1].offset = sizeof(uint32_t);
 	reloc[1].delta = 0;
 	reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[1].write_domain = I915_GEM_DOMAIN_RENDER;
+	if (do_relocs) {
+		obj[count + 1].relocs_ptr = (uintptr_t)reloc;
+		obj[count + 1].relocation_count = ARRAY_SIZE(reloc);
+	} else {
+		execbuf.flags = I915_EXEC_NO_RELOC;
+	}
 
 	seqno = gem_mmap__wc(fd, obj[0].handle, 0, 4096, PROT_READ);
 	gem_set_domain(fd, obj[0].handle,
@@ -261,8 +307,8 @@ static struct offset {
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[0].offset;
+		batch[++i] = obj[0].offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = 0;
@@ -278,8 +324,8 @@ static struct offset {
 	batch[++i] = MI_BATCH_BUFFER_START; /* not crashed? try again! */
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[count + 1].offset;
+		batch[++i] = obj[count + 1].offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
 		batch[++i] = 0;
@@ -315,12 +361,16 @@ static struct offset {
 
 	gem_close(fd, obj[count + 1].handle);
 	for (i = 0; i < count; i++) {
-		offsets[i].addr = obj[i + 1].offset;
+		offsets[i].addr = DECANONICAL(obj[i + 1].offset);
 		offsets[i].idx = i;
 		gem_close(fd, obj[i + 1].handle);
+		intel_allocator_free(ahnd, obj[i + 1].handle);
 	}
 	gem_close(fd, obj[0].handle);
 
+	intel_allocator_free(ahnd, obj[0].handle);
+	intel_allocator_free(ahnd, obj[count + 1].handle);
+
 	qsort(offsets, count, sizeof(*offsets), cmp);
 	igt_assert(offsets[0].addr <= offsets[count-1].addr);
 	return offsets;
@@ -414,7 +464,7 @@ ascii85_decode(char *in, uint32_t **out, bool inflate, char **end)
 
 static void many(int fd, int dir, uint64_t size, unsigned int flags)
 {
-	uint64_t ram, gtt;
+	uint64_t ram, gtt, ahnd;
 	unsigned long count, blobs;
 	struct offset *offsets;
 	char *error, *str;
@@ -428,8 +478,9 @@ static void many(int fd, int dir, uint64_t size, unsigned int flags)
 	igt_require(count > 1);
 
 	intel_require_memory(count, size, CHECK_RAM);
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
 
-	offsets = __captureN(fd, dir, 0, size, count, flags);
+	offsets = __captureN(fd, dir, ahnd, 0, size, count, flags);
 
 	error = igt_sysfs_get(dir, "error");
 	igt_sysfs_set(dir, "error", "Begone!");
@@ -496,6 +547,7 @@ static void many(int fd, int dir, uint64_t size, unsigned int flags)
 
 	free(error);
 	free(offsets);
+	intel_allocator_close(ahnd);
 }
 
 static void prioinv(int fd, int dir, const intel_ctx_t *ctx,
@@ -512,10 +564,16 @@ static void prioinv(int fd, int dir, const intel_ctx_t *ctx,
 		.rsvd1 = ctx->id,
 	};
 	int64_t timeout = NSEC_PER_SEC; /* 1s, feeling generous, blame debug */
-	uint64_t ram, gtt, size = 4 << 20;
+	uint64_t ram, gtt, ahnd, size = 4 << 20;
 	unsigned long count;
 	int link[2], dummy;
 
+	intel_allocator_multiprocess_start();
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	obj.offset = intel_allocator_alloc(ahnd, obj.handle, 4096, ALIGNMENT);
+	obj.offset = CANONICAL(obj.offset);
+	obj.flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
 	igt_require(gem_scheduler_enabled(fd));
 	igt_require(igt_params_set(fd, "reset", "%u", -1)); /* engine resets! */
 	igt_require(gem_gpu_reset_type(fd) > 1);
@@ -544,7 +602,13 @@ static void prioinv(int fd, int dir, const intel_ctx_t *ctx,
 		fd = gem_reopen_driver(fd);
 		igt_debug("Submitting large capture [%ld x %dMiB objects]\n",
 			  count, (int)(size >> 20));
-		free(__captureN(fd, dir, ring, size, count, ASYNC));
+
+		/* Reopen the allocator in the new process. */
+		ahnd = intel_allocator_open(fd, child + 1, INTEL_ALLOCATOR_SIMPLE);
+
+		free(__captureN(fd, dir, ahnd, ring, size, count, ASYNC));
+		intel_allocator_close(ahnd);
+
 		write(link[1], &fd, sizeof(fd)); /* wake the parent up */
 		igt_force_gpu_reset(fd);
 		write(link[1], &fd, sizeof(fd)); /* wake the parent up */
@@ -567,19 +631,26 @@ static void prioinv(int fd, int dir, const intel_ctx_t *ctx,
 	close(link[1]);
 
 	gem_quiescent_gpu(fd);
+	intel_allocator_free(ahnd, obj.handle);
+	intel_allocator_close(ahnd);
+	intel_allocator_multiprocess_stop();
 }
 
 static void userptr(int fd, int dir)
 {
 	uint32_t handle;
+	uint64_t ahnd;
 	void *ptr;
 
 	igt_assert(posix_memalign(&ptr, 4096, 4096) == 0);
 	igt_require(__gem_userptr(fd, ptr, 4096, 0, 0, &handle) == 0);
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
 
-	__capture1(fd, dir, intel_ctx_0(fd), 0, handle);
+	__capture1(fd, dir, ahnd, intel_ctx_0(fd), 0, handle, 4096);
 
 	gem_close(fd, handle);
+	intel_allocator_free(ahnd, handle);
+	intel_allocator_close(ahnd);
 	free(ptr);
 }
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 21/52] tests/gem_exec_gttfill: Require relocation support
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (19 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 20/52] tests/gem_exec_capture: Support gens without relocations Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 22/52] tests/gem_exec_store: Support gens without relocations Zbigniew Kempczyński
                   ` (32 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev

From: Andrzej Turko <andrzej.turko@linux.intel.com>

Since this test exercises relocations, which are now disabled on newer
generations, we need to skip it when they are not supported.
To maintain coverage, a slightly modified version of this test, using
softpinning instead of relocations, is added to gem_softpin.

Signed-off-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 tests/i915/gem_exec_gttfill.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/i915/gem_exec_gttfill.c b/tests/i915/gem_exec_gttfill.c
index b8283eb8f..f16428714 100644
--- a/tests/i915/gem_exec_gttfill.c
+++ b/tests/i915/gem_exec_gttfill.c
@@ -216,6 +216,7 @@ igt_main
 	igt_fixture {
 		i915 = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(i915);
+		igt_require(gem_has_relocations(i915));
 		ctx = intel_ctx_create_all_physical(i915);
 		igt_fork_hang_detector(i915);
 	}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 22/52] tests/gem_exec_store: Support gens without relocations
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (20 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 21/52] tests/gem_exec_gttfill: Require relocation support Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 23/52] tests/gem_exec_suspend: Adopt to use allocator Zbigniew Kempczyński
                   ` (31 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev

From: Andrzej Turko <andrzej.turko@linux.intel.com>

With relocations disabled on newer generations, tests must assign
addresses to objects themselves instead of relying on the driver.
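
Condensed from the diff below (sketch only, not a drop-in change): offsets
come from intel_allocator_alloc() in both cases so the batch can embed the
final addresses up front, and the branch decides whether the kernel still
patches them:

    if (gem_has_relocations(fd)) {
            /* Legacy path: the kernel patches the batch via reloc. */
            obj[1].relocs_ptr = to_user_pointer(&reloc);
            obj[1].relocation_count = 1;
    } else {
            /* Softpin path: addresses are final, nothing to relocate. */
            obj[0].flags |= EXEC_OBJECT_PINNED;
            obj[1].flags |= EXEC_OBJECT_PINNED;
            execbuf.flags |= I915_EXEC_NO_RELOC;
    }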

Signed-off-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 tests/i915/gem_exec_store.c | 134 ++++++++++++++++++++++++++++--------
 1 file changed, 106 insertions(+), 28 deletions(-)

diff --git a/tests/i915/gem_exec_store.c b/tests/i915/gem_exec_store.c
index 0798f61d7..38c595e34 100644
--- a/tests/i915/gem_exec_store.c
+++ b/tests/i915/gem_exec_store.c
@@ -37,6 +37,9 @@
 
 #define ENGINE_MASK  (I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK)
 
+/* Without alignment detection we assume the worst-case scenario. */
+#define ALIGNMENT (1 << 21)
+
 static void store_dword(int fd, const intel_ctx_t *ctx,
 			const struct intel_execution_engine2 *e)
 {
@@ -45,6 +48,7 @@ static void store_dword(int fd, const intel_ctx_t *ctx,
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	uint32_t batch[16];
+	uint64_t ahnd;
 	int i;
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -56,43 +60,63 @@ static void store_dword(int fd, const intel_ctx_t *ctx,
 		execbuf.flags |= I915_EXEC_SECURE;
 	execbuf.rsvd1 = ctx->id;
 
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = gem_create(fd, 4096);
+	obj[0].offset = intel_allocator_alloc(ahnd, obj[0].handle,
+					      4096, ALIGNMENT);
+	obj[0].offset = CANONICAL(obj[0].offset);
+	obj[0].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS | EXEC_OBJECT_WRITE;
 	obj[1].handle = gem_create(fd, 4096);
+	obj[1].offset = intel_allocator_alloc(ahnd, obj[1].handle,
+					      4096, ALIGNMENT);
+	obj[1].offset = CANONICAL(obj[1].offset);
+	obj[1].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
 	memset(&reloc, 0, sizeof(reloc));
 	reloc.target_handle = obj[0].handle;
-	reloc.presumed_offset = 0;
+	reloc.presumed_offset = obj[0].offset;
 	reloc.offset = sizeof(uint32_t);
 	reloc.delta = 0;
 	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 	reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-	obj[1].relocs_ptr = to_user_pointer(&reloc);
-	obj[1].relocation_count = 1;
+
+	if (gem_has_relocations(fd)) {
+		obj[1].relocs_ptr = to_user_pointer(&reloc);
+		obj[1].relocation_count = 1;
+	} else {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		execbuf.flags |= I915_EXEC_NO_RELOC;
+	}
 
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[0].offset;
+		batch[++i] = obj[0].offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj[0].offset;
 		reloc.offset += sizeof(uint32_t);
 	} else {
 		batch[i]--;
-		batch[++i] = 0;
+		batch[++i] = obj[0].offset;
 	}
 	batch[++i] = 0xc0ffee;
 	batch[++i] = MI_BATCH_BUFFER_END;
 	gem_write(fd, obj[1].handle, 0, batch, sizeof(batch));
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[1].handle);
+	intel_allocator_free(ahnd, obj[1].handle);
 
 	gem_read(fd, obj[0].handle, 0, batch, sizeof(batch));
 	gem_close(fd, obj[0].handle);
+	intel_allocator_free(ahnd, obj[0].handle);
 	igt_assert_eq(*batch, 0xc0ffee);
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+	intel_allocator_close(ahnd);
 }
 
 #define PAGES 1
@@ -106,7 +130,9 @@ static void store_cachelines(int fd, const intel_ctx_t *ctx,
 	struct drm_i915_gem_execbuffer2 execbuf;
 #define NCACHELINES (4096/64)
 	uint32_t *batch;
+	uint64_t ahnd, reloc_value;
 	int i;
+	bool do_relocs = gem_has_relocations(fd);
 
 	reloc = calloc(NCACHELINES, sizeof(*reloc));
 	igt_assert(reloc);
@@ -119,12 +145,25 @@ static void store_cachelines(int fd, const intel_ctx_t *ctx,
 		execbuf.flags |= I915_EXEC_SECURE;
 	execbuf.rsvd1 = ctx->id;
 
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
 	obj = calloc(execbuf.buffer_count, sizeof(*obj));
 	igt_assert(obj);
-	for (i = 0; i < execbuf.buffer_count; i++)
+	for (i = 0; i < execbuf.buffer_count; i++) {
 		obj[i].handle = gem_create(fd, 4096);
-	obj[i-1].relocs_ptr = to_user_pointer(reloc);
-	obj[i-1].relocation_count = NCACHELINES;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      4096, ALIGNMENT);
+		obj[i].offset = CANONICAL(obj[i].offset);
+		obj[i].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS |
+			       (do_relocs ? 0 : EXEC_OBJECT_PINNED);
+		if (i + 1 < execbuf.buffer_count)
+			obj[i].flags |= EXEC_OBJECT_WRITE;
+	}
+	if (do_relocs) {
+		obj[i-1].relocs_ptr = to_user_pointer(reloc);
+		obj[i-1].relocation_count = NCACHELINES;
+	} else {
+		execbuf.flags |= I915_EXEC_NO_RELOC;
+	}
 	execbuf.buffers_ptr = to_user_pointer(obj);
 
 	batch = gem_mmap__cpu(fd, obj[i-1].handle, 0, 4096, PROT_WRITE);
@@ -132,23 +171,24 @@ static void store_cachelines(int fd, const intel_ctx_t *ctx,
 	i = 0;
 	for (unsigned n = 0; n < NCACHELINES; n++) {
 		reloc[n].target_handle = obj[n % (execbuf.buffer_count-1)].handle;
-		reloc[n].presumed_offset = -1;
+		reloc[n].presumed_offset = obj[n % (execbuf.buffer_count-1)].offset;
 		reloc[n].offset = (i + 1)*sizeof(uint32_t);
 		reloc[n].delta = 4 * (n * 16 + n % 16);
 		reloc[n].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 		reloc[n].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+		reloc_value = CANONICAL(reloc[n].presumed_offset + reloc[n].delta);
 
 		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 		if (gen >= 8) {
-			batch[++i] = 0;
-			batch[++i] = 0;
+			batch[++i] = reloc_value;
+			batch[++i] = reloc_value >> 32;
 		} else if (gen >= 4) {
 			batch[++i] = 0;
-			batch[++i] = 0;
+			batch[++i] = reloc_value;
 			reloc[n].offset += sizeof(uint32_t);
 		} else {
 			batch[i]--;
-			batch[++i] = 0;
+			batch[++i] = reloc_value;
 		}
 		batch[++i] = n | ~n << 16;
 		i++;
@@ -168,11 +208,14 @@ static void store_cachelines(int fd, const intel_ctx_t *ctx,
 	}
 	free(reloc);
 
-	for (unsigned n = 0; n < execbuf.buffer_count; n++)
+	for (unsigned n = 0; n < execbuf.buffer_count; n++) {
 		gem_close(fd, obj[n].handle);
+		intel_allocator_free(ahnd, obj[n].handle);
+	}
 	free(obj);
 
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+	intel_allocator_close(ahnd);
 }
 
 static void store_all(int fd, const intel_ctx_t *ctx)
@@ -184,10 +227,11 @@ static void store_all(int fd, const intel_ctx_t *ctx)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	unsigned *engines, *permuted;
 	uint32_t batch[16];
-	uint64_t offset;
+	uint64_t offset, ahnd, reloc_value;
 	unsigned nengine;
-	int value;
+	int value, address;
 	int i, j;
+	bool do_relocs = gem_has_relocations(fd);
 
 	nengine = 0;
 	for_each_ctx_engine(fd, ctx, engine) {
@@ -213,24 +257,41 @@ static void store_all(int fd, const intel_ctx_t *ctx)
 		execbuf.flags |= I915_EXEC_SECURE;
 	execbuf.rsvd1 = ctx->id;
 
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = gem_create(fd, nengine*sizeof(uint32_t));
+	obj[0].offset = intel_allocator_alloc(ahnd, obj[0].handle,
+					      nengine*sizeof(uint32_t), ALIGNMENT);
+	obj[0].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS | EXEC_OBJECT_WRITE;
+	obj[0].offset = CANONICAL(obj[0].offset);
 	obj[1].handle = gem_create(fd, 2*nengine*sizeof(batch));
-	obj[1].relocation_count = 1;
+	obj[1].offset = intel_allocator_alloc(ahnd, obj[1].handle,
+					      nengine*sizeof(uint32_t), ALIGNMENT);
+	obj[1].offset = CANONICAL(obj[1].offset);
+	obj[1].flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	if (do_relocs) {
+		obj[1].relocation_count = 1;
+	} else {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		execbuf.flags |= I915_EXEC_NO_RELOC;
+	}
 
 	offset = sizeof(uint32_t);
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = 0;
+		batch[address = ++i] = 0;
 		batch[++i] = 0;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[address = ++i] = 0;
 		offset += sizeof(uint32_t);
 	} else {
 		batch[i]--;
-		batch[++i] = 0;
+		batch[address = ++i] = 0;
 	}
 	batch[value = ++i] = 0xc0ffee;
 	batch[++i] = MI_BATCH_BUFFER_END;
@@ -246,12 +307,17 @@ static void store_all(int fd, const intel_ctx_t *ctx)
 
 		j = 2*nengine;
 		reloc[j].target_handle = obj[0].handle;
-		reloc[j].presumed_offset = ~0;
+		reloc[j].presumed_offset = obj[0].offset;
 		reloc[j].offset = j*sizeof(batch) + offset;
 		reloc[j].delta = nengine*sizeof(uint32_t);
 		reloc[j].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 		reloc[j].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-		obj[1].relocs_ptr = to_user_pointer(&reloc[j]);
+		reloc_value = CANONICAL(obj[0].offset + nengine*sizeof(uint32_t));
+		batch[address] = reloc_value;
+		if (gen >= 8)
+			batch[address + 1] = reloc_value >> 32;
+		if (do_relocs)
+			obj[1].relocs_ptr = to_user_pointer(&reloc[j]);
 
 		batch[value] = 0xdeadbeef;
 		gem_write(fd, obj[1].handle, j*sizeof(batch),
@@ -261,12 +327,17 @@ static void store_all(int fd, const intel_ctx_t *ctx)
 
 		j = 2*nengine + 1;
 		reloc[j].target_handle = obj[0].handle;
-		reloc[j].presumed_offset = ~0;
+		reloc[j].presumed_offset = obj[0].offset;
 		reloc[j].offset = j*sizeof(batch) + offset;
 		reloc[j].delta = nengine*sizeof(uint32_t);
 		reloc[j].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 		reloc[j].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-		obj[1].relocs_ptr = to_user_pointer(&reloc[j]);
+		reloc_value = CANONICAL(obj[0].offset + nengine*sizeof(uint32_t));
+		batch[address] = reloc_value;
+		if (gen >= 8)
+			batch[address + 1] = reloc_value >> 32;
+		if (do_relocs)
+			obj[1].relocs_ptr = to_user_pointer(&reloc[j]);
 
 		batch[value] = nengine;
 		gem_write(fd, obj[1].handle, j*sizeof(batch),
@@ -279,30 +350,37 @@ static void store_all(int fd, const intel_ctx_t *ctx)
 	gem_sync(fd, obj[1].handle);
 
 	for (i = 0; i < nengine; i++) {
-		obj[1].relocs_ptr = to_user_pointer(&reloc[2*i]);
 		execbuf.batch_start_offset = 2*i*sizeof(batch);
 		memcpy(permuted, engines, nengine*sizeof(engines[0]));
 		igt_permute_array(permuted, nengine, igt_exchange_int);
+		if (do_relocs)
+			obj[1].relocs_ptr = to_user_pointer(&reloc[2*i]);
+
 		for (j = 0; j < nengine; j++) {
 			execbuf.flags &= ~ENGINE_MASK;
 			execbuf.flags |= permuted[j];
 			gem_execbuf(fd, &execbuf);
 		}
-		obj[1].relocs_ptr = to_user_pointer(&reloc[2*i+1]);
 		execbuf.batch_start_offset = (2*i+1)*sizeof(batch);
 		execbuf.flags &= ~ENGINE_MASK;
 		execbuf.flags |= engines[i];
+		if (do_relocs)
+			obj[1].relocs_ptr = to_user_pointer(&reloc[2*i+1]);
+
 		gem_execbuf(fd, &execbuf);
 	}
 	gem_close(fd, obj[1].handle);
+	intel_allocator_free(ahnd, obj[1].handle);
 
 	gem_read(fd, obj[0].handle, 0, engines, nengine*sizeof(engines[0]));
 	gem_close(fd, obj[0].handle);
+	intel_allocator_free(ahnd, obj[0].handle);
 
 	for (i = 0; i < nengine; i++)
 		igt_assert_eq_u32(engines[i], i);
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 
+	intel_allocator_close(ahnd);
 	free(permuted);
 	free(engines);
 	free(reloc);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 23/52] tests/gem_exec_suspend: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (21 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 22/52] tests/gem_exec_store: Support gens without relocations Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-06  2:15   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 24/52] tests/gem_exec_parallel: Adopt to use allocator Zbigniew Kempczyński
                   ` (30 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
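
A minimal sketch of the idea (assuming get_reloc_ahnd(), get_offset() and
put_ahnd() from the allocator library): a zero handle keeps the existing
relocation setup untouched, otherwise the objects are pinned at offsets
handed out by the allocator:

    uint64_t ahnd = get_reloc_ahnd(fd, 0);

    if (!ahnd) {
            /* relocations supported: keep the reloc entry as before */
    } else {
            obj[0].offset = get_offset(ahnd, obj[0].handle, 4096, 0);
            obj[0].flags |= EXEC_OBJECT_PINNED;
    }
    /* ... run the suspend/resume cycle ... */

    put_ahnd(ahnd);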

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_suspend.c | 44 +++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/tests/i915/gem_exec_suspend.c b/tests/i915/gem_exec_suspend.c
index 0ef26ce11..dbe0c8a71 100644
--- a/tests/i915/gem_exec_suspend.c
+++ b/tests/i915/gem_exec_suspend.c
@@ -83,6 +83,7 @@ static void run_test(int fd, const intel_ctx_t *ctx,
 	unsigned engines[I915_EXEC_RING_MASK + 1];
 	unsigned nengine;
 	igt_spin_t *spin = NULL;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
 
 	nengine = 0;
 	if (engine == ALL_ENGINES) {
@@ -120,27 +121,39 @@ static void run_test(int fd, const intel_ctx_t *ctx,
 	igt_require(__gem_execbuf(fd, &execbuf) == 0);
 	gem_close(fd, obj[1].handle);
 
-	memset(&reloc, 0, sizeof(reloc));
-	reloc.target_handle = obj[0].handle;
-	reloc.presumed_offset = obj[0].offset;
-	reloc.offset = sizeof(uint32_t);
-	if (gen >= 4 && gen < 8)
-		reloc.offset += sizeof(uint32_t);
-	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-	reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-
-	obj[1].relocs_ptr = to_user_pointer(&reloc);
-	obj[1].relocation_count = 1;
+	if (!ahnd) {
+		memset(&reloc, 0, sizeof(reloc));
+		reloc.target_handle = obj[0].handle;
+		reloc.presumed_offset = obj[0].offset;
+		reloc.offset = sizeof(uint32_t);
+		if (gen >= 4 && gen < 8)
+			reloc.offset += sizeof(uint32_t);
+		reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+		reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+
+		obj[1].relocs_ptr = to_user_pointer(&reloc);
+		obj[1].relocation_count = 1;
+	} else {
+		/* ignore first execbuf offset */
+		obj[0].offset = get_offset(ahnd, obj[0].handle, 4096, 0);
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	for (int i = 0; i < 1024; i++) {
 		uint64_t offset;
 		uint32_t buf[16];
 		int b;
 
-		obj[1].handle = gem_create(fd, 4096);
-
 		reloc.delta = i * sizeof(uint32_t);
-		offset = reloc.presumed_offset + reloc.delta;
+
+		obj[1].handle = gem_create(fd, 4096);
+		if (ahnd) {
+			obj[1].offset = get_offset(ahnd, obj[1].handle, 4096, 0);
+			obj[1].flags |= EXEC_OBJECT_PINNED;
+			offset = obj[0].offset + reloc.delta;
+		} else {
+			offset = reloc.presumed_offset + reloc.delta;
+		}
 
 		b = 0;
 		buf[b] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
@@ -165,7 +178,7 @@ static void run_test(int fd, const intel_ctx_t *ctx,
 	}
 
 	if (flags & HANG)
-		spin = igt_spin_new(fd, .engine = engine);
+		spin = igt_spin_new(fd, .ahnd = ahnd, .engine = engine);
 
 	switch (mode(flags)) {
 	case NOSLEEP:
@@ -201,6 +214,7 @@ static void run_test(int fd, const intel_ctx_t *ctx,
 
 	check_bo(fd, obj[0].handle);
 	gem_close(fd, obj[0].handle);
+	put_ahnd(ahnd);
 
 	gem_quiescent_gpu(fd);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 24/52] tests/gem_exec_parallel: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (22 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 23/52] tests/gem_exec_suspend: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-08-06  4:39   ` Dixit, Ashutosh
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 25/52] tests/gem_exec_params: Support gens without relocations Zbigniew Kempczyński
                   ` (29 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_parallel.c | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/tests/i915/gem_exec_parallel.c b/tests/i915/gem_exec_parallel.c
index 5920ac730..36bf5f742 100644
--- a/tests/i915/gem_exec_parallel.c
+++ b/tests/i915/gem_exec_parallel.c
@@ -49,6 +49,7 @@ static inline uint32_t hash32(uint32_t val)
 #define USERPTR 0x4
 
 #define NUMOBJ 16
+#define NUMTHREADS 1024
 
 struct thread {
 	pthread_t thread;
@@ -56,11 +57,13 @@ struct thread {
 	pthread_cond_t *cond;
 	unsigned flags;
 	uint32_t *scratch;
+	uint64_t *offsets;
 	unsigned id;
 	const intel_ctx_t *ctx;
 	unsigned engine;
 	uint32_t used;
 	int fd, gen, *go;
+	uint64_t ahnd;
 };
 
 static void *thread(void *data)
@@ -70,6 +73,7 @@ static void *thread(void *data)
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	const intel_ctx_t *tmp_ctx = NULL;
+	uint64_t offset;
 	uint32_t batch[16];
 	uint16_t used;
 	int fd, i;
@@ -112,7 +116,7 @@ static void *thread(void *data)
 	reloc.delta = 4*t->id;
 	obj[1].handle = gem_create(fd, 4096);
 	obj[1].relocs_ptr = to_user_pointer(&reloc);
-	obj[1].relocation_count = 1;
+	obj[1].relocation_count = !t->ahnd ? 1 : 0;
 	gem_write(fd, obj[1].handle, 0, batch, sizeof(batch));
 
 	memset(&execbuf, 0, sizeof(execbuf));
@@ -140,6 +144,18 @@ static void *thread(void *data)
 		if (t->flags & FDS)
 			obj[0].handle = gem_open(fd, obj[0].handle);
 
+		if (t->ahnd) {
+			offset = t->offsets[x];
+			i = 0;
+			batch[++i] = offset + 4*t->id;
+			batch[++i] = offset >> 32;
+			obj[0].offset = offset;
+			obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+			obj[1].offset = get_offset(t->ahnd, obj[1].handle, 4096, 0);
+			obj[1].flags |= EXEC_OBJECT_PINNED;
+			gem_write(fd, obj[1].handle, 0, batch, sizeof(batch));
+		}
+
 		gem_execbuf(fd, &execbuf);
 
 		if (t->flags & FDS)
@@ -158,7 +174,7 @@ static void *thread(void *data)
 
 static void check_bo(int fd, uint32_t *data, uint32_t handle, int pass, struct thread *threads)
 {
-	uint32_t x = hash32(handle * pass) % 1024;
+	uint32_t x = hash32(handle * pass) % NUMTHREADS;
 	uint32_t result;
 
 	if (!(threads[x].used & (1 << pass)))
@@ -213,6 +229,7 @@ static void all(int fd, const intel_ctx_t *ctx,
 	void *arg[NUMOBJ];
 	int go;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), offsets[NUMOBJ];
 
 	if (flags & CONTEXTS)
 		gem_require_contexts(fd);
@@ -238,9 +255,11 @@ static void all(int fd, const intel_ctx_t *ctx,
 		scratch[i] = handle[i] = handle_create(fd, flags, &arg[i]);
 		if (flags & FDS)
 			scratch[i] = gem_flink(fd, handle[i]);
+		offsets[i] = get_offset(ahnd, scratch[i], 4096, 0);
+
 	}
 
-	threads = calloc(1024, sizeof(struct thread));
+	threads = calloc(NUMTHREADS, sizeof(struct thread));
 	igt_assert(threads);
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -248,7 +267,7 @@ static void all(int fd, const intel_ctx_t *ctx,
 	pthread_cond_init(&cond, 0);
 	go = 0;
 
-	for (i = 0; i < 1024; i++) {
+	for (i = 0; i < NUMTHREADS; i++) {
 		threads[i].id = i;
 		threads[i].fd = fd;
 		threads[i].gen = gen;
@@ -256,19 +275,21 @@ static void all(int fd, const intel_ctx_t *ctx,
 		threads[i].engine = engines[i % nengine];
 		threads[i].flags = flags;
 		threads[i].scratch = scratch;
+		threads[i].offsets = ahnd ? offsets : NULL;
 		threads[i].mutex = &mutex;
 		threads[i].cond = &cond;
 		threads[i].go = &go;
+		threads[i].ahnd = ahnd;
 
 		pthread_create(&threads[i].thread, 0, thread, &threads[i]);
 	}
 
 	pthread_mutex_lock(&mutex);
-	go = 1024;
+	go = NUMTHREADS;
 	pthread_cond_broadcast(&cond);
 	pthread_mutex_unlock(&mutex);
 
-	for (i = 0; i < 1024; i++)
+	for (i = 0; i < NUMTHREADS; i++)
 		pthread_join(threads[i].thread, NULL);
 
 	for (i = 0; i < NUMOBJ; i++) {
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 25/52] tests/gem_exec_params: Support gens without relocations
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (23 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 24/52] tests/gem_exec_parallel: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 19:59 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 26/52] tests/gem_mmap: Add allocator support Zbigniew Kempczyński
                   ` (28 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 19:59 UTC (permalink / raw)
  To: igt-dev; +Cc: Sai Gowtham, Petri Latvala

From: Sai Gowtham <sai.gowtham.ch@intel.com>

When relocations are not available, tests must assign addresses to
objects themselves instead of relying on the driver. We use the allocator
for that purpose.

Signed-off-by: Sai Gowtham <sai.gowtham.ch@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_exec_params.c | 74 ++++++++++++++++++++++++++----------
 1 file changed, 53 insertions(+), 21 deletions(-)

diff --git a/tests/i915/gem_exec_params.c b/tests/i915/gem_exec_params.c
index 729d38a43..ba79791a1 100644
--- a/tests/i915/gem_exec_params.c
+++ b/tests/i915/gem_exec_params.c
@@ -45,6 +45,8 @@
 #include "igt_device.h"
 #include "sw_sync.h"
 
+#define ALIGNMENT (1 << 22)
+
 static bool has_exec_batch_first(int fd)
 {
 	int val = -1;
@@ -74,24 +76,45 @@ static void test_batch_first(int fd)
 	struct drm_i915_gem_exec_object2 obj[3];
 	struct drm_i915_gem_relocation_entry reloc[2];
 	uint32_t *map, value;
+	uint64_t ahnd;
+	bool do_relocs = !gem_uses_ppgtt(fd);
 	int i;
 
 	igt_require(gem_can_store_dword(fd, 0));
 	igt_require(has_exec_batch_first(fd));
 
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
 	memset(obj, 0, sizeof(obj));
 	memset(reloc, 0, sizeof(reloc));
 
 	obj[0].handle = gem_create(fd, 4096);
+	obj[0].offset = intel_allocator_alloc(ahnd, obj[0].handle,
+						4096, ALIGNMENT);
+	obj[0].offset = CANONICAL(obj[0].offset);
+	obj[0].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 	obj[1].handle = gem_create(fd, 4096);
+	obj[1].offset = intel_allocator_alloc(ahnd, obj[1].handle,
+						4096, ALIGNMENT);
+	obj[1].offset = CANONICAL(obj[1].offset);
+	obj[1].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 	obj[2].handle = gem_create(fd, 4096);
-
-	reloc[0].target_handle = obj[1].handle;
-	reloc[0].offset = sizeof(uint32_t);
-	reloc[0].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-	reloc[0].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-	obj[0].relocs_ptr = to_user_pointer(&reloc[0]);
-	obj[0].relocation_count = 1;
+	obj[2].offset = intel_allocator_alloc(ahnd, obj[2].handle,
+						4096, ALIGNMENT);
+	obj[2].offset = CANONICAL(obj[2].offset);
+	obj[2].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	if (do_relocs) {
+		reloc[0].target_handle = obj[1].handle;
+		reloc[0].offset = sizeof(uint32_t);
+		reloc[0].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+		reloc[0].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+		obj[0].relocs_ptr = to_user_pointer(&reloc[0]);
+		obj[0].relocation_count = 1;
+	} else {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+	}
 
 	i = 0;
 	map = gem_mmap__cpu(fd, obj[0].handle, 0, 4096, PROT_WRITE);
@@ -99,26 +122,31 @@ static void test_batch_first(int fd)
 			I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
 	map[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		map[++i] = 0;
-		map[++i] = 0;
+		map[++i] = obj[1].offset;
+		map[++i] = obj[1].offset >> 32;
 	} else if (gen >= 4) {
 		map[++i] = 0;
-		map[++i] = 0;
+		map[++i] = obj[1].offset;
 		reloc[0].offset += sizeof(uint32_t);
 	} else {
 		map[i]--;
-		map[++i] = 0;
+		map[++i] = obj[1].offset;
 	}
 	map[++i] = 1;
 	map[++i] = MI_BATCH_BUFFER_END;
 	munmap(map, 4096);
 
-	reloc[1].target_handle = obj[1].handle;
-	reloc[1].offset = sizeof(uint32_t);
-	reloc[1].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-	reloc[1].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-	obj[2].relocs_ptr = to_user_pointer(&reloc[1]);
-	obj[2].relocation_count = 1;
+	if (do_relocs) {
+		reloc[1].target_handle = obj[1].handle;
+		reloc[1].offset = sizeof(uint32_t);
+		reloc[1].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+		reloc[1].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+		obj[2].relocs_ptr = to_user_pointer(&reloc[1]);
+		obj[2].relocation_count = 1;
+	} else {
+		obj[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	i = 0;
 	map = gem_mmap__cpu(fd, obj[2].handle, 0, 4096, PROT_WRITE);
@@ -126,15 +154,15 @@ static void test_batch_first(int fd)
 			I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
 	map[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		map[++i] = 0;
-		map[++i] = 0;
+		map[++i] = obj[1].offset;
+		map[++i] = obj[1].offset >> 32;
 	} else if (gen >= 4) {
 		map[++i] = 0;
-		map[++i] = 0;
+		map[++i] = obj[1].offset;
 		reloc[1].offset += sizeof(uint32_t);
 	} else {
 		map[i]--;
-		map[++i] = 0;
+		map[++i] = obj[1].offset;
 	}
 	map[++i] = 2;
 	map[++i] = MI_BATCH_BUFFER_END;
@@ -158,8 +186,12 @@ static void test_batch_first(int fd)
 	igt_assert_eq_u32(value, 1);
 
 	gem_close(fd, obj[2].handle);
+	intel_allocator_free(ahnd, obj[2].handle);
 	gem_close(fd, obj[1].handle);
+	intel_allocator_free(ahnd, obj[1].handle);
 	gem_close(fd, obj[0].handle);
+	intel_allocator_free(ahnd, obj[0].handle);
+	intel_allocator_close(ahnd);
 }
 
 static int has_secure_batches(const int fd)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 26/52] tests/gem_mmap: Add allocator support
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (24 preceding siblings ...)
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 25/52] tests/gem_exec_params: Support gens without relocations Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 27/52] tests/gem_mmap_gtt: " Zbigniew Kempczyński
                   ` (27 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Sai Gowtham, Petri Latvala

From: Sai Gowtham <sai.gowtham.ch@intel.com>

Signed-off-by: Sai Gowtham <sai.gowtham.ch@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_mmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_mmap.c b/tests/i915/gem_mmap.c
index a77c0ad60..61c9c5c19 100644
--- a/tests/i915/gem_mmap.c
+++ b/tests/i915/gem_mmap.c
@@ -123,8 +123,9 @@ test_pf_nonblock(int i915)
 {
 	igt_spin_t *spin;
 	uint32_t *ptr;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
-	spin = igt_spin_new(i915);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 
 	igt_set_timeout(1, "initial pagefaulting did not complete within 1s");
 
@@ -135,6 +136,7 @@ test_pf_nonblock(int i915)
 	igt_reset_timeout();
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static int mmap_ioctl(int i915, struct drm_i915_gem_mmap *arg)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 27/52] tests/gem_mmap_gtt: Add allocator support
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (25 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 26/52] tests/gem_mmap: Add allocator support Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 28/52] tests/gem_mmap_offset: " Zbigniew Kempczyński
                   ` (26 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Ch Sai Gowtham, Petri Latvala

From: Ch Sai Gowtham <sai.gowtham.ch@intel.com>

Signed-off-by: Ch Sai Gowtham <sai.gowtham.ch@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_mmap_gtt.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/tests/i915/gem_mmap_gtt.c b/tests/i915/gem_mmap_gtt.c
index 60282699e..92bbb5d2a 100644
--- a/tests/i915/gem_mmap_gtt.c
+++ b/tests/i915/gem_mmap_gtt.c
@@ -335,10 +335,12 @@ test_pf_nonblock(int i915)
 {
 	igt_spin_t *spin;
 	uint32_t *ptr;
+	uint64_t ahnd;
 
 	igt_require(mmap_gtt_version(i915) >= 3);
 
-	spin = igt_spin_new(i915);
+	ahnd = get_reloc_ahnd(i915, 0);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 
 	igt_set_timeout(1, "initial pagefaulting did not complete within 1s");
 
@@ -349,6 +351,7 @@ test_pf_nonblock(int i915)
 	igt_reset_timeout();
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void
@@ -741,11 +744,13 @@ test_hang_busy(int i915)
 	igt_spin_t *spin;
 	igt_hang_t hang;
 	uint32_t handle;
+	uint64_t ahnd;
 
 	hang = igt_allow_hang(i915, ctx->id, 0);
 	igt_require(igt_params_set(i915, "reset", "1")); /* global */
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	ahnd = get_reloc_ahnd(i915, ctx->id);
+	spin = igt_spin_new(i915, .ctx = ctx, .ahnd = ahnd,
 			    .flags = IGT_SPIN_POLL_RUN |
 				     IGT_SPIN_FENCE_OUT |
 				     IGT_SPIN_NO_PREEMPTION);
@@ -788,6 +793,7 @@ test_hang_busy(int i915)
 	munmap(ptr, 4096);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 	igt_disallow_hang(i915, hang);
 	intel_ctx_destroy(i915, ctx);
 }
@@ -800,11 +806,13 @@ test_hang_user(int i915)
 	igt_spin_t *spin;
 	igt_hang_t hang;
 	uint32_t handle;
+	uint64_t ahnd;
 
 	hang = igt_allow_hang(i915, ctx->id, 0);
 	igt_require(igt_params_set(i915, "reset", "1")); /* global */
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	ahnd = get_reloc_ahnd(i915, ctx->id);
+	spin = igt_spin_new(i915, .ctx = ctx, .ahnd = ahnd,
 			    .flags = IGT_SPIN_POLL_RUN |
 				     IGT_SPIN_FENCE_OUT |
 				     IGT_SPIN_NO_PREEMPTION);
@@ -843,6 +851,7 @@ test_hang_user(int i915)
 	munmap(ptr, 4096);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 	igt_disallow_hang(i915, hang);
 	intel_ctx_destroy(i915, ctx);
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 28/52] tests/gem_mmap_offset: Add allocator support
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (26 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 27/52] tests/gem_mmap_gtt: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 29/52] tests/gem_mmap_wc: Adopt to use allocator Zbigniew Kempczyński
                   ` (25 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Sai Gowtham, Petri Latvala

From: Sai Gowtham <sai.gowtham.ch@intel.com>

Signed-off-by: Sai Gowtham <sai.gowtham.ch@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_mmap_offset.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index f1ba67b7c..8148f0a2d 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -248,7 +248,8 @@ static void isolation(int i915)
 
 static void pf_nonblock(int i915)
 {
-	igt_spin_t *spin = igt_spin_new(i915);
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
+	igt_spin_t *spin = igt_spin_new(i915, .ahnd = ahnd);
 
 	for_each_mmap_offset_type(i915, t) {
 		uint32_t *ptr;
@@ -268,6 +269,7 @@ static void pf_nonblock(int i915)
 	}
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void *memchr_inv(const void *s, int c, size_t n)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 29/52] tests/gem_mmap_wc: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (27 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 28/52] tests/gem_mmap_offset: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-08-06  4:51   ` Dixit, Ashutosh
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 30/52] tests/gem_request_retire: Add allocator support Zbigniew Kempczyński
                   ` (24 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
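
The change here boils down to the spinner pattern used across the series
(sketch only; it relies on the .ahnd support added to igt_spin_new() in
the first patch of the series):

    uint64_t ahnd = get_reloc_ahnd(i915, 0);
    igt_spin_t *spin = igt_spin_new(i915, .ahnd = ahnd);

    /* ... fault in the mapping while the spinner keeps the GPU busy ... */

    igt_spin_free(i915, spin);
    put_ahnd(ahnd);  /* drop the allocator handle once the spinner is gone */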

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_mmap_wc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_mmap_wc.c b/tests/i915/gem_mmap_wc.c
index abb89b8eb..6dc7bae49 100644
--- a/tests/i915/gem_mmap_wc.c
+++ b/tests/i915/gem_mmap_wc.c
@@ -459,8 +459,9 @@ test_pf_nonblock(int i915)
 {
 	igt_spin_t *spin;
 	uint32_t *ptr;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
-	spin = igt_spin_new(i915);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 
 	igt_set_timeout(1, "initial pagefaulting did not complete within 1s");
 
@@ -471,6 +472,7 @@ test_pf_nonblock(int i915)
 	igt_reset_timeout();
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static int mmap_ioctl(int i915, struct drm_i915_gem_mmap *arg)
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 30/52] tests/gem_request_retire: Add allocator support
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (28 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 29/52] tests/gem_mmap_wc: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 31/52] tests/gem_ringfill: Adopt to use allocator Zbigniew Kempczyński
                   ` (23 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Sai Gowtham, Petri Latvala

From: Sai Gowtham <sai.gowtham.ch@intel.com>

Signed-off-by: Sai Gowtham <sai.gowtham.ch@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_request_retire.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tests/i915/gem_request_retire.c b/tests/i915/gem_request_retire.c
index 3df54f2a5..da9d405ed 100644
--- a/tests/i915/gem_request_retire.c
+++ b/tests/i915/gem_request_retire.c
@@ -63,21 +63,29 @@ test_retire_vma_not_inactive(int fd)
 {
 	struct intel_execution_engine2 *e;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd, ahndN;
 	igt_spin_t *bg = NULL;
 
 	ctx = intel_ctx_create_all_physical(fd);
+	ahnd = get_reloc_ahnd(fd, ctx->id);
 
 	for_each_ctx_engine(fd, ctx, e) {
 		igt_spin_t *spin;
 		const intel_ctx_t *spin_ctx;
 
 		if (!bg) {
-			bg = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
+			bg = igt_spin_new(fd,
+					  .ahnd = ahnd,
+					  .ctx = ctx,
+					  .engine = e->flags);
 			continue;
 		}
 
 		spin_ctx = intel_ctx_create(fd, &ctx->cfg);
-		spin = igt_spin_new(fd, .ctx = spin_ctx,
+		ahndN = get_reloc_ahnd(fd, spin_ctx->id);
+		spin = igt_spin_new(fd,
+				    .ahnd = ahndN,
+				    .ctx = spin_ctx,
 				    .engine = e->flags,
 				    .dependency = bg->handle,
 				    .flags = IGT_SPIN_SOFTDEP);
@@ -86,11 +94,13 @@ test_retire_vma_not_inactive(int fd)
 
 		gem_sync(fd, spin->handle);
 		igt_spin_free(fd, spin);
+		put_ahnd(ahndN);
 	}
 
 	igt_drop_caches_set(fd, DROP_RETIRE);
 	igt_spin_free(fd, bg);
 	intel_ctx_destroy(fd, ctx);
+	put_ahnd(ahnd);
 }
 
 int fd;
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 31/52] tests/gem_ringfill: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (29 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 30/52] tests/gem_request_retire: Add allocator support Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-08-06  5:04   ` Dixit, Ashutosh
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 32/52] tests/gem_softpin: Exercise eviction with softpinning Zbigniew Kempczyński
                   ` (22 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_ringfill.c | 36 ++++++++++++++++++++++++++++--------
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/tests/i915/gem_ringfill.c b/tests/i915/gem_ringfill.c
index d32d47994..1e2c8d2d9 100644
--- a/tests/i915/gem_ringfill.c
+++ b/tests/i915/gem_ringfill.c
@@ -94,6 +94,7 @@ static void fill_ring(int fd,
 	}
 }
 
+#define NUMSTORES 1024
 static void setup_execbuf(int fd, const intel_ctx_t *ctx,
 			  struct drm_i915_gem_execbuffer2 *execbuf,
 			  struct drm_i915_gem_exec_object2 *obj,
@@ -104,10 +105,11 @@ static void setup_execbuf(int fd, const intel_ctx_t *ctx,
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	uint32_t *batch, *b;
 	int i;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 
 	memset(execbuf, 0, sizeof(*execbuf));
 	memset(obj, 0, 2*sizeof(*obj));
-	memset(reloc, 0, 1024*sizeof(*reloc));
+	memset(reloc, 0, NUMSTORES * sizeof(*reloc));
 
 	execbuf->buffers_ptr = to_user_pointer(obj);
 	execbuf->flags = ring | (1 << 11) | (1 << 12);
@@ -118,23 +120,33 @@ static void setup_execbuf(int fd, const intel_ctx_t *ctx,
 	execbuf->rsvd1 = ctx->id;
 
 	obj[0].handle = gem_create(fd, 4096);
+	if (ahnd) {
+		obj[0].offset = get_offset(ahnd, obj[0].handle, 4096, 0);
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+	}
+
 	gem_write(fd, obj[0].handle, 0, &bbe, sizeof(bbe));
 	execbuf->buffer_count = 1;
 	gem_execbuf(fd, execbuf);
 
 	obj[0].flags |= EXEC_OBJECT_WRITE;
-	obj[1].handle = gem_create(fd, 1024*16 + 4096);
-
+	obj[1].handle = gem_create(fd, NUMSTORES * 16 + 4096);
 	obj[1].relocs_ptr = to_user_pointer(reloc);
-	obj[1].relocation_count = 1024;
+	obj[1].relocation_count = !ahnd ? NUMSTORES : 0;
+
+	if (ahnd) {
+		obj[1].offset = get_offset(ahnd, obj[1].handle,
+				NUMSTORES * 16 + 4096, 0);
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+	}
 
-	batch = gem_mmap__cpu(fd, obj[1].handle, 0, 16*1024 + 4096,
+	batch = gem_mmap__cpu(fd, obj[1].handle, 0, NUMSTORES * 16 + 4096,
 			      PROT_WRITE | PROT_READ);
 	gem_set_domain(fd, obj[1].handle,
 		       I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
 
 	b = batch;
-	for (i = 0; i < 1024; i++) {
+	for (i = 0; i < NUMSTORES; i++) {
 		uint64_t offset;
 
 		reloc[i].presumed_offset = obj[0].offset;
@@ -162,10 +174,11 @@ static void setup_execbuf(int fd, const intel_ctx_t *ctx,
 		*b++ = i;
 	}
 	*b++ = MI_BATCH_BUFFER_END;
-	munmap(batch, 16*1024+4096);
+	munmap(batch, NUMSTORES * 16 + 4096);
 
 	execbuf->buffer_count = 2;
 	gem_execbuf(fd, execbuf);
+	put_ahnd(ahnd);
 
 	check_bo(fd, obj[0].handle);
 }
@@ -177,6 +190,7 @@ static void run_test(int fd, const intel_ctx_t *ctx, unsigned ring,
 	struct drm_i915_gem_relocation_entry reloc[1024];
 	struct drm_i915_gem_execbuffer2 execbuf;
 	igt_hang_t hang;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 
 	if (flags & (SUSPEND | HIBERNATE)) {
 		run_test(fd, ctx, ring, 0, 0);
@@ -187,7 +201,7 @@ static void run_test(int fd, const intel_ctx_t *ctx, unsigned ring,
 
 	memset(&hang, 0, sizeof(hang));
 	if (flags & HANG)
-		hang = igt_hang_ctx(fd, ctx->id, ring & ~(3<<13), 0);
+		hang = igt_hang_ring_with_ahnd(fd, ring & ~(3<<13), ahnd);
 
 	if (flags & (CHILD | FORKED | BOMB)) {
 		int nchild;
@@ -321,11 +335,13 @@ igt_main
 			for_each_ring(e, fd) {
 				igt_dynamic_f("%s", e->name) {
 					igt_require(gem_can_store_dword(fd, eb_ring(e)));
+					intel_allocator_multiprocess_start();
 					run_test(fd, intel_ctx_0(fd),
 						 eb_ring(e),
 						 m->flags,
 						 m->timeout);
 					gem_quiescent_gpu(fd);
+					intel_allocator_multiprocess_stop();
 				}
 			}
 		}
@@ -342,11 +358,13 @@ igt_main
 					continue;
 
 				igt_dynamic_f("%s", e->name) {
+					intel_allocator_multiprocess_start();
 					run_test(fd, ctx,
 						 e->flags,
 						 m->flags,
 						 m->timeout);
 					gem_quiescent_gpu(fd);
+					intel_allocator_multiprocess_stop();
 				}
 			}
 		}
@@ -354,6 +372,7 @@ igt_main
 
 	igt_subtest("basic-all") {
 		const struct intel_execution_engine2 *e;
+		intel_allocator_multiprocess_start();
 
 		for_each_ctx_engine(fd, ctx, e) {
 			if (!gem_class_can_store_dword(fd, e->class))
@@ -364,6 +383,7 @@ igt_main
 		}
 
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 	}
 
 	igt_fixture {
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 32/52] tests/gem_softpin: Exercise eviction with softpinning
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (30 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 31/52] tests/gem_ringfill: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 33/52] tests/gem_spin_batch: Adopt to use allocator Zbigniew Kempczyński
                   ` (21 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

From: Andrzej Turko <andrzej.turko@linux.intel.com>

Exercise eviction of many gem objects. The added tests are analogous
to gem_exec_gttfill, but they use softpin and do not require relocation
support.
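
The core of the eviction loop is: open a RELOC allocator over (almost) the
whole GTT, ask it for an offset for every object and execbuf with
EXEC_OBJECT_PINNED so the kernel has to evict whatever currently sits in
that range. A condensed sketch of that step, assuming the object already
holds a valid batch ending in MI_BATCH_BUFFER_END (pin_and_run() is an
illustrative name only, not part of the patch):

#include "igt.h"
#include "intel_allocator.h"

static void pin_and_run(int i915, uint32_t handle, uint64_t gtt_size)
{
        struct drm_i915_gem_exec_object2 obj = {
                .handle = handle,
                .flags = EXEC_OBJECT_PINNED,
        };
        struct drm_i915_gem_execbuffer2 execbuf = {
                .buffers_ptr = to_user_pointer(&obj),
                .buffer_count = 1,
        };
        /* keep away from the last page, as the test below does */
        uint64_t ahnd = intel_allocator_open_full(i915, 0, 0, gtt_size - 4096,
                                                  INTEL_ALLOCATOR_RELOC,
                                                  ALLOC_STRATEGY_HIGH_TO_LOW);

        /* the allocator picks the address, execbuf pins the object there */
        obj.offset = intel_allocator_alloc(ahnd, handle, 4096, 1 << 21);
        gem_execbuf(i915, &execbuf);
        gem_sync(i915, handle);

        intel_allocator_close(ahnd);
}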

Signed-off-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_softpin.c | 213 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 212 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index bdb04821d..82d8a2861 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -29,6 +29,7 @@
 #include "i915/gem.h"
 #include "i915/gem_create.h"
 #include "igt.h"
+#include "igt_rand.h"
 #include "intel_allocator.h"
 
 #define EXEC_OBJECT_PINNED	(1<<4)
@@ -877,9 +878,209 @@ static void test_allocator_fork(int fd)
 	intel_allocator_multiprocess_stop();
 }
 
+#define BATCH_SIZE (4096<<10)
+/* We don't have alignment detection yet, so assume the worst-case scenario. */
+#define BATCH_ALIGNMENT (1 << 21)
+
+struct batch {
+	uint32_t handle;
+	void *ptr;
+};
+
+static void xchg_batch(void *array, unsigned int i, unsigned int j)
+{
+	struct batch *batches = array;
+	struct batch tmp;
+
+	tmp = batches[i];
+	batches[i] = batches[j];
+	batches[j] = tmp;
+}
+
+static void submit(int fd, int gen,
+		   struct drm_i915_gem_execbuffer2 *eb,
+		   struct batch *batches, unsigned int count,
+		   uint64_t ahnd)
+{
+	struct drm_i915_gem_exec_object2 obj;
+	uint32_t batch[16];
+	uint64_t address;
+	unsigned n;
+
+	memset(&obj, 0, sizeof(obj));
+	obj.flags = EXEC_OBJECT_PINNED;
+
+	for (unsigned i = 0; i < count; i++) {
+		obj.handle = batches[i].handle;
+		obj.offset = intel_allocator_alloc(ahnd, obj.handle,
+						   BATCH_SIZE,
+						   BATCH_ALIGNMENT);
+		address = obj.offset + BATCH_SIZE - eb->batch_start_offset - 8;
+		n = 0;
+		batch[n] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
+		if (gen >= 8) {
+			batch[n] |= 1 << 21;
+			batch[n]++;
+			batch[++n] = address;
+			batch[++n] = address >> 32;
+		} else if (gen >= 4) {
+			batch[++n] = 0;
+			batch[++n] = address;
+		} else {
+			batch[n]--;
+			batch[++n] = address;
+		}
+		batch[++n] = obj.offset; /* lower_32_bits(value) */
+		batch[++n] = obj.offset >> 32; /* upper_32_bits(value) / nop */
+		batch[++n] = MI_BATCH_BUFFER_END;
+		eb->buffers_ptr = to_user_pointer(&obj);
+
+		memcpy(batches[i].ptr + eb->batch_start_offset,
+		       batch, sizeof(batch));
+
+		gem_execbuf(fd, eb);
+	}
+	/* As we have been lying about the write_domain, we need to do a sync */
+	gem_sync(fd, obj.handle);
+}
+
+static void test_allocator_evict(int fd, const intel_ctx_t *ctx,
+				 unsigned ring, int timeout)
+{
+	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
+	struct drm_i915_gem_execbuffer2 execbuf;
+	unsigned engines[I915_EXEC_RING_MASK + 1];
+	volatile uint64_t *shared;
+	struct timespec tv = {};
+	struct batch *batches;
+	unsigned nengine;
+	unsigned count;
+	uint64_t size, ahnd;
+
+	shared = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+	igt_assert(shared != MAP_FAILED);
+
+	nengine = 0;
+	if (ring == ALL_ENGINES) {
+		struct intel_execution_engine2 *e;
+
+		for_each_ctx_engine(fd, ctx, e) {
+			if (!gem_class_can_store_dword(fd, e->class))
+				continue;
+
+			engines[nengine++] = e->flags;
+		}
+	} else {
+		engines[nengine++] = ring;
+	}
+	igt_require(nengine);
+	igt_assert(nengine * 64 <= BATCH_SIZE);
+
+	size = gem_aperture_size(fd);
+	if (!gem_uses_full_ppgtt(fd))
+		size /= 2;
+	if (size > 1ull<<32) /* Limit to 4GiB as we do not use allow-48b */
+		size = 1ull << 32;
+	igt_require(size < (1ull<<32) * BATCH_SIZE);
+
+	count = size / BATCH_SIZE + 1;
+	igt_debug("Using %'d batches to fill %'llu aperture on %d engines\n",
+		  count, (long long)size, nengine);
+
+	intel_allocator_multiprocess_start();
+	/* Avoid allocating on the last page */
+	ahnd = intel_allocator_open_full(fd, 0, 0, size - 4096,
+					 INTEL_ALLOCATOR_RELOC,
+					 ALLOC_STRATEGY_HIGH_TO_LOW);
+
+	intel_require_memory(count, BATCH_SIZE, CHECK_RAM);
+	intel_detect_and_clear_missed_interrupts(fd);
+
+	igt_nsec_elapsed(&tv);
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffer_count = 1;
+	if (gen < 6)
+		execbuf.flags |= I915_EXEC_SECURE;
+
+	batches = calloc(count, sizeof(*batches));
+	igt_assert(batches);
+	for (unsigned i = 0; i < count; i++) {
+		batches[i].handle = gem_create(fd, BATCH_SIZE);
+		batches[i].ptr =
+			gem_mmap__device_coherent(fd, batches[i].handle,
+						  0, BATCH_SIZE, PROT_WRITE);
+	}
+
+	/* Flush all memory before we start the timer */
+	submit(fd, gen, &execbuf, batches, count, ahnd);
+
+	igt_info("Setup %u batches in %.2fms\n",
+		 count, 1e-6 * igt_nsec_elapsed(&tv));
+
+	igt_fork(child, nengine) {
+		uint64_t dst, src, dst_offset, src_offset;
+		uint64_t cycles = 0;
+
+		hars_petruska_f54_1_random_perturb(child);
+		igt_permute_array(batches, count, xchg_batch);
+		execbuf.batch_start_offset = child * 64;
+		execbuf.flags |= engines[child];
+
+		dst_offset = BATCH_SIZE - child*64 - 8;
+		if (gen >= 8)
+			src_offset = child*64 + 3*sizeof(uint32_t);
+		else if (gen >= 4)
+			src_offset = child*64 + 4*sizeof(uint32_t);
+		else
+			src_offset = child*64 + 2*sizeof(uint32_t);
+
+		/* We need to open the allocator again in the new process */
+		ahnd = intel_allocator_open_full(fd, 0, 0, size - 4096,
+						 INTEL_ALLOCATOR_RELOC,
+						 ALLOC_STRATEGY_HIGH_TO_LOW);
+
+		igt_until_timeout(timeout) {
+			submit(fd, gen, &execbuf, batches, count, ahnd);
+			for (unsigned i = 0; i < count; i++) {
+				dst = *(uint64_t *)(batches[i].ptr + dst_offset);
+				src = *(uint64_t *)(batches[i].ptr + src_offset);
+				igt_assert_eq_u64(dst, src);
+			}
+			cycles++;
+		}
+		shared[child] = cycles;
+		igt_info("engine[%d]: %llu cycles\n", child, (long long)cycles);
+		intel_allocator_close(ahnd);
+	}
+	igt_waitchildren();
+
+	intel_allocator_close(ahnd);
+	intel_allocator_multiprocess_stop();
+
+	for (unsigned i = 0; i < count; i++) {
+		munmap(batches[i].ptr, BATCH_SIZE);
+		gem_close(fd, batches[i].handle);
+	}
+	free(batches);
+
+	shared[nengine] = 0;
+	for (unsigned i = 0; i < nengine; i++)
+		shared[nengine] += shared[i];
+	igt_info("Total: %llu cycles\n", (long long)shared[nengine]);
+
+	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+}
+
+#define test_each_engine(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
+		igt_dynamic_f("%s", e->name)
+
 igt_main
 {
+	const struct intel_execution_engine2 *e;
 	int fd = -1;
+	const intel_ctx_t *ctx;
 
 	igt_fixture {
 		fd = drm_open_driver_master(DRIVER_INTEL);
@@ -887,6 +1088,8 @@ igt_main
 		gem_require_blitter(fd);
 		igt_require(gem_has_softpin(fd));
 		igt_require(gem_can_store_dword(fd, 0));
+
+		ctx = intel_ctx_create_all_physical(fd);
 	}
 
 	igt_subtest("invalid")
@@ -922,6 +1125,12 @@ igt_main
 
 		igt_subtest("allocator-fork")
 			test_allocator_fork(fd);
+
+		test_each_engine("allocator-evict", fd, ctx, e)
+			test_allocator_evict(fd, ctx, e->flags, 20);
+
+		igt_subtest("allocator-evict-all-engines")
+			test_allocator_evict(fd, ctx, ALL_ENGINES, 20);
 	}
 
 	igt_subtest("softpin")
@@ -949,6 +1158,8 @@ igt_main
 	igt_subtest("evict-hang")
 		test_evict_hang(fd);
 
-	igt_fixture
+	igt_fixture {
+		intel_ctx_destroy(fd, ctx);
 		close(fd);
+	}
 }
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 33/52] tests/gem_spin_batch: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (31 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 32/52] tests/gem_softpin: Exercise eviction with softpinning Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 34/52] tests/gem_tiled_fence_blits: " Zbigniew Kempczyński
                   ` (20 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.
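
Note that spin-each/user-each spin on every engine from forked children,
which is why the fixtures below switch the allocator into multiprocess mode
around them; every process then takes its own handle. Roughly (a sketch
only, forked_spin() is not part of the patch):

#include "igt.h"
#include "intel_allocator.h"

static void forked_spin(int i915, int nchildren)
{
        /* children talk to the allocator in the parent over IPC,
         * so multiprocess mode must be enabled before forking */
        intel_allocator_multiprocess_start();

        igt_fork(child, nchildren) {
                uint64_t ahnd = get_reloc_ahnd(i915, 0);
                igt_spin_t *spin = igt_spin_new(i915, .ahnd = ahnd);

                igt_spin_free(i915, spin);
                put_ahnd(ahnd);
        }
        igt_waitchildren();

        intel_allocator_multiprocess_stop();
}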

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_spin_batch.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/tests/i915/gem_spin_batch.c b/tests/i915/gem_spin_batch.c
index 4a9d6c2df..653812c7a 100644
--- a/tests/i915/gem_spin_batch.c
+++ b/tests/i915/gem_spin_batch.c
@@ -45,13 +45,14 @@ static void spin(int fd, const intel_ctx_t *ctx_id,
 	struct timespec tv = { };
 	struct timespec itv = { };
 	uint64_t elapsed;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx_id->id);
 
-	spin = __igt_spin_new(fd, .ctx = ctx_id, .engine = engine,
-			      .flags = flags);
+	spin = __igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx_id,
+			      .engine = engine, .flags = flags);
 	while ((elapsed = igt_nsec_elapsed(&tv)) >> 30 < timeout_sec) {
 		igt_spin_t *next =
-			__igt_spin_new(fd, .ctx = ctx_id, .engine = engine,
-				       .flags = flags);
+			__igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx_id,
+				       .engine = engine, .flags = flags);
 
 		igt_spin_set_timeout(spin,
 				     timeout_100ms - igt_nsec_elapsed(&itv));
@@ -67,6 +68,7 @@ static void spin(int fd, const intel_ctx_t *ctx_id,
 		loops++;
 	}
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 
 	igt_info("Completed %ld loops in %lld ns, target %ld\n",
 		 loops, (long long)elapsed, (long)(elapsed / timeout_100ms));
@@ -82,11 +84,12 @@ static void spin_resubmit(int fd, const intel_ctx_t *ctx,
 {
 	const intel_ctx_t *new_ctx = NULL;
 	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 
 	if (flags & RESUBMIT_NEW_CTX)
 		igt_require(gem_has_contexts(fd));
 
-	spin = __igt_spin_new(fd, .ctx = ctx, .engine = engine);
+	spin = __igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx, .engine = engine);
 	if (flags & RESUBMIT_NEW_CTX) {
 		new_ctx = intel_ctx_create(fd, &ctx->cfg);
 		spin->execbuf.rsvd1 = new_ctx->id;
@@ -110,6 +113,7 @@ static void spin_resubmit(int fd, const intel_ctx_t *ctx,
 		intel_ctx_destroy(fd, new_ctx);
 
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static void spin_exit_handler(int sig)
@@ -139,6 +143,7 @@ static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
 	const struct intel_execution_engine2 *e;
 	intel_ctx_cfg_t cfg = ctx->cfg;
 	struct igt_spin *spin, *n;
+	uint64_t ahnd;
 	IGT_LIST_HEAD(list);
 
 	for_each_ctx_cfg_engine(i915, &cfg, e) {
@@ -147,9 +152,11 @@ static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
 
 		if (flags & PARALLEL_SPIN_NEW_CTX)
 			ctx = intel_ctx_create(i915, &cfg);
+		ahnd = get_reloc_ahnd(i915, ctx->id);
 
 		/* Prevent preemption so only one is allowed on each engine */
 		spin = igt_spin_new(i915,
+				    .ahnd = ahnd,
 				    .ctx = ctx,
 				    .engine = e->flags,
 				    .flags = (IGT_SPIN_POLL_RUN |
@@ -163,9 +170,11 @@ static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
 
 	igt_list_for_each_entry_safe(spin, n, &list, link) {
 		igt_assert(gem_bo_busy(i915, spin->handle));
+		ahnd = spin->ahnd;
 		igt_spin_end(spin);
 		gem_sync(i915, spin->handle);
 		igt_spin_free(i915, spin);
+		put_ahnd(ahnd);
 	}
 }
 
@@ -249,12 +258,20 @@ igt_main
 
 #undef test_each_engine
 
-	igt_subtest("spin-each")
-		spin_on_all_engines(fd, ctx, 0, 3);
+	igt_subtest_group {
+		igt_fixture
+			intel_allocator_multiprocess_start();
 
-	igt_subtest("user-each") {
-		igt_require(has_userptr(fd));
-		spin_on_all_engines(fd, ctx, IGT_SPIN_USERPTR, 3);
+		igt_subtest("spin-each")
+			spin_on_all_engines(fd, ctx, 0, 3);
+
+		igt_subtest("user-each") {
+			igt_require(has_userptr(fd));
+			spin_on_all_engines(fd, ctx, IGT_SPIN_USERPTR, 3);
+		}
+
+		igt_fixture
+			intel_allocator_multiprocess_stop();
 	}
 
 	igt_fixture {
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 34/52] tests/gem_tiled_fence_blits: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (32 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 33/52] tests/gem_spin_batch: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 35/52] tests/gem_unfence_active_buffers: " Zbigniew Kempczyński
                   ` (19 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.
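
The test keeps both paths working: with relocations the kernel still patches
the batch, while with a valid ahnd the offsets are written by hand and the
objects pinned. The per-object setup boils down to something like this
(sketch; setup_object() is an illustrative name, not part of the patch):

#include "igt.h"
#include "intel_allocator.h"

static void setup_object(uint64_t ahnd,
                         struct drm_i915_gem_exec_object2 *obj,
                         struct drm_i915_gem_relocation_entry *reloc,
                         unsigned int nreloc, uint64_t size)
{
        obj->relocs_ptr = to_user_pointer(reloc);
        /* with an allocator handle the kernel must see no relocations */
        obj->relocation_count = ahnd ? 0 : nreloc;

        if (ahnd) {
                /* softpin: pick the address up front and pin it */
                obj->offset = get_offset(ahnd, obj->handle, size, 0);
                obj->flags |= EXEC_OBJECT_PINNED;
        }
}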

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_tiled_fence_blits.c | 65 ++++++++++++++++++++++--------
 1 file changed, 48 insertions(+), 17 deletions(-)

diff --git a/tests/i915/gem_tiled_fence_blits.c b/tests/i915/gem_tiled_fence_blits.c
index 6ce3a38d9..9ea61f110 100644
--- a/tests/i915/gem_tiled_fence_blits.c
+++ b/tests/i915/gem_tiled_fence_blits.c
@@ -86,18 +86,18 @@ static void check_bo(int fd, uint32_t handle, uint32_t start_val)
 	}
 }
 
-static uint32_t
-create_batch(int fd, struct drm_i915_gem_relocation_entry *reloc)
+static void
+update_batch(int fd, uint32_t bb_handle,
+	     struct drm_i915_gem_relocation_entry *reloc,
+	     uint64_t dst_offset, uint64_t src_offset)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const bool has_64b_reloc = gen >= 8;
 	uint32_t *batch;
-	uint32_t handle;
 	uint32_t pitch;
 	int i = 0;
 
-	handle = gem_create(fd, 4096);
-	batch = gem_mmap__cpu(fd, handle, 0, 4096, PROT_WRITE);
+	batch = gem_mmap__cpu(fd, bb_handle, 0, 4096, PROT_WRITE);
 
 	batch[i] = (XY_SRC_COPY_BLT_CMD |
 		    XY_SRC_COPY_BLT_WRITE_ALPHA |
@@ -117,22 +117,20 @@ create_batch(int fd, struct drm_i915_gem_relocation_entry *reloc)
 	reloc[0].offset = sizeof(*batch) * i;
 	reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
-	batch[i++] = 0;
+	batch[i++] = dst_offset;
 	if (has_64b_reloc)
-		batch[i++] = 0;
+		batch[i++] = dst_offset >> 32;
 
 	batch[i++] = 0; /* src (x1, y1) */
 	batch[i++] = pitch;
 	reloc[1].offset = sizeof(*batch) * i;
 	reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
-	batch[i++] = 0;
+	batch[i++] = src_offset;
 	if (has_64b_reloc)
-		batch[i++] = 0;
+		batch[i++] = src_offset >> 32;
 
 	batch[i++] = MI_BATCH_BUFFER_END;
 	munmap(batch, 4096);
-
-	return handle;
 }
 
 static void xchg_u32(void *array, unsigned i, unsigned j)
@@ -144,7 +142,7 @@ static void xchg_u32(void *array, unsigned i, unsigned j)
 	base[j] = tmp;
 }
 
-static void run_test(int fd, int count)
+static void run_test(int fd, int count, uint64_t end)
 {
 	struct drm_i915_gem_relocation_entry reloc[2];
 	struct drm_i915_gem_exec_object2 obj[3];
@@ -152,14 +150,27 @@ static void run_test(int fd, int count)
 	uint32_t *src_order, *dst_order;
 	uint32_t *bo, *bo_start_val;
 	uint32_t start = 0;
+	uint64_t ahnd = 0;
 
+	if (!gem_has_relocations(fd))
+		ahnd = intel_allocator_open_full(fd, 0, 0, end,
+						 INTEL_ALLOCATOR_RELOC,
+						 ALLOC_STRATEGY_LOW_TO_HIGH);
 	memset(reloc, 0, sizeof(reloc));
 	memset(obj, 0, sizeof(obj));
 	obj[0].flags = EXEC_OBJECT_NEEDS_FENCE;
 	obj[1].flags = EXEC_OBJECT_NEEDS_FENCE;
-	obj[2].handle = create_batch(fd, reloc);
+	obj[2].handle = gem_create(fd, 4096);
+	obj[2].offset = get_offset(ahnd, obj[2].handle, 4096, 0);
+	if (ahnd) {
+		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	}
 	obj[2].relocs_ptr = to_user_pointer(reloc);
-	obj[2].relocation_count = ARRAY_SIZE(reloc);
+	obj[2].relocation_count = !ahnd ? ARRAY_SIZE(reloc) : 0;
+	update_batch(fd, obj[2].handle, reloc,
+		     obj[0].offset, obj[1].offset);
 
 	memset(&eb, 0, sizeof(eb));
 	eb.buffers_ptr = to_user_pointer(obj);
@@ -198,7 +209,23 @@ static void run_test(int fd, int count)
 			reloc[0].target_handle = obj[0].handle = bo[dst];
 			reloc[1].target_handle = obj[1].handle = bo[src];
 
+			if (ahnd) {
+				obj[0].offset = get_offset(ahnd, obj[0].handle,
+						sizeof(linear), 0);
+				obj[1].offset = get_offset(ahnd, obj[1].handle,
+						sizeof(linear), 0);
+				obj[2].offset = get_offset(ahnd, obj[2].handle,
+						4096, 0);
+				update_batch(fd, obj[2].handle, reloc,
+					     obj[0].offset, obj[1].offset);
+			}
+
 			gem_execbuf(fd, &eb);
+			if (ahnd) {
+				gem_close(fd, obj[2].handle);
+				obj[2].handle = gem_create(fd, 4096);
+			}
+
 			bo_start_val[dst] = bo_start_val[src];
 		}
 	}
@@ -210,6 +237,7 @@ static void run_test(int fd, int count)
 	free(bo);
 
 	gem_close(fd, obj[2].handle);
+	put_ahnd(ahnd);
 }
 
 #define MAX_32b ((1ull << 32) - 4096)
@@ -217,7 +245,7 @@ static void run_test(int fd, int count)
 igt_main
 {
 	const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
-	uint64_t count = 0;
+	uint64_t count = 0, end;
 	int fd;
 
 	igt_fixture {
@@ -229,6 +257,7 @@ igt_main
 		count = gem_mappable_aperture_size(fd); /* thrash fences! */
 		if (count >> 32)
 			count = MAX_32b;
+		end = count;
 		count = 3 + count / (1024 * 1024);
 		igt_require(count > 1);
 		intel_require_memory(count, 1024 * 1024 , CHECK_RAM);
@@ -238,12 +267,14 @@ igt_main
 	}
 
 	igt_subtest("basic")
-		run_test (fd, 2);
+		run_test(fd, 2, end);
 
 	igt_subtest("normal") {
+		intel_allocator_multiprocess_start();
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, end);
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 	}
 
 	igt_fixture
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 35/52] tests/gem_unfence_active_buffers: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (33 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 34/52] tests/gem_tiled_fence_blits: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-08-06  5:44   ` Dixit, Ashutosh
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 36/52] tests/gem_unref_active_buffers: " Zbigniew Kempczyński
                   ` (18 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_unfence_active_buffers.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_unfence_active_buffers.c b/tests/i915/gem_unfence_active_buffers.c
index 2c9cebb6d..532eed2e7 100644
--- a/tests/i915/gem_unfence_active_buffers.c
+++ b/tests/i915/gem_unfence_active_buffers.c
@@ -69,11 +69,13 @@ igt_simple_main
 {
 	int i915, num_fences;
 	igt_spin_t *spin;
+	uint64_t ahnd;
 
 	i915 = drm_open_driver(DRIVER_INTEL);
 	igt_require_gem(i915);
 
-	spin = igt_spin_new(i915);
+	ahnd = get_reloc_ahnd(i915, 0);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 
 	num_fences = gem_available_fences(i915);
 	igt_require(num_fences);
@@ -96,4 +98,5 @@ igt_simple_main
 	}
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 36/52] tests/gem_unref_active_buffers: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (34 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 35/52] tests/gem_unfence_active_buffers: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-08-06  5:46   ` Dixit, Ashutosh
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 37/52] tests/gem_wait: " Zbigniew Kempczyński
                   ` (17 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_unref_active_buffers.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_unref_active_buffers.c b/tests/i915/gem_unref_active_buffers.c
index 731190b39..3b8c981da 100644
--- a/tests/i915/gem_unref_active_buffers.c
+++ b/tests/i915/gem_unref_active_buffers.c
@@ -69,11 +69,13 @@ igt_simple_main
 	struct itimerval itv;
 	igt_spin_t *spin;
 	int i915;
+	uint64_t ahnd;
 
 	i915 = drm_open_driver(DRIVER_INTEL);
 	igt_require_gem(i915);
 
-	spin = igt_spin_new(i915);
+	ahnd = get_reloc_ahnd(i915, 0);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 	fcntl(i915, F_SETFL, fcntl(i915, F_GETFL) | O_NONBLOCK);
 
 	sigaction(SIGALRM, &sa, &old_sa);
@@ -118,4 +120,5 @@ igt_simple_main
 	sigaction(SIGALRM, &old_sa, NULL);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 37/52] tests/gem_wait: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (35 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 36/52] tests/gem_unref_active_buffers: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-08-06  5:48   ` Dixit, Ashutosh
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 38/52] tests/gem_watchdog: Adopt to use no-reloc Zbigniew Kempczyński
                   ` (16 subsequent siblings)
  53 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_wait.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tests/i915/gem_wait.c b/tests/i915/gem_wait.c
index d56707eda..0d1fea996 100644
--- a/tests/i915/gem_wait.c
+++ b/tests/i915/gem_wait.c
@@ -78,11 +78,13 @@ static void invalid_buf(int fd)
 static void basic(int fd, const intel_ctx_t *ctx, unsigned engine,
 		  unsigned flags)
 {
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	IGT_CORK_HANDLE(cork);
 	uint32_t plug =
 		flags & (WRITE | AWAIT) ? igt_cork_plug(&cork, fd) : 0;
 	igt_spin_t *spin =
 		igt_spin_new(fd,
+			     .ahnd = ahnd,
 			     .ctx = ctx,
 			     .engine = engine,
 			     .dependency = plug,
@@ -147,6 +149,7 @@ static void basic(int fd, const intel_ctx_t *ctx, unsigned engine,
 	if (plug)
 		gem_close(fd, plug);
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_all_engines(const char *name, int i915, const intel_ctx_t *ctx,
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 38/52] tests/gem_watchdog: Adopt to use no-reloc
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (36 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 37/52] tests/gem_wait: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 39/52] tests/gem_workarounds: Adopt to use allocator Zbigniew Kempczyński
                   ` (15 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_watchdog.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_watchdog.c b/tests/i915/gem_watchdog.c
index 4d4aaee48..db562335a 100644
--- a/tests/i915/gem_watchdog.c
+++ b/tests/i915/gem_watchdog.c
@@ -111,10 +111,13 @@ static void physical(int i915, const intel_ctx_t *ctx)
 	unsigned int num_engines, i, count;
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin[GEM_MAX_ENGINES];
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	i = 0;
 	for_each_ctx_engine(i915, ctx, e) {
-		spin[i] = igt_spin_new(i915, .ctx = ctx,
+		spin[i] = igt_spin_new(i915,
+				       .ahnd = ahnd,
+				       .ctx = ctx,
 				       .engine = e->flags,
 				       .flags = spin_flags());
 		i++;
@@ -125,6 +128,7 @@ static void physical(int i915, const intel_ctx_t *ctx)
 
 	for (i = 0; i < num_engines; i++)
 		igt_spin_free(i915, spin[i]);
+	put_ahnd(ahnd);
 
 	igt_assert_eq(count, num_engines);
 }
@@ -216,6 +220,7 @@ static void virtual(int i915, const intel_ctx_cfg_t *base_cfg)
 	unsigned int expect = num_engines;
 	intel_ctx_cfg_t cfg = {};
 	const intel_ctx_t *ctx[num_engines];
+	uint64_t ahnd;
 
 	igt_require(gem_has_execlists(i915));
 
@@ -239,10 +244,12 @@ static void virtual(int i915, const intel_ctx_cfg_t *base_cfg)
 			igt_assert(i < num_engines);
 
 			ctx[i] = intel_ctx_create(i915, &cfg);
+			ahnd = get_reloc_ahnd(i915, ctx[i]->id);
 
 			set_load_balancer(i915, ctx[i]->id, ci, count, NULL);
 
 			spin[i] = igt_spin_new(i915,
+					       .ahnd = ahnd,
 					       .ctx = ctx[i],
 					       .flags = spin_flags());
 			i++;
@@ -254,8 +261,10 @@ static void virtual(int i915, const intel_ctx_cfg_t *base_cfg)
 	count = wait_timeout(i915, spin, num_engines, wait_us, expect);
 
 	for (i = 0; i < num_engines && spin[i]; i++) {
+		ahnd = spin[i]->ahnd;
 		igt_spin_free(i915, spin[i]);
 		intel_ctx_destroy(i915, ctx[i]);
+		put_ahnd(ahnd);
 	}
 
 	igt_assert_eq(count, expect);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 39/52] tests/gem_workarounds: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (37 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 38/52] tests/gem_watchdog: Adopt to use no-reloc Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 40/52] tests/i915_hangman: " Zbigniew Kempczyński
                   ` (14 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.
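
With an allocator handle the batch has to carry the final GPU address of the
result buffer itself (low dword, then high dword on gen8+), instead of the
zero placeholder the kernel would have relocated. The address emission in
the hunk below reduces to roughly this (sketch, emit_addr() is a made-up
helper):

#include <stdint.h>

static uint32_t *emit_addr(uint32_t *cs, uint64_t obj_offset, uint32_t delta)
{
        uint64_t addr = obj_offset + delta;

        *cs++ = addr;           /* lower 32 bits */
        *cs++ = addr >> 32;     /* upper 32 bits, gen8+ only */

        return cs;
}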

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/gem_workarounds.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/tests/i915/gem_workarounds.c b/tests/i915/gem_workarounds.c
index e240901c4..3d1851279 100644
--- a/tests/i915/gem_workarounds.c
+++ b/tests/i915/gem_workarounds.c
@@ -94,6 +94,7 @@ static int workaround_fail_count(int i915, const intel_ctx_t *ctx)
 	uint32_t *base, *out;
 	igt_spin_t *spin;
 	int fw, fail = 0;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	reloc = calloc(num_wa_regs, sizeof(*reloc));
 	igt_assert(reloc);
@@ -109,7 +110,13 @@ static int workaround_fail_count(int i915, const intel_ctx_t *ctx)
 	gem_set_caching(i915, obj[0].handle, I915_CACHING_CACHED);
 	obj[1].handle = gem_create(i915, batch_sz);
 	obj[1].relocs_ptr = to_user_pointer(reloc);
-	obj[1].relocation_count = num_wa_regs;
+	obj[1].relocation_count = !ahnd ? num_wa_regs : 0;
+	if (ahnd) {
+		obj[0].offset = get_offset(ahnd, obj[0].handle, result_sz, 0);
+		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[1].offset = get_offset(ahnd, obj[1].handle, batch_sz, 0);
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	out = base =
 		gem_mmap__cpu(i915, obj[1].handle, 0, batch_sz, PROT_WRITE);
@@ -121,9 +128,9 @@ static int workaround_fail_count(int i915, const intel_ctx_t *ctx)
 		reloc[i].delta = i * sizeof(uint32_t);
 		reloc[i].read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 		reloc[i].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
-		*out++ = reloc[i].delta;
+		*out++ = obj[0].offset + reloc[i].delta;
 		if (gen >= 8)
-			*out++ = 0;
+			*out++ = (obj[0].offset + reloc[i].delta) >> 32;
 	}
 	*out++ = MI_BATCH_BUFFER_END;
 	munmap(base, batch_sz);
@@ -136,7 +143,8 @@ static int workaround_fail_count(int i915, const intel_ctx_t *ctx)
 
 	gem_set_domain(i915, obj[0].handle, I915_GEM_DOMAIN_CPU, 0);
 
-	spin = igt_spin_new(i915, .ctx = ctx, .flags = IGT_SPIN_POLL_RUN);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+			    .flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin);
 
 	fw = igt_open_forcewake_handle(i915);
@@ -172,6 +180,7 @@ static int workaround_fail_count(int i915, const intel_ctx_t *ctx)
 
 	close(fw);
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 
 	gem_close(i915, obj[1].handle);
 	gem_close(i915, obj[0].handle);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 40/52] tests/i915_hangman: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (38 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 39/52] tests/gem_workarounds: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 41/52] tests/i915_module_load: " Zbigniew Kempczyński
                   ` (13 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/i915_hangman.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/tests/i915/i915_hangman.c b/tests/i915/i915_hangman.c
index ddead9493..4c18c22db 100644
--- a/tests/i915/i915_hangman.c
+++ b/tests/i915/i915_hangman.c
@@ -211,10 +211,12 @@ static void test_error_state_capture(const intel_ctx_t *ctx, unsigned ring_id,
 	uint32_t *batch;
 	igt_hang_t hang;
 	uint64_t offset;
+	uint64_t ahnd = get_reloc_ahnd(device, ctx->id);
 
 	clear_error_state();
 
-	hang = igt_hang_ctx(device, ctx->id, ring_id, HANG_ALLOW_CAPTURE);
+	hang = igt_hang_ctx_with_ahnd(device, ahnd, ctx->id, ring_id,
+				      HANG_ALLOW_CAPTURE);
 	offset = hang.spin->obj[IGT_SPIN_BATCH].offset;
 
 	batch = gem_mmap__cpu(device, hang.spin->handle, 0, 4096, PROT_READ);
@@ -224,6 +226,7 @@ static void test_error_state_capture(const intel_ctx_t *ctx, unsigned ring_id,
 
 	check_error_state(ring_name, offset, batch);
 	munmap(batch, 4096);
+	put_ahnd(ahnd);
 }
 
 static void
@@ -234,6 +237,7 @@ test_engine_hang(const intel_ctx_t *ctx,
 	const intel_ctx_t *tmp_ctx;
 	igt_spin_t *spin, *next;
 	IGT_LIST_HEAD(list);
+	uint64_t ahnd = get_reloc_ahnd(device, ctx->id), ahndN;
 
 	igt_skip_on(flags & IGT_SPIN_INVALID_CS &&
 		    gem_engine_has_cmdparser(device, &ctx->cfg, e->flags));
@@ -244,7 +248,10 @@ test_engine_hang(const intel_ctx_t *ctx,
 			continue;
 
 		tmp_ctx = intel_ctx_create(device, &ctx->cfg);
-		spin = __igt_spin_new(device, .ctx = tmp_ctx,
+		ahndN = get_reloc_ahnd(device, tmp_ctx->id);
+		spin = __igt_spin_new(device,
+				      .ahnd = ahndN,
+				      .ctx = tmp_ctx,
 				      .engine = other->flags,
 				      .flags = IGT_SPIN_FENCE_OUT);
 		intel_ctx_destroy(device, tmp_ctx);
@@ -254,6 +261,7 @@ test_engine_hang(const intel_ctx_t *ctx,
 
 	/* And on the target engine, we hang */
 	spin = igt_spin_new(device,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .flags = (IGT_SPIN_FENCE_OUT |
@@ -267,13 +275,16 @@ test_engine_hang(const intel_ctx_t *ctx,
 
 	/* But no other engines/clients should be affected */
 	igt_list_for_each_entry_safe(spin, next, &list, link) {
+		ahndN = spin->ahnd;
 		igt_assert(sync_fence_wait(spin->out_fence, 0) == -ETIME);
 		igt_spin_end(spin);
 
 		igt_assert(sync_fence_wait(spin->out_fence, 500) == 0);
 		igt_assert_eq(sync_fence_status(spin->out_fence), 1);
 		igt_spin_free(device, spin);
+		put_ahnd(ahndN);
 	}
+	put_ahnd(ahnd);
 }
 
 /* This test covers the case where we end up in an uninitialised area of the
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 41/52] tests/i915_module_load: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (39 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 40/52] tests/i915_hangman: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 42/52] tests/i915_pm_rc6_residency: " Zbigniew Kempczyński
                   ` (12 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/i915_module_load.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/tests/i915/i915_module_load.c b/tests/i915/i915_module_load.c
index 98ceb5d85..993c78325 100644
--- a/tests/i915/i915_module_load.c
+++ b/tests/i915/i915_module_load.c
@@ -42,6 +42,7 @@ static void store_all(int i915)
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	uint32_t engines[I915_EXEC_RING_MASK + 1];
 	uint32_t batch[16];
+	uint64_t ahnd, offset, bb_offset;
 	unsigned int sz = ALIGN(sizeof(batch) * ARRAY_SIZE(engines), 4096);
 	struct drm_i915_gem_relocation_entry reloc = {
 		.offset = sizeof(uint32_t),
@@ -91,6 +92,12 @@ static void store_all(int i915)
 	cs = gem_mmap__device_coherent(i915, obj[1].handle, 0, sz, PROT_WRITE);
 
 	ctx = intel_ctx_create_all_physical(i915);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
+	if (ahnd)
+		obj[1].relocation_count = 0;
+	bb_offset = get_offset(ahnd, obj[1].handle, sz, 4096);
+	offset = get_offset(ahnd, obj[0].handle, sizeof(engines), 0);
+
 	for_each_ctx_engine(i915, ctx, e) {
 		uint64_t addr;
 
@@ -100,6 +107,16 @@ static void store_all(int i915)
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
+		if (ahnd) {
+			i = 1;
+			batch[i++] = offset + reloc.delta;
+			batch[i++] = offset >> 32;
+			obj[0].offset = offset;
+			obj[0].flags |= EXEC_OBJECT_PINNED;
+			obj[1].offset = bb_offset;
+			obj[1].flags |= EXEC_OBJECT_PINNED;
+		}
+
 		batch[value] = nengine;
 
 		execbuf.flags = e->flags;
@@ -109,7 +126,8 @@ static void store_all(int i915)
 		execbuf.rsvd1 = ctx->id;
 
 		memcpy(cs + execbuf.batch_start_offset, batch, sizeof(batch));
-		memcpy(cs + reloc.offset, &addr, reloc_sz);
+		if (!ahnd)
+			memcpy(cs + reloc.offset, &addr, reloc_sz);
 		gem_execbuf(i915, &execbuf);
 
 		if (++nengine == ARRAY_SIZE(engines))
@@ -126,6 +144,9 @@ static void store_all(int i915)
 	gem_read(i915, obj[0].handle, 0, engines, nengine * sizeof(engines[0]));
 	gem_close(i915, obj[0].handle);
 	intel_ctx_destroy(i915, ctx);
+	put_offset(ahnd, obj[0].handle);
+	put_offset(ahnd, obj[1].handle);
+	put_ahnd(ahnd);
 
 	for (i = 0; i < nengine; i++)
 		igt_assert_eq_u32(engines[i], i);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 42/52] tests/i915_pm_rc6_residency: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (40 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 41/52] tests/i915_module_load: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 43/52] tests/i915_pm_rpm: Adopt to use no-reloc Zbigniew Kempczyński
                   ` (11 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/i915_pm_rc6_residency.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tests/i915/i915_pm_rc6_residency.c b/tests/i915/i915_pm_rc6_residency.c
index d1cce474e..96a951406 100644
--- a/tests/i915/i915_pm_rc6_residency.c
+++ b/tests/i915/i915_pm_rc6_residency.c
@@ -458,7 +458,7 @@ static void rc6_fence(int i915)
 	const intel_ctx_t *ctx;
 	struct power_sample sample[2];
 	unsigned long slept;
-	uint64_t rc6, ts[2];
+	uint64_t rc6, ts[2], ahnd;
 	struct rapl rapl;
 	int fd;
 
@@ -486,6 +486,7 @@ static void rc6_fence(int i915)
 
 	/* Submit but delay execution, we should be idle and conserving power */
 	ctx = intel_ctx_create_all_physical(i915);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 	for_each_ctx_engine(i915, ctx, e) {
 		igt_spin_t *spin;
 		int timeline;
@@ -493,7 +494,9 @@ static void rc6_fence(int i915)
 
 		timeline = sw_sync_timeline_create();
 		fence = sw_sync_timeline_create_fence(timeline, 1);
-		spin = igt_spin_new(i915, .ctx = ctx,
+		spin = igt_spin_new(i915,
+				    .ahnd = ahnd,
+				    .ctx = ctx,
 				    .engine = e->flags,
 				    .fence = fence,
 				    .flags = IGT_SPIN_FENCE_IN);
@@ -522,6 +525,7 @@ static void rc6_fence(int i915)
 		gem_quiescent_gpu(i915);
 	}
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 
 	rapl_close(&rapl);
 	close(fd);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 43/52] tests/i915_pm_rpm: Adopt to use no-reloc
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (41 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 42/52] tests/i915_pm_rc6_residency: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 44/52] tests/i915_pm_rps: Alter " Zbigniew Kempczyński
                   ` (10 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/i915_pm_rpm.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/tests/i915/i915_pm_rpm.c b/tests/i915/i915_pm_rpm.c
index 39e0064a1..62720d02b 100644
--- a/tests/i915/i915_pm_rpm.c
+++ b/tests/i915/i915_pm_rpm.c
@@ -1178,7 +1178,8 @@ static void gem_pread_subtest(void)
 
 /* Paints a square of color $color, size $width x $height, at position $x x $y
  * of $dst_handle, which contains pitch $pitch. */
-static void submit_blt_cmd(uint32_t dst_handle, uint16_t x, uint16_t y,
+static void submit_blt_cmd(uint32_t dst_handle, int dst_size,
+			   uint16_t x, uint16_t y,
 			   uint16_t width, uint16_t height, uint32_t pitch,
 			   uint32_t color, uint32_t *presumed_dst_offset)
 {
@@ -1190,6 +1191,12 @@ static void submit_blt_cmd(uint32_t dst_handle, uint16_t x, uint16_t y,
 	struct drm_i915_gem_exec_object2 objs[2] = {{}, {}};
 	struct drm_i915_gem_relocation_entry relocs[1] = {{}};
 	struct drm_i915_gem_wait gem_wait;
+	uint64_t ahnd = get_reloc_ahnd(drm_fd, 0), dst_offset;
+
+	if (ahnd)
+		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
+	else
+		dst_offset = *presumed_dst_offset;
 
 	i = 0;
 
@@ -1205,9 +1212,9 @@ static void submit_blt_cmd(uint32_t dst_handle, uint16_t x, uint16_t y,
 	batch_buf[i++] = (y << 16) | x;
 	batch_buf[i++] = ((y + height) << 16) | (x + width);
 	reloc_pos = i;
-	batch_buf[i++] = *presumed_dst_offset;
+	batch_buf[i++] = dst_offset;
 	if (intel_gen(ms_data.devid) >= 8)
-		batch_buf[i++] = 0;
+		batch_buf[i++] = dst_offset >> 32;
 	batch_buf[i++] = color;
 
 	batch_buf[i++] = MI_BATCH_BUFFER_END;
@@ -1230,9 +1237,16 @@ static void submit_blt_cmd(uint32_t dst_handle, uint16_t x, uint16_t y,
 	objs[0].alignment = 64;
 
 	objs[1].handle = batch_handle;
-	objs[1].relocation_count = 1;
+	objs[1].relocation_count = !ahnd ? 1 : 0;
 	objs[1].relocs_ptr = (uintptr_t)relocs;
 
+	if (ahnd) {
+		objs[0].offset = dst_offset;
+		objs[0].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		objs[1].offset = get_offset(ahnd, batch_handle, batch_size, 0);
+		objs[1].flags = EXEC_OBJECT_PINNED;
+	}
+
 	execbuf.buffers_ptr = (uintptr_t)objs;
 	execbuf.buffer_count = 2;
 	execbuf.batch_len = batch_size;
@@ -1253,6 +1267,7 @@ static void submit_blt_cmd(uint32_t dst_handle, uint16_t x, uint16_t y,
 	do_ioctl(drm_fd, DRM_IOCTL_I915_GEM_WAIT, &gem_wait);
 
 	gem_close(drm_fd, batch_handle);
+	put_ahnd(ahnd);
 }
 
 /* Make sure we can submit a batch buffer and verify its result. */
@@ -1285,7 +1300,7 @@ static void gem_execbuf_subtest(void)
 	disable_all_screens_and_wait(&ms_data);
 
 	color = 0x12345678;
-	submit_blt_cmd(handle, sq_x, sq_y, sq_w, sq_h, pitch, color,
+	submit_blt_cmd(handle, dst_size, sq_x, sq_y, sq_w, sq_h, pitch, color,
 		       &presumed_offset);
 	igt_assert(wait_for_suspended());
 
@@ -1324,7 +1339,7 @@ static void gem_execbuf_subtest(void)
 	 * suspended. We use the same spot, but a different color. As a bonus,
 	 * we're testing the presumed_offset from the previous command. */
 	color = 0x87654321;
-	submit_blt_cmd(handle, sq_x, sq_y, sq_w, sq_h, pitch, color,
+	submit_blt_cmd(handle, dst_size, sq_x, sq_y, sq_w, sq_h, pitch, color,
 		       &presumed_offset);
 
 	disable_all_screens_and_wait(&ms_data);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 44/52] tests/i915_pm_rps: Alter to use no-reloc
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (42 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 43/52] tests/i915_pm_rpm: Adopt to use no-reloc Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 45/52] tests/kms_busy: Adopt to use allocator Zbigniew Kempczyński
                   ` (9 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.
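
One extra wrinkle here: the load helper is a separately forked, long-lived
process, so it reinitialises the allocator for itself before taking a
handle, as the diff below does. Roughly (sketch only, helper_open_ahnd() is
not a real helper):

#include "igt.h"
#include "intel_allocator.h"

/* called at the start of the forked load-helper process */
static uint64_t helper_open_ahnd(int fd)
{
        /* reset per-process allocator state inherited over fork() */
        intel_allocator_init();

        return get_reloc_ahnd(fd, 0);
}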

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/i915_pm_rps.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/tests/i915/i915_pm_rps.c b/tests/i915/i915_pm_rps.c
index f51a47479..ada06aa94 100644
--- a/tests/i915/i915_pm_rps.c
+++ b/tests/i915/i915_pm_rps.c
@@ -250,6 +250,10 @@ static void load_helper_run(enum load load)
 		igt_spin_t *spin[2] = {};
 		bool prev_load;
 		uint32_t handle;
+		uint64_t ahnd;
+
+		intel_allocator_init();
+		ahnd = get_reloc_ahnd(drm_fd, 0);
 
 		signal(SIGTERM, load_helper_signal_handler);
 		signal(SIGUSR2, load_helper_signal_handler);
@@ -257,9 +261,9 @@ static void load_helper_run(enum load load)
 		igt_debug("Applying %s load...\n", lh.load ? "high" : "low");
 
 		prev_load = lh.load == HIGH;
-		spin[0] = __igt_spin_new(drm_fd);
+		spin[0] = __igt_spin_new(drm_fd, .ahnd = ahnd);
 		if (prev_load)
-			spin[1] = __igt_spin_new(drm_fd);
+			spin[1] = __igt_spin_new(drm_fd, .ahnd = ahnd);
 		prev_load = !prev_load; /* send the initial signal */
 		while (!lh.exit) {
 			bool high_load;
@@ -279,7 +283,7 @@ static void load_helper_run(enum load load)
 			} else {
 				spin[0] = spin[1];
 			}
-			spin[high_load] = __igt_spin_new(drm_fd);
+			spin[high_load] = __igt_spin_new(drm_fd, .ahnd = ahnd);
 
 			if (lh.signal && high_load != prev_load) {
 				write(lh.link, &lh.signal, sizeof(lh.signal));
@@ -310,6 +314,7 @@ static void load_helper_run(enum load load)
 
 		igt_spin_free(drm_fd, spin[1]);
 		igt_spin_free(drm_fd, spin[0]);
+		put_ahnd(ahnd);
 	}
 
 	close(lh.link);
@@ -542,11 +547,14 @@ static void boost_freq(int fd, int *boost_freqs)
 {
 	int64_t timeout = 1;
 	igt_spin_t *load;
+	/* We need to keep dependency spin offset for load->handle */
+	uint64_t ahnd = get_simple_l2h_ahnd(fd, 0);
 
-	load = igt_spin_new(fd);
+	//get_offset(ahnd, 1000, 0x1000000, 0);
+	load = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Strip off extra fences from the object, and keep it from starting */
-	igt_spin_free(fd, igt_spin_new(fd, .dependency = load->handle));
+	igt_spin_free(fd, igt_spin_new(fd, .ahnd = ahnd, .dependency = load->handle));
 
 	/* Waiting will grant us a boost to maximum */
 	gem_wait(fd, load->handle, &timeout);
@@ -558,6 +566,7 @@ static void boost_freq(int fd, int *boost_freqs)
 	igt_spin_end(load);
 	gem_sync(fd, load->handle);
 	igt_spin_free(fd, load);
+	put_ahnd(ahnd);
 }
 
 static void waitboost(int fd, bool reset)
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v3 45/52] tests/kms_busy: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (43 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 44/52] tests/i915_pm_rps: Alter " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 46/52] tests/kms_cursor_legacy: " Zbigniew Kempczyński
                   ` (8 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

On newer gens we can no longer rely on relocations. Adapt the test to use
offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/kms_busy.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tests/kms_busy.c b/tests/kms_busy.c
index 9722aadcd..e31862ec5 100644
--- a/tests/kms_busy.c
+++ b/tests/kms_busy.c
@@ -76,11 +76,13 @@ static void flip_to_fb(igt_display_t *dpy, int pipe,
 	struct pollfd pfd = { .fd = dpy->drm_fd, .events = POLLIN };
 	struct drm_event_vblank ev;
 	IGT_CORK_FENCE(cork);
+	uint64_t ahnd = get_reloc_ahnd(dpy->drm_fd, 0);
 	igt_spin_t *t;
 	int fence;
 
 	fence = igt_cork_plug(&cork, dpy->drm_fd);
 	t = igt_spin_new(dpy->drm_fd,
+			 .ahnd = ahnd,
 			 .fence = fence,
 			 .dependency = fb->gem_handle,
 			 .flags = IGT_SPIN_FENCE_IN);
@@ -128,6 +130,7 @@ static void flip_to_fb(igt_display_t *dpy, int pipe,
 	}
 
 	igt_spin_free(dpy->drm_fd, t);
+	put_ahnd(ahnd);
 }
 
 static void test_flip(igt_display_t *dpy, int pipe, bool modeset)
@@ -181,7 +184,9 @@ static void test_flip(igt_display_t *dpy, int pipe, bool modeset)
 static void test_atomic_commit_hang(igt_display_t *dpy, igt_plane_t *primary,
 				    struct igt_fb *busy_fb)
 {
+	uint64_t ahnd = get_reloc_ahnd(dpy->drm_fd, 0);
 	igt_spin_t *t = igt_spin_new(dpy->drm_fd,
+				     .ahnd = ahnd,
 				     .dependency = busy_fb->gem_handle,
 				     .flags = IGT_SPIN_NO_PREEMPTION);
 	struct pollfd pfd = { .fd = dpy->drm_fd, .events = POLLIN };
@@ -212,6 +217,7 @@ static void test_atomic_commit_hang(igt_display_t *dpy, igt_plane_t *primary,
 	igt_assert(read(dpy->drm_fd, &ev, sizeof(ev)) == sizeof(ev));
 
 	igt_spin_end(t);
+	put_ahnd(ahnd);
 }
 
 static void test_hang(igt_display_t *dpy,
@@ -263,6 +269,7 @@ static void test_pageflip_modeset_hang(igt_display_t *dpy, enum pipe pipe)
 	igt_output_t *output;
 	igt_plane_t *primary;
 	igt_spin_t *t;
+	uint64_t ahnd = get_reloc_ahnd(dpy->drm_fd, 0);
 
 	output = set_fb_on_crtc(dpy, pipe, &fb);
 	primary = igt_output_get_plane_type(output, DRM_PLANE_TYPE_PRIMARY);
@@ -270,6 +277,7 @@ static void test_pageflip_modeset_hang(igt_display_t *dpy, enum pipe pipe)
 	igt_display_commit2(dpy, dpy->is_atomic ? COMMIT_ATOMIC : COMMIT_LEGACY);
 
 	t = igt_spin_new(dpy->drm_fd,
+			 .ahnd = ahnd,
 			 .dependency = fb.gem_handle,
 			 .flags = IGT_SPIN_NO_PREEMPTION);
 
@@ -283,6 +291,7 @@ static void test_pageflip_modeset_hang(igt_display_t *dpy, enum pipe pipe)
 	igt_assert(read(dpy->drm_fd, &ev, sizeof(ev)) == sizeof(ev));
 
 	igt_spin_end(t);
+	put_ahnd(ahnd);
 
 	igt_remove_fb(dpy->drm_fd, &fb);
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 46/52] tests/kms_cursor_legacy: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (44 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 45/52] tests/kms_busy: Adopt to use allocator Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 47/52] tests/kms_flip: " Zbigniew Kempczyński
                   ` (7 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
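
Only the busy spinners need to change here; keeping a framebuffer busy
under softpin boils down to something like this (sketch only: the
helper name is made up, the calls are the ones used in the diff below):

	#include "igt.h"

	/* ahnd may be 0, in which case the spinner falls back to relocations */
	static igt_spin_t *spin_on_fb(int fd, uint64_t ahnd, struct igt_fb *fb)
	{
		return igt_spin_new(fd,
				    .ahnd = ahnd,
				    .dependency = fb->gem_handle);
	}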

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
---
 tests/kms_cursor_legacy.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tests/kms_cursor_legacy.c b/tests/kms_cursor_legacy.c
index 75a822c4e..4f96c880e 100644
--- a/tests/kms_cursor_legacy.c
+++ b/tests/kms_cursor_legacy.c
@@ -517,6 +517,7 @@ static void basic_flip_cursor(igt_display_t *display,
 	struct igt_fb fb_info, cursor_fb, cursor_fb2, argb_fb;
 	unsigned vblank_start;
 	enum pipe pipe = find_connected_pipe(display, false);
+	uint64_t ahnd = get_reloc_ahnd(display->drm_fd, 0);
 	igt_spin_t *spin;
 	int i, miss1 = 0, miss2 = 0, delta;
 
@@ -548,6 +549,7 @@ static void basic_flip_cursor(igt_display_t *display,
 		spin = NULL;
 		if (flags & BASIC_BUSY)
 			spin = igt_spin_new(display->drm_fd,
+					    .ahnd = ahnd,
 					    .dependency = fb_info.gem_handle);
 
 		/* Start with a synchronous query to align with the vblank */
@@ -631,6 +633,7 @@ static void basic_flip_cursor(igt_display_t *display,
 		igt_remove_fb(display->drm_fd, &argb_fb);
 	if (cursor_fb2.gem_handle)
 		igt_remove_fb(display->drm_fd, &cursor_fb2);
+	put_ahnd(ahnd);
 }
 
 static int
@@ -1319,6 +1322,7 @@ static void flip_vs_cursor_busy_crc(igt_display_t *display, bool atomic)
 	igt_pipe_t *pipe_connected = &display->pipes[pipe];
 	igt_plane_t *plane_primary = igt_pipe_get_plane_type(pipe_connected, DRM_PLANE_TYPE_PRIMARY);
 	igt_crc_t crcs[2], test_crc;
+	uint64_t ahnd = get_reloc_ahnd(display->drm_fd, 0);
 
 	if (atomic)
 		igt_require(display->is_atomic);
@@ -1366,6 +1370,7 @@ static void flip_vs_cursor_busy_crc(igt_display_t *display, bool atomic)
 		igt_spin_t *spin;
 
 		spin = igt_spin_new(display->drm_fd,
+				    .ahnd = ahnd,
 				    .dependency = fb_info[1].gem_handle);
 
 		vblank_start = kmstest_get_vblank(display->drm_fd, pipe, DRM_VBLANK_NEXTONMISS);
@@ -1394,6 +1399,7 @@ static void flip_vs_cursor_busy_crc(igt_display_t *display, bool atomic)
 	igt_remove_fb(display->drm_fd, &fb_info[1]);
 	igt_remove_fb(display->drm_fd, &fb_info[0]);
 	igt_remove_fb(display->drm_fd, &cursor_fb);
+	put_ahnd(ahnd);
 }
 
 igt_main
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 47/52] tests/kms_flip: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (45 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 46/52] tests/kms_cursor_legacy: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 48/52] tests/kms_vblank: " Zbigniew Kempczyński
                   ` (6 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

From: Bhanuprakash Modem <bhanuprakash.modem@intel.com>

For newer gens the kernel will reject relocations with -EINVAL, so
provide an allocator handle when injecting the hang.
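
Sketched out (flip_while_hung() is just a placeholder, the helpers are
the ones used in the diff below and an i915 fd is assumed):

	#include "igt.h"

	static igt_hang_t hang_gpu(int fd, uint64_t ahnd)
	{
		/* ahnd == 0 keeps the legacy relocation path in the helper */
		return igt_hang_ring_with_ahnd(fd, I915_EXEC_DEFAULT, ahnd);
	}

	static void flip_while_hung(int fd)
	{
		uint64_t ahnd = get_reloc_ahnd(fd, 0);
		igt_hang_t hang = hang_gpu(fd, ahnd);

		/* ... issue page flips while the ring is hung ... */

		igt_post_hang_ring(fd, hang);	/* clear the hang */
		put_ahnd(ahnd);
	}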

Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/kms_flip.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/tests/kms_flip.c b/tests/kms_flip.c
index 09762ae8e..d63b5f16a 100755
--- a/tests/kms_flip.c
+++ b/tests/kms_flip.c
@@ -614,9 +614,9 @@ static void recreate_fb(struct test_output *o)
 	o->fb_info[o->current_fb_id].fb_id = new_fb_id;
 }
 
-static igt_hang_t hang_gpu(int fd)
+static igt_hang_t hang_gpu(int fd, uint64_t ahnd)
 {
-	return igt_hang_ring(fd, I915_EXEC_DEFAULT);
+	return igt_hang_ring_with_ahnd(fd, I915_EXEC_DEFAULT, ahnd);
 }
 
 static void unhang_gpu(int fd, igt_hang_t hang)
@@ -673,6 +673,7 @@ static bool run_test_step(struct test_output *o, unsigned int *events)
 	struct vblank_reply vbl_reply;
 	unsigned int target_seq;
 	igt_hang_t hang;
+	uint64_t ahnd = 0;
 
 	target_seq = o->vblank_state.seq_step;
 	/* Absolute waits only works once we have a frame counter. */
@@ -774,8 +775,11 @@ static bool run_test_step(struct test_output *o, unsigned int *events)
 	igt_print_activity();
 
 	memset(&hang, 0, sizeof(hang));
-	if (do_flip && (o->flags & TEST_HANG))
-		hang = hang_gpu(drm_fd);
+	if (do_flip && (o->flags & TEST_HANG)) {
+		if (is_i915_device(drm_fd))
+			ahnd = get_reloc_ahnd(drm_fd, 0);
+		hang = hang_gpu(drm_fd, ahnd);
+	}
 
 	/* try to make sure we can issue two flips during the same frame */
 	if (do_flip && (o->flags & TEST_EBUSY)) {
@@ -845,6 +849,8 @@ static bool run_test_step(struct test_output *o, unsigned int *events)
 		igt_assert(do_page_flip(o, new_fb_id, false) == expected_einval);
 
 	unhang_gpu(drm_fd, hang);
+	if (is_i915_device(drm_fd))
+		put_ahnd(ahnd);
 
 	*events = completed_events;
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 48/52] tests/kms_vblank: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (46 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 47/52] tests/kms_flip: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 49/52] tests/perf_pmu: " Zbigniew Kempczyński
                   ` (5 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

From: Bhanuprakash Modem <bhanuprakash.modem@intel.com>

For newer gens the kernel will reject relocations with -EINVAL, so
provide an allocator handle when injecting the hang.

Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/kms_vblank.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tests/kms_vblank.c b/tests/kms_vblank.c
index 83d5629f1..4597f9827 100644
--- a/tests/kms_vblank.c
+++ b/tests/kms_vblank.c
@@ -118,6 +118,7 @@ static void run_test(data_t *data, void (*testfunc)(data_t *, int, int))
 	igt_output_t *output = data->output;
 	int fd = display->drm_fd;
 	igt_hang_t hang;
+	uint64_t ahnd = 0;
 
 	prepare_crtc(data, fd, output);
 
@@ -128,8 +129,11 @@ static void run_test(data_t *data, void (*testfunc)(data_t *, int, int))
 		 igt_subtest_name(), kmstest_pipe_name(data->pipe),
 		 igt_output_name(output));
 
-	if (!(data->flags & NOHANG))
-		hang = igt_hang_ring(fd, I915_EXEC_DEFAULT);
+	if (!(data->flags & NOHANG)) {
+		if (is_i915_device(fd))
+			ahnd = get_reloc_ahnd(fd, 0);
+		hang = igt_hang_ring_with_ahnd(fd, I915_EXEC_DEFAULT, ahnd);
+	}
 
 	if (data->flags & BUSY) {
 		union drm_wait_vblank vbl;
@@ -166,6 +170,9 @@ static void run_test(data_t *data, void (*testfunc)(data_t *, int, int))
 	igt_info("\n%s on pipe %s, connector %s: PASSED\n\n",
 		 igt_subtest_name(), kmstest_pipe_name(data->pipe), igt_output_name(output));
 
+	if (is_i915_device(fd))
+		put_ahnd(ahnd);
+
 	/* cleanup what prepare_crtc() has done */
 	cleanup_crtc(data, fd, output);
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 49/52] tests/perf_pmu: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (47 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 48/52] tests/kms_vblank: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 50/52] tests/sysfs_heartbeat_interval: " Zbigniew Kempczyński
                   ` (4 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
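
The spinners follow the usual get_reloc_ahnd()/put_ahnd() scheme; the
hand-rolled semaphore batches additionally switch from relocation
entries to offsets taken up front and pinned objects. A rough sketch of
that execbuf pattern (the function and the object size are only
illustrative, the helpers are the ones used in the diff):

	#include "igt.h"

	static void pinned_nop(int i915, uint64_t ahnd)
	{
		const uint32_t bbe = MI_BATCH_BUFFER_END;
		struct drm_i915_gem_exec_object2 obj = {
			.handle = gem_create(i915, 4096),
		};
		struct drm_i915_gem_execbuffer2 eb = {
			.buffers_ptr = to_user_pointer(&obj),
			.buffer_count = 1,
		};

		gem_write(i915, obj.handle, 0, &bbe, sizeof(bbe));

		if (ahnd) {
			/* Take an offset from the allocator and pin the
			 * object there; no relocation entries are needed.
			 */
			obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
			obj.flags |= EXEC_OBJECT_PINNED;
		}

		gem_execbuf(i915, &eb);
		gem_close(i915, obj.handle);
	}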

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/perf_pmu.c | 147 ++++++++++++++++++++++++++++++------------
 1 file changed, 107 insertions(+), 40 deletions(-)

diff --git a/tests/i915/perf_pmu.c b/tests/i915/perf_pmu.c
index 10dc3bf2f..924f39d1a 100644
--- a/tests/i915/perf_pmu.c
+++ b/tests/i915/perf_pmu.c
@@ -174,10 +174,11 @@ static unsigned int measured_usleep(unsigned int usec)
 #define FLAG_HANG (32)
 #define TEST_S3 (64)
 
-static igt_spin_t * __spin_poll(int fd, const intel_ctx_t *ctx,
-				const struct intel_execution_engine2 *e)
+static igt_spin_t *__spin_poll(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			       const struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
+		.ahnd = ahnd,
 		.ctx = ctx,
 		.engine = e->flags,
 	};
@@ -217,25 +218,26 @@ static unsigned long __spin_wait(int fd, igt_spin_t *spin)
 	return igt_nsec_elapsed(&start);
 }
 
-static igt_spin_t * __spin_sync(int fd, const intel_ctx_t *ctx,
-				const struct intel_execution_engine2 *e)
+static igt_spin_t *__spin_sync(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			       const struct intel_execution_engine2 *e)
 {
-	igt_spin_t *spin = __spin_poll(fd, ctx, e);
+	igt_spin_t *spin = __spin_poll(fd, ahnd, ctx, e);
 
 	__spin_wait(fd, spin);
 
 	return spin;
 }
 
-static igt_spin_t * spin_sync(int fd, const intel_ctx_t *ctx,
-			      const struct intel_execution_engine2 *e)
+static igt_spin_t *spin_sync(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			     const struct intel_execution_engine2 *e)
 {
 	igt_require_gem(fd);
 
-	return __spin_sync(fd, ctx, e);
+	return __spin_sync(fd, ahnd, ctx, e);
 }
 
-static igt_spin_t * spin_sync_flags(int fd, const intel_ctx_t *ctx, unsigned int flags)
+static igt_spin_t *spin_sync_flags(int fd, uint64_t ahnd,
+				   const intel_ctx_t *ctx, unsigned int flags)
 {
 	struct intel_execution_engine2 e = { };
 
@@ -244,7 +246,7 @@ static igt_spin_t * spin_sync_flags(int fd, const intel_ctx_t *ctx, unsigned int
 		     (I915_EXEC_BSD | I915_EXEC_BSD_RING2) ? 1 : 0;
 	e.flags = flags;
 
-	return spin_sync(fd, ctx, &e);
+	return spin_sync(fd, ahnd, ctx, &e);
 }
 
 static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
@@ -286,11 +288,12 @@ single(int gem_fd, const intel_ctx_t *ctx,
 	igt_spin_t *spin;
 	uint64_t val;
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	fd = open_pmu(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, ctx, e);
+		spin = spin_sync(gem_fd, ahnd, ctx, e);
 	else
 		spin = NULL;
 
@@ -321,6 +324,7 @@ single(int gem_fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(gem_fd, spin);
 	close(fd);
+	put_ahnd(ahnd);
 
 	gem_quiescent_gpu(gem_fd);
 }
@@ -333,6 +337,7 @@ busy_start(int gem_fd, const intel_ctx_t *ctx,
 	uint64_t val, ts[2];
 	igt_spin_t *spin;
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	/*
 	 * Defeat the busy stats delayed disable, we need to guarantee we are
@@ -340,7 +345,7 @@ busy_start(int gem_fd, const intel_ctx_t *ctx,
 	 */
 	sleep(2);
 
-	spin = __spin_sync(gem_fd, ctx, e);
+	spin = __spin_sync(gem_fd, ahnd, ctx, e);
 
 	fd = open_pmu(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
@@ -351,6 +356,7 @@ busy_start(int gem_fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(gem_fd, spin);
 	close(fd);
+	put_ahnd(ahnd);
 
 	assert_within_epsilon(val, ts[1] - ts[0], tolerance);
 	gem_quiescent_gpu(gem_fd);
@@ -370,8 +376,10 @@ busy_double_start(int gem_fd, const intel_ctx_t *ctx,
 	igt_spin_t *spin[2];
 	const intel_ctx_t *tmp_ctx;
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id), ahndN;
 
 	tmp_ctx = intel_ctx_create(gem_fd, &ctx->cfg);
+	ahndN = get_reloc_ahnd(gem_fd, tmp_ctx->id);
 
 	/*
 	 * Defeat the busy stats delayed disable, we need to guarantee we are
@@ -384,9 +392,10 @@ busy_double_start(int gem_fd, const intel_ctx_t *ctx,
 	 * re-submission in execlists mode. Make sure busyness is correctly
 	 * reported with the engine busy, and after the engine went idle.
 	 */
-	spin[0] = __spin_sync(gem_fd, ctx, e);
+	spin[0] = __spin_sync(gem_fd, ahnd, ctx, e);
 	usleep(500e3);
 	spin[1] = __igt_spin_new(gem_fd,
+				 .ahnd = ahndN,
 				 .ctx = tmp_ctx,
 				 .engine = e->flags);
 
@@ -419,6 +428,8 @@ busy_double_start(int gem_fd, const intel_ctx_t *ctx,
 	close(fd);
 
 	intel_ctx_destroy(gem_fd, tmp_ctx);
+	put_ahnd(ahnd);
+	put_ahnd(ahndN);
 
 	assert_within_epsilon(val, ts[1] - ts[0], tolerance);
 	igt_assert_eq(val2, 0);
@@ -457,6 +468,7 @@ busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 	int fd[num_engines];
 	unsigned long slept;
 	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	i = 0;
 	fd[0] = -1;
@@ -472,7 +484,7 @@ busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 
 	igt_assert_eq(i, num_engines);
 
-	spin = spin_sync(gem_fd, ctx, e);
+	spin = spin_sync(gem_fd, ahnd, ctx, e);
 	pmu_read_multi(fd[0], num_engines, tval[0]);
 	slept = measured_usleep(batch_duration_ns / 1000);
 	if (flags & TEST_TRAILING_IDLE)
@@ -483,6 +495,7 @@ busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 	igt_spin_free(gem_fd, spin);
 	for (i = 0; i < num_engines; i++)
 		close(fd[i]);
+	put_ahnd(ahnd);
 
 	for (i = 0; i < num_engines; i++)
 		val[i] = tval[1][i] - tval[0][i];
@@ -524,6 +537,7 @@ most_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 	unsigned long slept;
 	igt_spin_t *spin = NULL;
 	unsigned int idle_idx, i;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	i = 0;
 	for_each_ctx_engine(gem_fd, ctx, e_) {
@@ -532,7 +546,7 @@ most_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 		else if (spin)
 			__submit_spin(gem_fd, spin, e_, 64);
 		else
-			spin = __spin_poll(gem_fd, ctx, e_);
+			spin = __spin_poll(gem_fd, ahnd, ctx, e_);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e_->class, e_->instance);
 	}
@@ -556,6 +570,7 @@ most_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 	igt_spin_free(gem_fd, spin);
 	for (i = 0; i < num_engines; i++)
 		close(fd[i]);
+	put_ahnd(ahnd);
 
 	for (i = 0; i < num_engines; i++)
 		val[i] = tval[1][i] - tval[0][i];
@@ -583,13 +598,14 @@ all_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 	unsigned long slept;
 	igt_spin_t *spin = NULL;
 	unsigned int i;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	i = 0;
 	for_each_ctx_engine(gem_fd, ctx, e) {
 		if (spin)
 			__submit_spin(gem_fd, spin, e, 64);
 		else
-			spin = __spin_poll(gem_fd, ctx, e);
+			spin = __spin_poll(gem_fd, ahnd, ctx, e);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	}
@@ -612,6 +628,7 @@ all_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
 	igt_spin_free(gem_fd, spin);
 	for (i = 0; i < num_engines; i++)
 		close(fd[i]);
+	put_ahnd(ahnd);
 
 	for (i = 0; i < num_engines; i++)
 		val[i] = tval[1][i] - tval[0][i];
@@ -631,6 +648,7 @@ no_sema(int gem_fd, const intel_ctx_t *ctx,
 	igt_spin_t *spin;
 	uint64_t val[2][2];
 	int fd[2];
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	fd[0] = open_group(gem_fd, I915_PMU_ENGINE_SEMA(e->class, e->instance),
 			   -1);
@@ -638,7 +656,7 @@ no_sema(int gem_fd, const intel_ctx_t *ctx,
 			   fd[0]);
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, ctx, e);
+		spin = spin_sync(gem_fd, ahnd, ctx, e);
 	else
 		spin = NULL;
 
@@ -657,6 +675,7 @@ no_sema(int gem_fd, const intel_ctx_t *ctx,
 	}
 	close(fd[0]);
 	close(fd[1]);
+	put_ahnd(ahnd);
 
 	assert_within_epsilon(val[0][0], 0.0f, tolerance);
 	assert_within_epsilon(val[0][1], 0.0f, tolerance);
@@ -682,6 +701,8 @@ sema_wait(int gem_fd, const intel_ctx_t *ctx,
 	uint32_t batch[16];
 	uint64_t val[2], ts[2];
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
+	uint64_t obj_offset, bb_offset;
 
 	igt_require(intel_gen(intel_get_drm_devid(gem_fd)) >= 8);
 
@@ -693,19 +714,21 @@ sema_wait(int gem_fd, const intel_ctx_t *ctx,
 
 	bb_handle = gem_create(gem_fd, 4096);
 	obj_handle = gem_create(gem_fd, 4096);
+	bb_offset = get_offset(ahnd, bb_handle, 4096, 0);
+	obj_offset = get_offset(ahnd, obj_handle, 4096, 0);
 
 	obj_ptr = gem_mmap__wc(gem_fd, obj_handle, 0, 4096, PROT_WRITE);
 
 	batch[0] = MI_STORE_DWORD_IMM;
-	batch[1] = sizeof(*obj_ptr);
-	batch[2] = 0;
+	batch[1] = obj_offset + sizeof(*obj_ptr);
+	batch[2] = (obj_offset + sizeof(*obj_ptr)) >> 32;
 	batch[3] = 1;
 	batch[4] = MI_SEMAPHORE_WAIT |
 		   MI_SEMAPHORE_POLL |
 		   MI_SEMAPHORE_SAD_GTE_SDD;
 	batch[5] = 1;
-	batch[6] = 0x0;
-	batch[7] = 0x0;
+	batch[6] = obj_offset;
+	batch[7] = obj_offset >> 32;
 	batch[8] = MI_BATCH_BUFFER_END;
 
 	gem_write(gem_fd, bb_handle, 0, batch, sizeof(batch));
@@ -723,7 +746,7 @@ sema_wait(int gem_fd, const intel_ctx_t *ctx,
 	obj[0].handle = obj_handle;
 
 	obj[1].handle = bb_handle;
-	obj[1].relocation_count = 2;
+	obj[1].relocation_count = !ahnd ? 2 : 0;
 	obj[1].relocs_ptr = to_user_pointer(reloc);
 
 	eb.buffer_count = 2;
@@ -731,6 +754,13 @@ sema_wait(int gem_fd, const intel_ctx_t *ctx,
 	eb.flags = e->flags;
 	eb.rsvd1 = ctx->id;
 
+	if (ahnd) {
+		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[0].offset = obj_offset;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		obj[1].offset = bb_offset;
+	}
+
 	/**
 	 * Start the semaphore wait PMU and after some known time let the above
 	 * semaphore wait command finish. Then check that the PMU is reporting
@@ -766,12 +796,14 @@ sema_wait(int gem_fd, const intel_ctx_t *ctx,
 	gem_close(gem_fd, obj_handle);
 	gem_close(gem_fd, bb_handle);
 	close(fd);
+	put_ahnd(ahnd);
 
 	assert_within_epsilon(val[1] - val[0], slept, tolerance);
 }
 
 static uint32_t
-create_sema(int gem_fd, struct drm_i915_gem_relocation_entry *reloc)
+create_sema(int gem_fd, uint64_t ahnd,
+	    struct drm_i915_gem_relocation_entry *reloc, __u64 *poffset)
 {
 	uint32_t cs[] = {
 		/* Reset our semaphore wait */
@@ -788,7 +820,12 @@ create_sema(int gem_fd, struct drm_i915_gem_relocation_entry *reloc)
 
 		MI_BATCH_BUFFER_END
 	};
-	uint32_t handle = gem_create(gem_fd, 4096);
+	uint32_t handle;
+
+	igt_assert(poffset);
+
+	handle = gem_create(gem_fd, 4096);
+	*poffset = get_offset(ahnd, handle, 4096, 0);
 
 	memset(reloc, 0, 2 * sizeof(*reloc));
 	reloc[0].target_handle = handle;
@@ -796,12 +833,19 @@ create_sema(int gem_fd, struct drm_i915_gem_relocation_entry *reloc)
 	reloc[1].target_handle = handle;
 	reloc[1].offset = 64 + 6 * sizeof(uint32_t);
 
+	if (ahnd) {
+		cs[1] = *poffset;
+		cs[2] = *poffset >> 32;
+		cs[6] = *poffset;
+		cs[7] = *poffset >> 32;
+	}
+
 	gem_write(gem_fd, handle, 64, cs, sizeof(cs));
 	return handle;
 }
 
 static void
-__sema_busy(int gem_fd, int pmu, const intel_ctx_t *ctx,
+__sema_busy(int gem_fd, uint64_t ahnd, int pmu, const intel_ctx_t *ctx,
 	    const struct intel_execution_engine2 *e,
 	    int sema_pct,
 	    int busy_pct)
@@ -814,8 +858,8 @@ __sema_busy(int gem_fd, int pmu, const intel_ctx_t *ctx,
 	uint64_t start[2], val[2];
 	struct drm_i915_gem_relocation_entry reloc[2];
 	struct drm_i915_gem_exec_object2 obj = {
-		.handle = create_sema(gem_fd, reloc),
-		.relocation_count = 2,
+		.handle = create_sema(gem_fd, ahnd, reloc, &obj.offset),
+		.relocation_count = !ahnd ? 2 : 0,
 		.relocs_ptr = to_user_pointer(reloc),
 	};
 	struct drm_i915_gem_execbuffer2 eb = {
@@ -835,7 +879,7 @@ __sema_busy(int gem_fd, int pmu, const intel_ctx_t *ctx,
 
 	map = gem_mmap__wc(gem_fd, obj.handle, 0, 4096, PROT_WRITE);
 	gem_execbuf(gem_fd, &eb);
-	spin = igt_spin_new(gem_fd, .ctx = ctx, .engine = e->flags);
+	spin = igt_spin_new(gem_fd, .ahnd = ahnd, .ctx = ctx, .engine = e->flags);
 
 	/* Wait until the batch is executed and the semaphore is busy-waiting */
 	while (!READ_ONCE(*map) && gem_bo_busy(gem_fd, obj.handle))
@@ -880,6 +924,7 @@ sema_busy(int gem_fd, const intel_ctx_t *ctx,
 	  unsigned int flags)
 {
 	int fd[2];
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	igt_require(intel_gen(intel_get_drm_devid(gem_fd)) >= 8);
 
@@ -888,12 +933,13 @@ sema_busy(int gem_fd, const intel_ctx_t *ctx,
 	fd[1] = open_group(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance),
 			   fd[0]);
 
-	__sema_busy(gem_fd, fd[0], ctx, e, 50, 100);
-	__sema_busy(gem_fd, fd[0], ctx, e, 25, 50);
-	__sema_busy(gem_fd, fd[0], ctx, e, 75, 75);
+	__sema_busy(gem_fd, ahnd, fd[0], ctx, e, 50, 100);
+	__sema_busy(gem_fd, ahnd, fd[0], ctx, e, 25, 50);
+	__sema_busy(gem_fd, ahnd, fd[0], ctx, e, 75, 75);
 
 	close(fd[0]);
 	close(fd[1]);
+	put_ahnd(ahnd);
 }
 
 static void test_awake(int i915, const intel_ctx_t *ctx)
@@ -902,13 +948,14 @@ static void test_awake(int i915, const intel_ctx_t *ctx)
 	unsigned long slept;
 	uint64_t val;
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	fd = perf_i915_open(i915, I915_PMU_SOFTWARE_GT_AWAKE_TIME);
 	igt_skip_on(fd < 0);
 
 	/* Check that each engine is captured by the GT wakeref */
 	for_each_ctx_engine(i915, ctx, e) {
-		igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+		igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx, .engine = e->flags);
 
 		val = pmu_read_single(fd);
 		slept = measured_usleep(batch_duration_ns / 1000);
@@ -920,7 +967,7 @@ static void test_awake(int i915, const intel_ctx_t *ctx)
 
 	/* And that the total GT wakeref matches walltime not summation */
 	for_each_ctx_engine(i915, ctx, e)
-		igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+		igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx, .engine = e->flags);
 
 	val = pmu_read_single(fd);
 	slept = measured_usleep(batch_duration_ns / 1000);
@@ -931,6 +978,7 @@ static void test_awake(int i915, const intel_ctx_t *ctx)
 
 	igt_free_spins(i915);
 	close(fd);
+	put_ahnd(ahnd);
 }
 
 #define   MI_WAIT_FOR_PIPE_C_VBLANK (1<<21)
@@ -1147,6 +1195,7 @@ multi_client(int gem_fd, const intel_ctx_t *ctx,
 	uint64_t val[2], ts[2], perf_slept[2];
 	igt_spin_t *spin;
 	int fd[2];
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id);
 
 	gem_quiescent_gpu(gem_fd);
 
@@ -1159,7 +1208,7 @@ multi_client(int gem_fd, const intel_ctx_t *ctx,
 	 */
 	fd[1] = open_pmu(gem_fd, config);
 
-	spin = spin_sync(gem_fd, ctx, e);
+	spin = spin_sync(gem_fd, ahnd, ctx, e);
 
 	val[0] = val[1] = __pmu_read_single(fd[0], &ts[0]);
 	slept[1] = measured_usleep(batch_duration_ns / 1000);
@@ -1177,6 +1226,7 @@ multi_client(int gem_fd, const intel_ctx_t *ctx,
 	gem_sync(gem_fd, spin->handle);
 	igt_spin_free(gem_fd, spin);
 	close(fd[0]);
+	put_ahnd(ahnd);
 
 	assert_within_epsilon(val[0], perf_slept[0], tolerance);
 	assert_within_epsilon(val[1], perf_slept[1], tolerance);
@@ -1240,6 +1290,7 @@ static void cpu_hotplug(int gem_fd)
 	int fd, ret;
 	int cur = 0;
 	char buf;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, 0);
 
 	igt_require(cpu0_hotplug_support());
 
@@ -1250,8 +1301,10 @@ static void cpu_hotplug(int gem_fd)
 	 * Create two spinners so test can ensure shorter gaps in engine
 	 * busyness as it is terminating one and re-starting the other.
 	 */
-	spin[0] = igt_spin_new(gem_fd, .engine = I915_EXEC_DEFAULT);
-	spin[1] = __igt_spin_new(gem_fd, .engine = I915_EXEC_DEFAULT);
+	spin[0] = igt_spin_new(gem_fd, .ahnd = ahnd,
+			       .engine = I915_EXEC_DEFAULT);
+	spin[1] = __igt_spin_new(gem_fd, .ahnd = ahnd,
+				 .engine = I915_EXEC_DEFAULT);
 
 	val = __pmu_read_single(fd, &ts[0]);
 
@@ -1334,7 +1387,7 @@ static void cpu_hotplug(int gem_fd)
 			break;
 
 		igt_spin_free(gem_fd, spin[cur]);
-		spin[cur] = __igt_spin_new(gem_fd,
+		spin[cur] = __igt_spin_new(gem_fd, .ahnd = ahnd,
 					   .engine = I915_EXEC_DEFAULT);
 		cur ^= 1;
 	}
@@ -1348,6 +1401,7 @@ static void cpu_hotplug(int gem_fd)
 	igt_waitchildren();
 	close(fd);
 	close(link[0]);
+	put_ahnd(ahnd);
 
 	/* Skip if child signals a problem with offlining a CPU. */
 	igt_skip_on(buf == 's');
@@ -1372,6 +1426,7 @@ test_interrupts(int gem_fd)
 	uint64_t idle, busy;
 	int fence_fd;
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, 0);
 
 	gem_quiescent_gpu(gem_fd);
 
@@ -1380,6 +1435,7 @@ test_interrupts(int gem_fd)
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++) {
 		spin[i] = __igt_spin_new(gem_fd,
+					 .ahnd = ahnd,
 					 .engine = I915_EXEC_DEFAULT,
 					 .flags = IGT_SPIN_FENCE_OUT);
 		if (i == 0) {
@@ -1418,6 +1474,7 @@ test_interrupts(int gem_fd)
 	/* Free batches. */
 	for (int i = 0; i < target; i++)
 		igt_spin_free(gem_fd, spin[i]);
+	put_ahnd(ahnd);
 
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
@@ -1435,6 +1492,7 @@ test_interrupts_sync(int gem_fd)
 	struct pollfd pfd;
 	uint64_t idle, busy;
 	int fd;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, 0);
 
 	gem_quiescent_gpu(gem_fd);
 
@@ -1443,6 +1501,7 @@ test_interrupts_sync(int gem_fd)
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++)
 		spin[i] = __igt_spin_new(gem_fd,
+					 .ahnd = ahnd,
 					 .flags = IGT_SPIN_FENCE_OUT);
 
 	/* Wait for idle state. */
@@ -1467,6 +1526,7 @@ test_interrupts_sync(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	put_ahnd(ahnd);
 
 	igt_assert_lte(target, busy);
 }
@@ -1479,6 +1539,7 @@ test_frequency(int gem_fd)
 	double min[2], max[2];
 	igt_spin_t *spin;
 	int fd[2], sysfs;
+	uint64_t ahnd = get_reloc_ahnd(gem_fd, 0);
 
 	sysfs = igt_sysfs_open(gem_fd);
 	igt_require(sysfs >= 0);
@@ -1506,7 +1567,7 @@ test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_boost_freq_mhz") == min_freq);
 
 	gem_quiescent_gpu(gem_fd); /* Idle to be sure the change takes effect */
-	spin = spin_sync_flags(gem_fd, 0, I915_EXEC_DEFAULT);
+	spin = spin_sync_flags(gem_fd, ahnd, 0, I915_EXEC_DEFAULT);
 
 	slept = pmu_read_multi(fd[0], 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1532,7 +1593,7 @@ test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz") == max_freq);
 
 	gem_quiescent_gpu(gem_fd);
-	spin = spin_sync_flags(gem_fd, 0, I915_EXEC_DEFAULT);
+	spin = spin_sync_flags(gem_fd, ahnd, 0, I915_EXEC_DEFAULT);
 
 	slept = pmu_read_multi(fd[0], 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1553,6 +1614,7 @@ test_frequency(int gem_fd)
 			 min_freq, igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz"));
 	close(fd[0]);
 	close(fd[1]);
+	put_ahnd(ahnd);
 
 	igt_info("Min frequency: requested %.1f, actual %.1f\n",
 		 min[0], min[1]);
@@ -1839,9 +1901,13 @@ accuracy(int gem_fd, const intel_ctx_t *ctx,
 		};
 		uint64_t total_busy_ns = 0, total_ns = 0;
 		igt_spin_t *spin;
+		uint64_t ahnd;
+
+		intel_allocator_init();
+		ahnd = get_reloc_ahnd(gem_fd, 0);
 
 		/* Allocate our spin batch and idle it. */
-		spin = igt_spin_new(gem_fd, .ctx = ctx, .engine = e->flags);
+		spin = igt_spin_new(gem_fd, .ahnd = ahnd, .ctx = ctx, .engine = e->flags);
 		igt_spin_end(spin);
 		gem_sync(gem_fd, spin->handle);
 
@@ -1912,6 +1978,7 @@ accuracy(int gem_fd, const intel_ctx_t *ctx,
 		}
 
 		igt_spin_free(gem_fd, spin);
+		put_ahnd(ahnd);
 	}
 
 	fd = open_pmu(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance));
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 50/52] tests/sysfs_heartbeat_interval: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (48 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 49/52] tests/perf_pmu: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 51/52] tests/sysfs_preempt_timeout: " Zbigniew Kempczyński
                   ` (3 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
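
The mixed-workload test brackets its forked clients with
intel_allocator_multiprocess_start()/stop(). A hedged sketch of that
usage (the forked body here is only an illustration of the idea, based
on how the allocator API is used in this series):

	#include "igt.h"
	#include "intel_allocator.h"

	static void forked_spinners(int i915)
	{
		/* Children talk to the allocator in the parent process,
		 * so multiprocess mode has to be started before forking.
		 */
		intel_allocator_multiprocess_start();

		igt_fork(child, 2) {
			uint64_t ahnd = get_reloc_ahnd(i915, 0);
			igt_spin_t *spin = igt_spin_new(i915, .ahnd = ahnd);

			igt_spin_free(i915, spin);
			put_ahnd(ahnd);
		}
		igt_waitchildren();

		intel_allocator_multiprocess_stop();
	}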

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/sysfs_heartbeat_interval.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/tests/i915/sysfs_heartbeat_interval.c b/tests/i915/sysfs_heartbeat_interval.c
index b70b653b1..bc8d1b3c0 100644
--- a/tests/i915/sysfs_heartbeat_interval.c
+++ b/tests/i915/sysfs_heartbeat_interval.c
@@ -36,6 +36,7 @@
 #include "i915/gem.h"
 #include "i915/gem_context.h"
 #include "i915/gem_engine_topology.h"
+#include "intel_allocator.h"
 #include "igt_debugfs.h"
 #include "igt_dummyload.h"
 #include "igt_sysfs.h"
@@ -149,6 +150,7 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 	igt_spin_t *spin[2];
 	uint64_t elapsed;
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd[2];
 
 	igt_assert(igt_sysfs_scanf(engine, "class", "%u", &class) == 1);
 	igt_assert(igt_sysfs_scanf(engine, "instance", "%u", &inst) == 1);
@@ -156,15 +158,18 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 	set_heartbeat(engine, timeout);
 
 	ctx[0] = create_ctx(i915, class, inst, 1023);
-	spin[0] = igt_spin_new(i915, .ctx = ctx[0],
+	ahnd[0] = get_reloc_ahnd(i915, ctx[0]->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx[0],
 			       .flags = (IGT_SPIN_NO_PREEMPTION |
 					 IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT));
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx[1] = create_ctx(i915, class, inst, -1023);
+	ahnd[1] = get_reloc_ahnd(i915, ctx[1]->id);
 	igt_nsec_elapsed(&ts);
-	spin[1] = igt_spin_new(i915, .ctx = ctx[1], .flags = IGT_SPIN_POLL_RUN);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx[1],
+				.flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin[1]);
 	elapsed = igt_nsec_elapsed(&ts);
 
@@ -177,6 +182,8 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 
 	intel_ctx_destroy(i915, ctx[1]);
 	intel_ctx_destroy(i915, ctx[0]);
+	put_ahnd(ahnd[1]);
+	put_ahnd(ahnd[0]);
 	gem_quiescent_gpu(i915);
 
 	return elapsed;
@@ -292,17 +299,19 @@ static void client(int i915, int engine, int *ctl, int duration, int expect)
 	unsigned int class, inst;
 	unsigned long count = 0;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	igt_assert(igt_sysfs_scanf(engine, "class", "%u", &class) == 1);
 	igt_assert(igt_sysfs_scanf(engine, "instance", "%u", &inst) == 1);
 
 	ctx = create_ctx(i915, class, inst, 0);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	while (!READ_ONCE(*ctl)) {
 		unsigned int elapsed;
 		igt_spin_t *spin;
 
-		spin = igt_spin_new(i915, .ctx = ctx,
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				    .flags = (IGT_SPIN_NO_PREEMPTION |
 					      IGT_SPIN_POLL_RUN |
 					      IGT_SPIN_FENCE_OUT));
@@ -331,6 +340,7 @@ static void client(int i915, int engine, int *ctl, int duration, int expect)
 	}
 
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 	igt_info("%s client completed %lu spins\n",
 		 expect < 0 ? "Bad" : "Good", count);
 }
@@ -354,6 +364,8 @@ static void __test_mixed(int i915, int engine,
 	 * terminate the hog leaving the good client to run.
 	 */
 
+	intel_allocator_multiprocess_start();
+
 	igt_assert(igt_sysfs_scanf(engine, ATTR, "%u", &saved) == 1);
 	igt_debug("Initial %s:%u\n", ATTR, saved);
 	gem_quiescent_gpu(i915);
@@ -378,6 +390,7 @@ static void __test_mixed(int i915, int engine,
 
 	gem_quiescent_gpu(i915);
 	set_heartbeat(engine, saved);
+	intel_allocator_multiprocess_stop();
 }
 
 static void test_mixed(int i915, int engine)
@@ -414,6 +427,7 @@ static void test_off(int i915, int engine)
 	unsigned int saved;
 	igt_spin_t *spin;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * Some other clients request that there is never any interruption
@@ -433,8 +447,9 @@ static void test_off(int i915, int engine)
 	set_heartbeat(engine, 0);
 
 	ctx = create_ctx(i915, class, inst, 0);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .flags = (IGT_SPIN_POLL_RUN |
 				      IGT_SPIN_NO_PREEMPTION |
 				      IGT_SPIN_FENCE_OUT));
@@ -455,6 +470,7 @@ static void test_off(int i915, int engine)
 	gem_quiescent_gpu(i915);
 	set_heartbeat(engine, saved);
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
 igt_main
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 51/52] tests/sysfs_preempt_timeout: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (49 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 50/52] tests/sysfs_heartbeat_interval: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 52/52] tests/sysfs_timeslice_duration: " Zbigniew Kempczyński
                   ` (2 subsequent siblings)
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.
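
Offsets are handed out per context, so each context used by the test
gets its own allocator handle. In sketch form (the wrapper is
illustrative and the contexts are assumed to be created by the caller):

	#include "igt.h"
	#include "intel_allocator.h"

	static void two_ctx_spinners(int i915, const intel_ctx_t *ctx[2])
	{
		uint64_t ahnd[2];
		igt_spin_t *spin[2];

		for (int i = 0; i < 2; i++) {
			/* one allocator handle per context */
			ahnd[i] = get_reloc_ahnd(i915, ctx[i]->id);
			spin[i] = igt_spin_new(i915, .ahnd = ahnd[i],
					       .ctx = ctx[i],
					       .flags = IGT_SPIN_POLL_RUN);
		}

		for (int i = 0; i < 2; i++) {
			igt_spin_free(i915, spin[i]);
			put_ahnd(ahnd[i]);
		}
	}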

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/sysfs_preempt_timeout.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/tests/i915/sysfs_preempt_timeout.c b/tests/i915/sysfs_preempt_timeout.c
index 9f00093ea..d176ae72e 100644
--- a/tests/i915/sysfs_preempt_timeout.c
+++ b/tests/i915/sysfs_preempt_timeout.c
@@ -37,6 +37,7 @@
 #include "igt_debugfs.h"
 #include "igt_dummyload.h"
 #include "igt_sysfs.h"
+#include "intel_allocator.h"
 #include "sw_sync.h"
 
 #define ATTR "preempt_timeout_ms"
@@ -143,6 +144,7 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 	igt_spin_t *spin[2];
 	uint64_t elapsed;
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd[2];
 
 	igt_assert(igt_sysfs_scanf(engine, "class", "%u", &class) == 1);
 	igt_assert(igt_sysfs_scanf(engine, "instance", "%u", &inst) == 1);
@@ -150,15 +152,18 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 	set_preempt_timeout(engine, timeout);
 
 	ctx[0] = create_ctx(i915, class, inst, -1023);
-	spin[0] = igt_spin_new(i915, .ctx = ctx[0],
+	ahnd[0] = get_reloc_ahnd(i915, ctx[0]->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx[0],
 			       .flags = (IGT_SPIN_NO_PREEMPTION |
 					 IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT));
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx[1] = create_ctx(i915, class, inst, 1023);
+	ahnd[1] = get_reloc_ahnd(i915, ctx[1]->id);
 	igt_nsec_elapsed(&ts);
-	spin[1] = igt_spin_new(i915, .ctx = ctx[1], .flags = IGT_SPIN_POLL_RUN);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx[1],
+				.flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin[1]);
 	elapsed = igt_nsec_elapsed(&ts);
 
@@ -171,6 +176,8 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 
 	intel_ctx_destroy(i915, ctx[1]);
 	intel_ctx_destroy(i915, ctx[0]);
+	put_ahnd(ahnd[1]);
+	put_ahnd(ahnd[0]);
 	gem_quiescent_gpu(i915);
 
 	return elapsed;
@@ -231,6 +238,7 @@ static void test_off(int i915, int engine)
 	igt_spin_t *spin[2];
 	unsigned int saved;
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd[2];
 
 	/*
 	 * We support setting the timeout to 0 to disable the reset on
@@ -252,14 +260,17 @@ static void test_off(int i915, int engine)
 	set_preempt_timeout(engine, 0);
 
 	ctx[0] = create_ctx(i915, class, inst, -1023);
-	spin[0] = igt_spin_new(i915, .ctx = ctx[0],
+	ahnd[0] = get_reloc_ahnd(i915, ctx[0]->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx[0],
 			       .flags = (IGT_SPIN_NO_PREEMPTION |
 					 IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT));
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx[1] = create_ctx(i915, class, inst, 1023);
-	spin[1] = igt_spin_new(i915, .ctx = ctx[1], .flags = IGT_SPIN_POLL_RUN);
+	ahnd[1] = get_reloc_ahnd(i915, ctx[1]->id);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx[1],
+				.flags = IGT_SPIN_POLL_RUN);
 
 	for (int i = 0; i < 150; i++) {
 		igt_assert_eq(sync_fence_status(spin[0]->out_fence), 0);
@@ -278,6 +289,8 @@ static void test_off(int i915, int engine)
 
 	intel_ctx_destroy(i915, ctx[1]);
 	intel_ctx_destroy(i915, ctx[0]);
+	put_ahnd(ahnd[1]);
+	put_ahnd(ahnd[0]);
 
 	igt_assert(enable_hangcheck(i915, true));
 	gem_quiescent_gpu(i915);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v3 52/52] tests/sysfs_timeslice_duration: Adopt to use allocator
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (50 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 51/52] tests/sysfs_preempt_timeout: " Zbigniew Kempczyński
@ 2021-07-26 20:00 ` Zbigniew Kempczyński
  2021-07-26 21:33 ` [igt-dev] ✓ Fi.CI.BAT: success for Add allocator support in IGT (rev3) Patchwork
  2021-07-27  2:14 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  53 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-07-26 20:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Petri Latvala

For newer gens we're not able to rely on relocations. Adapt the test to
use offsets acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
---
 tests/i915/sysfs_timeslice_duration.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/tests/i915/sysfs_timeslice_duration.c b/tests/i915/sysfs_timeslice_duration.c
index b73ee3889..8a2f1c2f2 100644
--- a/tests/i915/sysfs_timeslice_duration.c
+++ b/tests/i915/sysfs_timeslice_duration.c
@@ -37,6 +37,7 @@
 #include "i915/gem_mman.h"
 #include "igt_dummyload.h"
 #include "igt_sysfs.h"
+#include "intel_allocator.h"
 #include "ioctl_wrappers.h"
 #include "intel_chipset.h"
 #include "intel_reg.h"
@@ -371,6 +372,7 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 	igt_spin_t *spin[2];
 	uint64_t elapsed;
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd[2];
 
 	igt_assert(igt_sysfs_scanf(engine, "class", "%u", &class) == 1);
 	igt_assert(igt_sysfs_scanf(engine, "instance", "%u", &inst) == 1);
@@ -378,15 +380,18 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 	set_timeslice(engine, timeout);
 
 	ctx[0] = create_ctx(i915, class, inst, 0);
-	spin[0] = igt_spin_new(i915, .ctx = ctx[0],
+	ahnd[0] = get_reloc_ahnd(i915, ctx[0]->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx[0],
 			       .flags = (IGT_SPIN_NO_PREEMPTION |
 					 IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT));
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx[1] = create_ctx(i915, class, inst, 0);
+	ahnd[1] = get_reloc_ahnd(i915, ctx[1]->id);
 	igt_nsec_elapsed(&ts);
-	spin[1] = igt_spin_new(i915, .ctx = ctx[1], .flags = IGT_SPIN_POLL_RUN);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx[1],
+				.flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin[1]);
 	elapsed = igt_nsec_elapsed(&ts);
 
@@ -399,6 +404,8 @@ static uint64_t __test_timeout(int i915, int engine, unsigned int timeout)
 
 	intel_ctx_destroy(i915, ctx[1]);
 	intel_ctx_destroy(i915, ctx[0]);
+	put_ahnd(ahnd[1]);
+	put_ahnd(ahnd[0]);
 	gem_quiescent_gpu(i915);
 
 	return elapsed;
@@ -460,6 +467,7 @@ static void test_off(int i915, int engine)
 	unsigned int saved;
 	igt_spin_t *spin[2];
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd[2];
 
 	/*
 	 * As always, there are some who must run uninterrupted and simply do
@@ -482,14 +490,17 @@ static void test_off(int i915, int engine)
 	set_timeslice(engine, 0);
 
 	ctx[0] = create_ctx(i915, class, inst, 0);
-	spin[0] = igt_spin_new(i915, .ctx = ctx[0],
+	ahnd[0] = get_reloc_ahnd(i915, ctx[0]->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx[0],
 			       .flags = (IGT_SPIN_NO_PREEMPTION |
 					 IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT));
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx[1] = create_ctx(i915, class, inst, 0);
-	spin[1] = igt_spin_new(i915, .ctx = ctx[1], .flags = IGT_SPIN_POLL_RUN);
+	ahnd[1] = get_reloc_ahnd(i915, ctx[1]->id);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx[1],
+				.flags = IGT_SPIN_POLL_RUN);
 
 	for (int i = 0; i < 150; i++) {
 		igt_assert_eq(sync_fence_status(spin[0]->out_fence), 0);
@@ -508,6 +519,8 @@ static void test_off(int i915, int engine)
 
 	intel_ctx_destroy(i915, ctx[1]);
 	intel_ctx_destroy(i915, ctx[0]);
+	put_ahnd(ahnd[1]);
+	put_ahnd(ahnd[0]);
 
 	igt_assert(enable_hangcheck(i915, true));
 	gem_quiescent_gpu(i915);
-- 
2.26.0


* [igt-dev] ✓ Fi.CI.BAT: success for Add allocator support in IGT (rev3)
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (51 preceding siblings ...)
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 52/52] tests/sysfs_timeslice_duration: " Zbigniew Kempczyński
@ 2021-07-26 21:33 ` Patchwork
  2021-07-27  2:14 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  53 siblings, 0 replies; 102+ messages in thread
From: Patchwork @ 2021-07-26 21:33 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


== Series Details ==

Series: Add allocator support in IGT (rev3)
URL   : https://patchwork.freedesktop.org/series/92881/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10398 -> IGTPW_6060
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/index.html

Known issues
------------

  Here are the changes found in IGTPW_6060 that come from known issues:

### IGT changes ###

  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1886]: https://gitlab.freedesktop.org/drm/intel/issues/1886
  [i915#3717]: https://gitlab.freedesktop.org/drm/intel/issues/3717


Participating hosts (39 -> 35)
------------------------------

  Missing    (4): fi-ilk-m540 fi-bsw-cyan fi-bdw-samus fi-hsw-4200u 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_6151 -> IGTPW_6060

  CI-20190529: 20190529
  CI_DRM_10398: 64169b4181952b6b1dbbb79dea3a5d85d1d2a85c @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_6060: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/index.html
  IGT_6151: c3170c2d3744521b8351a4b9c579792bc9a5f835 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git



== Testlist changes ==

+igt@gem_softpin@allocator-evict
+igt@gem_softpin@allocator-evict-all-engines

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/index.html


* [igt-dev] ✗ Fi.CI.IGT: failure for Add allocator support in IGT (rev3)
  2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
                   ` (52 preceding siblings ...)
  2021-07-26 21:33 ` [igt-dev] ✓ Fi.CI.BAT: success for Add allocator support in IGT (rev3) Patchwork
@ 2021-07-27  2:14 ` Patchwork
  53 siblings, 0 replies; 102+ messages in thread
From: Patchwork @ 2021-07-27  2:14 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


== Series Details ==

Series: Add allocator support in IGT (rev3)
URL   : https://patchwork.freedesktop.org/series/92881/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10398_full -> IGTPW_6060_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_6060_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_6060_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_6060_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_exec_capture@capture@rcs0:
    - shard-snb:          NOTRUN -> [FAIL][1] +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb2/igt@gem_exec_capture@capture@rcs0.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-glk:          [PASS][2] -> [INCOMPLETE][3]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk6/igt@gem_exec_fair@basic-none-share@rcs0.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk3/igt@gem_exec_fair@basic-none-share@rcs0.html

  * {igt@gem_softpin@allocator-evict-all-engines} (NEW):
    - shard-kbl:          NOTRUN -> [INCOMPLETE][4]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl3/igt@gem_softpin@allocator-evict-all-engines.html

  * igt@kms_frontbuffer_tracking@psr-slowdraw:
    - shard-tglb:         [PASS][5] -> [INCOMPLETE][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-tglb1/igt@kms_frontbuffer_tracking@psr-slowdraw.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb7/igt@kms_frontbuffer_tracking@psr-slowdraw.html

  
New tests
---------

  New tests have been introduced between CI_DRM_10398_full and IGTPW_6060_full:

### New IGT tests (7) ###

  * igt@gem_softpin@allocator-evict:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_softpin@allocator-evict-all-engines:
    - Statuses : 1 incomplete(s) 4 pass(s)
    - Exec time: [0.0, 44.31] s

  * igt@gem_softpin@allocator-evict@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [26.48, 43.45] s

  * igt@gem_softpin@allocator-evict@rcs0:
    - Statuses : 5 pass(s)
    - Exec time: [25.87, 40.75] s

  * igt@gem_softpin@allocator-evict@vcs0:
    - Statuses : 5 pass(s)
    - Exec time: [27.17, 43.26] s

  * igt@gem_softpin@allocator-evict@vcs1:
    - Statuses : 2 pass(s)
    - Exec time: [26.27, 28.99] s

  * igt@gem_softpin@allocator-evict@vecs0:
    - Statuses : 5 pass(s)
    - Exec time: [27.00, 44.42] s

  

Known issues
------------

  Here are the changes found in IGTPW_6060_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_persistence@legacy-engines-mixed-process:
    - shard-snb:          NOTRUN -> [SKIP][7] ([fdo#109271] / [i915#1099]) +4 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb6/igt@gem_ctx_persistence@legacy-engines-mixed-process.html

  * igt@gem_ctx_sseu@mmap-args:
    - shard-tglb:         NOTRUN -> [SKIP][8] ([i915#280])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb7/igt@gem_ctx_sseu@mmap-args.html

  * igt@gem_eio@unwedge-stress:
    - shard-snb:          NOTRUN -> [FAIL][9] ([i915#3354])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb2/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-kbl:          NOTRUN -> [FAIL][10] ([i915#2846])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl3/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglb:         [PASS][11] -> [FAIL][12] ([i915#2842])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-tglb3/igt@gem_exec_fair@basic-flow@rcs0.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb7/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][13] ([i915#2842]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl1/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-none-vip@rcs0:
    - shard-tglb:         NOTRUN -> [FAIL][14] ([i915#2842]) +5 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb6/igt@gem_exec_fair@basic-none-vip@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
    - shard-glk:          NOTRUN -> [FAIL][15] ([i915#2842])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk1/igt@gem_exec_fair@basic-none@rcs0.html

  * igt@gem_exec_fair@basic-none@vecs0:
    - shard-iclb:         NOTRUN -> [FAIL][16] ([i915#2842]) +4 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb3/igt@gem_exec_fair@basic-none@vecs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-tglb:         [PASS][17] -> [SKIP][18] ([i915#2190])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-tglb7/igt@gem_huc_copy@huc-copy.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb6/igt@gem_huc_copy@huc-copy.html

  * igt@gem_render_copy@yf-tiled-to-vebox-linear:
    - shard-iclb:         NOTRUN -> [SKIP][19] ([i915#768]) +2 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb1/igt@gem_render_copy@yf-tiled-to-vebox-linear.html

  * igt@gem_userptr_blits@input-checking:
    - shard-apl:          NOTRUN -> [DMESG-WARN][20] ([i915#3002])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl8/igt@gem_userptr_blits@input-checking.html
    - shard-snb:          NOTRUN -> [DMESG-WARN][21] ([i915#3002])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb5/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-apl:          NOTRUN -> [FAIL][22] ([i915#3318])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl8/igt@gem_userptr_blits@vma-merge.html

  * igt@gen7_exec_parse@basic-allocation:
    - shard-tglb:         NOTRUN -> [SKIP][23] ([fdo#109289])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb3/igt@gen7_exec_parse@basic-allocation.html

  * igt@gen9_exec_parse@batch-without-end:
    - shard-iclb:         NOTRUN -> [SKIP][24] ([i915#2856])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb5/igt@gen9_exec_parse@batch-without-end.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-tglb:         NOTRUN -> [FAIL][25] ([i915#454])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb6/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-apl:          NOTRUN -> [FAIL][26] ([i915#3343])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl3/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_rpm@pm-caching:
    - shard-tglb:         NOTRUN -> [SKIP][27] ([i915#579]) +2 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb5/igt@i915_pm_rpm@pm-caching.html

  * igt@i915_pm_rpm@sysfs-read:
    - shard-iclb:         NOTRUN -> [SKIP][28] ([i915#579]) +1 similar issue
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb4/igt@i915_pm_rpm@sysfs-read.html

  * igt@i915_selftest@live@hangcheck:
    - shard-snb:          NOTRUN -> [INCOMPLETE][29] ([i915#2782])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb2/igt@i915_selftest@live@hangcheck.html

  * igt@i915_suspend@sysfs-reader:
    - shard-apl:          [PASS][30] -> [DMESG-WARN][31] ([i915#180]) +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-apl8/igt@i915_suspend@sysfs-reader.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl2/igt@i915_suspend@sysfs-reader.html

  * igt@kms_addfb_basic@invalid-smem-bo-on-discrete:
    - shard-iclb:         NOTRUN -> [SKIP][32] ([i915#3826])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb8/igt@kms_addfb_basic@invalid-smem-bo-on-discrete.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
    - shard-iclb:         NOTRUN -> [SKIP][33] ([i915#1769])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb3/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
    - shard-tglb:         NOTRUN -> [SKIP][34] ([i915#1769])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb3/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html

  * igt@kms_big_fb@linear-32bpp-rotate-0:
    - shard-glk:          [PASS][35] -> [DMESG-WARN][36] ([i915#118] / [i915#95])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk6/igt@kms_big_fb@linear-32bpp-rotate-0.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk8/igt@kms_big_fb@linear-32bpp-rotate-0.html

  * igt@kms_big_fb@x-tiled-8bpp-rotate-270:
    - shard-iclb:         NOTRUN -> [SKIP][37] ([fdo#110725] / [fdo#111614])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb7/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html
    - shard-tglb:         NOTRUN -> [SKIP][38] ([fdo#111614])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb1/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip:
    - shard-kbl:          NOTRUN -> [SKIP][39] ([fdo#109271] / [i915#3777]) +1 similar issue
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl3/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-glk:          NOTRUN -> [SKIP][40] ([fdo#109271] / [i915#3777])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk2/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-180:
    - shard-tglb:         NOTRUN -> [SKIP][41] ([fdo#111615]) +2 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb7/igt@kms_big_fb@yf-tiled-8bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-apl:          NOTRUN -> [SKIP][42] ([fdo#109271] / [i915#3777]) +1 similar issue
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl7/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-apl:          NOTRUN -> [SKIP][43] ([fdo#109271]) +264 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl6/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-iclb:         NOTRUN -> [SKIP][44] ([fdo#110723]) +1 similar issue
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb7/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_joiner@basic:
    - shard-tglb:         NOTRUN -> [SKIP][45] ([i915#2705])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb1/igt@kms_big_joiner@basic.html

  * igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_rc_ccs:
    - shard-iclb:         NOTRUN -> [SKIP][46] ([fdo#109278]) +21 similar issues
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb6/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_rc_ccs.html

  * igt@kms_ccs@pipe-b-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([i915#3689]) +7 similar issues
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb2/igt@kms_ccs@pipe-b-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-d-bad-pixel-format-y_tiled_ccs:
    - shard-snb:          NOTRUN -> [SKIP][48] ([fdo#109271]) +364 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb7/igt@kms_ccs@pipe-d-bad-pixel-format-y_tiled_ccs.html

  * igt@kms_chamelium@dp-hpd-storm:
    - shard-iclb:         NOTRUN -> [SKIP][49] ([fdo#109284] / [fdo#111827]) +6 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb6/igt@kms_chamelium@dp-hpd-storm.html

  * igt@kms_chamelium@hdmi-crc-nonplanar-formats:
    - shard-glk:          NOTRUN -> [SKIP][50] ([fdo#109271] / [fdo#111827]) +10 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk2/igt@kms_chamelium@hdmi-crc-nonplanar-formats.html

  * igt@kms_chamelium@hdmi-edid-change-during-suspend:
    - shard-apl:          NOTRUN -> [SKIP][51] ([fdo#109271] / [fdo#111827]) +19 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl8/igt@kms_chamelium@hdmi-edid-change-during-suspend.html

  * igt@kms_color@pipe-d-ctm-red-to-blue:
    - shard-iclb:         NOTRUN -> [SKIP][52] ([fdo#109278] / [i915#1149])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb4/igt@kms_color@pipe-d-ctm-red-to-blue.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-75:
    - shard-tglb:         NOTRUN -> [SKIP][53] ([fdo#109284] / [fdo#111827]) +6 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb2/igt@kms_color_chamelium@pipe-a-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-a-ctm-blue-to-red:
    - shard-kbl:          NOTRUN -> [SKIP][54] ([fdo#109271] / [fdo#111827]) +25 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl7/igt@kms_color_chamelium@pipe-a-ctm-blue-to-red.html

  * igt@kms_color_chamelium@pipe-b-ctm-negative:
    - shard-snb:          NOTRUN -> [SKIP][55] ([fdo#109271] / [fdo#111827]) +14 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-snb6/igt@kms_color_chamelium@pipe-b-ctm-negative.html

  * igt@kms_color_chamelium@pipe-d-ctm-negative:
    - shard-iclb:         NOTRUN -> [SKIP][56] ([fdo#109278] / [fdo#109284] / [fdo#111827])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb6/igt@kms_color_chamelium@pipe-d-ctm-negative.html

  * igt@kms_content_protection@legacy:
    - shard-apl:          NOTRUN -> [TIMEOUT][57] ([i915#1319])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl6/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@uevent:
    - shard-kbl:          NOTRUN -> [FAIL][58] ([i915#2105])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl6/igt@kms_content_protection@uevent.html
    - shard-apl:          NOTRUN -> [FAIL][59] ([i915#2105])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl8/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x170-sliding:
    - shard-kbl:          NOTRUN -> [SKIP][60] ([fdo#109271]) +259 similar issues
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl2/igt@kms_cursor_crc@pipe-a-cursor-512x170-sliding.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x512-rapid-movement:
    - shard-iclb:         NOTRUN -> [SKIP][61] ([fdo#109278] / [fdo#109279])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb8/igt@kms_cursor_crc@pipe-a-cursor-512x512-rapid-movement.html

  * igt@kms_cursor_crc@pipe-c-cursor-32x32-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][62] ([i915#3319])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb5/igt@kms_cursor_crc@pipe-c-cursor-32x32-onscreen.html

  * igt@kms_cursor_crc@pipe-d-cursor-512x512-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][63] ([fdo#109279] / [i915#3359]) +1 similar issue
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb5/igt@kms_cursor_crc@pipe-d-cursor-512x512-onscreen.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([fdo#109274] / [fdo#109278]) +3 similar issues
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb6/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic:
    - shard-tglb:         NOTRUN -> [SKIP][65] ([fdo#111825]) +17 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb3/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic.html

  * igt@kms_flip@2x-nonexisting-fb:
    - shard-iclb:         NOTRUN -> [SKIP][66] ([fdo#109274]) +1 similar issue
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb5/igt@kms_flip@2x-nonexisting-fb.html

  * igt@kms_flip@flip-vs-suspend@c-dp1:
    - shard-kbl:          [PASS][67] -> [DMESG-WARN][68] ([i915#180]) +7 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-kbl6/igt@kms_flip@flip-vs-suspend@c-dp1.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl7/igt@kms_flip@flip-vs-suspend@c-dp1.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt:
    - shard-iclb:         NOTRUN -> [SKIP][69] ([fdo#109280]) +14 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-suspend:
    - shard-glk:          NOTRUN -> [SKIP][70] ([fdo#109271]) +88 similar issues
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk5/igt@kms_frontbuffer_tracking@fbcpsr-suspend.html

  * igt@kms_invalid_dotclock:
    - shard-tglb:         NOTRUN -> [SKIP][71] ([fdo#110577])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb5/igt@kms_invalid_dotclock.html
    - shard-iclb:         NOTRUN -> [SKIP][72] ([fdo#109310])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb8/igt@kms_invalid_dotclock.html

  * igt@kms_pipe_b_c_ivb@enable-pipe-c-while-b-has-3-lanes:
    - shard-iclb:         NOTRUN -> [SKIP][73] ([fdo#109289]) +1 similar issue
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb3/igt@kms_pipe_b_c_ivb@enable-pipe-c-while-b-has-3-lanes.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d:
    - shard-glk:          NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#533])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk3/igt@kms_pipe_crc_basic@read-crc-pipe-d.html
    - shard-apl:          NOTRUN -> [SKIP][75] ([fdo#109271] / [i915#533]) +1 similar issue
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl6/igt@kms_pipe_crc_basic@read-crc-pipe-d.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
    - shard-kbl:          NOTRUN -> [SKIP][76] ([fdo#109271] / [i915#533]) +1 similar issue
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
    - shard-apl:          NOTRUN -> [FAIL][77] ([fdo#108145] / [i915#265]) +2 similar issues
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl6/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html
    - shard-glk:          NOTRUN -> [FAIL][78] ([fdo#108145] / [i915#265])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk7/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][79] ([i915#265]) +2 similar issues
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl6/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb:
    - shard-kbl:          NOTRUN -> [FAIL][80] ([fdo#108145] / [i915#265]) +4 similar issues
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl7/igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb.html

  * igt@kms_plane_lowres@pipe-a-tiling-x:
    - shard-iclb:         NOTRUN -> [SKIP][81] ([i915#3536])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb2/igt@kms_plane_lowres@pipe-a-tiling-x.html
    - shard-tglb:         NOTRUN -> [SKIP][82] ([i915#3536])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb6/igt@kms_plane_lowres@pipe-a-tiling-x.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1:
    - shard-glk:          NOTRUN -> [SKIP][83] ([fdo#109271] / [i915#658]) +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk6/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3:
    - shard-iclb:         NOTRUN -> [SKIP][84] ([i915#658]) +1 similar issue
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb5/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html
    - shard-kbl:          NOTRUN -> [SKIP][85] ([fdo#109271] / [i915#658]) +7 similar issues
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl7/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html
    - shard-tglb:         NOTRUN -> [SKIP][86] ([i915#2920]) +1 similar issue
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb7/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
    - shard-apl:          NOTRUN -> [SKIP][87] ([fdo#109271] / [i915#658]) +5 similar issues
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl2/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html

  * igt@kms_psr@psr2_cursor_mmap_cpu:
    - shard-iclb:         NOTRUN -> [SKIP][88] ([fdo#109441]) +2 similar issues
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb1/igt@kms_psr@psr2_cursor_mmap_cpu.html

  * igt@kms_psr@psr2_sprite_mmap_cpu:
    - shard-tglb:         NOTRUN -> [FAIL][89] ([i915#132] / [i915#3467]) +1 similar issue
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb5/igt@kms_psr@psr2_sprite_mmap_cpu.html

  * igt@kms_psr@psr2_sprite_mmap_gtt:
    - shard-iclb:         [PASS][90] -> [SKIP][91] ([fdo#109441])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-iclb2/igt@kms_psr@psr2_sprite_mmap_gtt.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb5/igt@kms_psr@psr2_sprite_mmap_gtt.html

  * igt@kms_vblank@pipe-a-query-forked-hang:
    - shard-kbl:          [PASS][92] -> [SKIP][93] ([fdo#109271])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-kbl4/igt@kms_vblank@pipe-a-query-forked-hang.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl6/igt@kms_vblank@pipe-a-query-forked-hang.html
    - shard-glk:          [PASS][94] -> [SKIP][95] ([fdo#109271])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk5/igt@kms_vblank@pipe-a-query-forked-hang.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk6/igt@kms_vblank@pipe-a-query-forked-hang.html

  * igt@kms_vblank@pipe-a-ts-continuation-modeset-rpm:
    - shard-tglb:         NOTRUN -> [SKIP][96] ([i915#3841])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb7/igt@kms_vblank@pipe-a-ts-continuation-modeset-rpm.html

  * igt@kms_writeback@writeback-check-output:
    - shard-apl:          NOTRUN -> [SKIP][97] ([fdo#109271] / [i915#2437]) +1 similar issue
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl1/igt@kms_writeback@writeback-check-output.html

  * igt@nouveau_crc@ctx-flip-threshold-reset-after-capture:
    - shard-iclb:         NOTRUN -> [SKIP][98] ([i915#2530]) +1 similar issue
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb5/igt@nouveau_crc@ctx-flip-threshold-reset-after-capture.html

  * igt@nouveau_crc@pipe-d-ctx-flip-detection:
    - shard-tglb:         NOTRUN -> [SKIP][99] ([i915#2530]) +1 similar issue
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb6/igt@nouveau_crc@pipe-d-ctx-flip-detection.html
    - shard-iclb:         NOTRUN -> [SKIP][100] ([fdo#109278] / [i915#2530])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb2/igt@nouveau_crc@pipe-d-ctx-flip-detection.html

  * igt@perf_pmu@rc6-runtime-pm:
    - shard-iclb:         NOTRUN -> [SKIP][101] ([i915#293])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb2/igt@perf_pmu@rc6-runtime-pm.html

  * igt@prime_nv_pcopy@test3_1:
    - shard-tglb:         NOTRUN -> [SKIP][102] ([fdo#109291]) +3 similar issues
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb5/igt@prime_nv_pcopy@test3_1.html

  * igt@prime_nv_pcopy@test3_3:
    - shard-iclb:         NOTRUN -> [SKIP][103] ([fdo#109291]) +2 similar issues
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb4/igt@prime_nv_pcopy@test3_3.html

  * igt@prime_vgem@sync@rcs0:
    - shard-iclb:         [PASS][104] -> [INCOMPLETE][105] ([i915#409])
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-iclb6/igt@prime_vgem@sync@rcs0.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb6/igt@prime_vgem@sync@rcs0.html

  * igt@sysfs_clients@fair-0:
    - shard-apl:          NOTRUN -> [SKIP][106] ([fdo#109271] / [i915#2994]) +1 similar issue
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl2/igt@sysfs_clients@fair-0.html

  * igt@sysfs_clients@fair-3:
    - shard-kbl:          NOTRUN -> [SKIP][107] ([fdo#109271] / [i915#2994]) +3 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl6/igt@sysfs_clients@fair-3.html
    - shard-tglb:         NOTRUN -> [SKIP][108] ([i915#2994])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb3/igt@sysfs_clients@fair-3.html

  * igt@sysfs_clients@split-50:
    - shard-glk:          NOTRUN -> [SKIP][109] ([fdo#109271] / [i915#2994])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk2/igt@sysfs_clients@split-50.html

  
#### Possible fixes ####

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [TIMEOUT][110] ([i915#2369] / [i915#3063] / [i915#3648]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-tglb3/igt@gem_eio@unwedge-stress.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb1/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [FAIL][112] ([i915#2846]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk3/igt@gem_exec_fair@basic-deadline.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk1/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglb:         [FAIL][114] ([i915#2842]) -> [PASS][115] +1 similar issue
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-tglb6/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-tglb2/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-glk:          [FAIL][116] ([i915#2842]) -> [PASS][117] +2 similar issues
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk4/igt@gem_exec_fair@basic-pace-solo@rcs0.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk5/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@gem_mmap_gtt@cpuset-big-copy-odd:
    - shard-glk:          [FAIL][118] ([i915#1888] / [i915#307]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk9/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk5/igt@gem_mmap_gtt@cpuset-big-copy-odd.html

  * igt@kms_big_fb@linear-32bpp-rotate-180:
    - shard-glk:          [DMESG-WARN][120] ([i915#118] / [i915#95]) -> [PASS][121] +1 similar issue
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-glk1/igt@kms_big_fb@linear-32bpp-rotate-180.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-glk6/igt@kms_big_fb@linear-32bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - shard-iclb:         [DMESG-WARN][122] ([i915#3621]) -> [PASS][123]
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-iclb1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-iclb4/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-kbl:          [DMESG-WARN][124] ([i915#180]) -> [PASS][125] +2 similar issues
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-kbl7/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl3/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-kbl:          [INCOMPLETE][126] ([i915#155] / [i915#180] / [i915#636]) -> [PASS][127]
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-kbl3/igt@kms_fbcon_fbt@fbc-suspend.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-kbl2/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
    - shard-apl:          [DMESG-WARN][128] ([i915#180]) -> [PASS][129] +2 similar issues
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/shard-apl1/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-kbl:          [DMESG-WARN][130] ([i915#180] / [i915#1982]) -> [PASS][131]
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10398/sh

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6060/index.html

[-- Attachment #1.2: Type: text/html, Size: 34049 bytes --]

[-- Attachment #2: Type: text/plain, Size: 154 bytes --]

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use Zbigniew Kempczyński
@ 2021-08-03 21:01   ` Dixit, Ashutosh
  2021-08-04  0:18     ` Dixit, Ashutosh
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-03 21:01 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Mon, 26 Jul 2021 12:59:36 -0700, Zbigniew Kempczyński wrote:
>
> +static inline uint64_t get_simple_ahnd(int fd, uint32_t ctx)
> +{
> +	bool do_relocs = gem_has_relocations(fd);
> +
> +	return do_relocs ? 0 : intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);

Should this function be e.g.

    return intel_allocator_open(fd, 0, do_relocs ?
                                INTEL_ALLOCATOR_RELOC : INTEL_ALLOCATOR_SIMPLE);

Similarly for others.

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner Zbigniew Kempczyński
@ 2021-08-03 23:07   ` Dixit, Ashutosh
  2021-08-04  6:19     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-03 23:07 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Mon, 26 Jul 2021 12:59:35 -0700, Zbigniew Kempczyński wrote:
>
> @@ -164,16 +171,34 @@ emit_recursive_batch(igt_spin_t *spin,
>	execbuf->buffer_count++;
>	cs = spin->batch;
>
> -	obj[BATCH].offset = addr;
> +	if (ahnd)
> +		addr = intel_allocator_alloc_with_strategy(ahnd, obj[BATCH].handle,
> +							   BATCH_SIZE, 0,
> +							   ALLOC_STRATEGY_LOW_TO_HIGH);

Is the strategy argument just for debug, so that spin offsets look different
from the offsets of other objects? Since everyone should be allocating from
the same allocator, which manages the offsets, this should probably not be
needed?

In any case, the patch is great, so this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 03/52] lib/igt_gt: Add passing ahnd as an argument to igt_hang
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 03/52] lib/igt_gt: Add passing ahnd as an argument to igt_hang Zbigniew Kempczyński
@ 2021-08-03 23:15   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-03 23:15 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Mon, 26 Jul 2021 12:59:37 -0700, Zbigniew Kempczyński wrote:
>
> Required as spinner is used, see gem_ringfill.c

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 04/52] lib/intel_batchbuffer: Ensure relocation code will be called
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 04/52] lib/intel_batchbuffer: Ensure relocation code will be called Zbigniew Kempczyński
@ 2021-08-03 23:34   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-03 23:34 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Mon, 26 Jul 2021 12:59:38 -0700, Zbigniew Kempczyński wrote:
>
> Currently we're not sure the relocation code will be called (presumed_offset
> == offset == 0), so enforce it. Passing presumed_offset and offset to the
> auxiliary functions prepares the code for switching to no-reloc mode.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-03 21:01   ` Dixit, Ashutosh
@ 2021-08-04  0:18     ` Dixit, Ashutosh
  2021-08-04  6:13       ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-04  0:18 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Tue, 03 Aug 2021 14:01:24 -0700, Dixit, Ashutosh wrote:
>
> On Mon, 26 Jul 2021 12:59:36 -0700, Zbigniew Kempczyński wrote:
> >
> > +static inline uint64_t get_simple_ahnd(int fd, uint32_t ctx)
> > +{
> > +	bool do_relocs = gem_has_relocations(fd);
> > +
> > +	return do_relocs ? 0 : intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
>
> Should this function be e.g.
>
>     return intel_allocator_open(fd, 0, do_relocs ?
>                                 INTEL_ALLOCATOR_RELOC : INTEL_ALLOCATOR_SIMPLE);
>
> Similarly for others.

The patch is fine but there was the above code (which I wrote) in
gem_linear_blits.c, hence I was wondering.

> +static inline uint64_t get_offset(uint64_t ahnd, uint32_t handle,
> +				  uint64_t size, uint64_t alignment)
> +{
> +	if (!ahnd)
> +		return 0;
> +
> +	return intel_allocator_alloc(ahnd, handle, size, alignment);
> +}
> +
> +static inline bool put_offset(uint64_t ahnd, uint32_t handle)
> +{
> +	if (!ahnd)
> +		return 0;
> +
> +	return intel_allocator_free(ahnd, handle);
> +}
> +

Also, the function names are too generic, with potential for namespace
conflicts - probably ahnd_get_offset/ahnd_put_offset?

In any case, this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-04  0:18     ` Dixit, Ashutosh
@ 2021-08-04  6:13       ` Zbigniew Kempczyński
  2021-08-05 18:53         ` Dixit, Ashutosh
  0 siblings, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-04  6:13 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Tue, Aug 03, 2021 at 05:18:32PM -0700, Dixit, Ashutosh wrote:
> On Tue, 03 Aug 2021 14:01:24 -0700, Dixit, Ashutosh wrote:
> >
> > On Mon, 26 Jul 2021 12:59:36 -0700, Zbigniew Kempczyński wrote:
> > >
> > > +static inline uint64_t get_simple_ahnd(int fd, uint32_t ctx)
> > > +{
> > > +	bool do_relocs = gem_has_relocations(fd);
> > > +
> > > +	return do_relocs ? 0 : intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
> >
> > Should this function be e.g.
> >
> >     return intel_allocator_open(fd, 0, do_relocs ?
> >                                 INTEL_ALLOCATOR_RELOC : INTEL_ALLOCATOR_SIMPLE);
> >
> > Similarly for others.
> 
> The patch is fine but there was the above code (which I wrote) in
> gem_linear_blits.c, hence I was wondering.

At the beginning - I'm sorry for the email length. It may shed some light on
how things were designed, why, and what issues we ran into.

Regarding gem_linear_blits - in this case it doesn't matter which allocator
you use. Here's a summary:

1. The reloc allocator just increments offsets, but does so in a multiprocess
   environment. It doesn't track which offsets are occupied.

2. The simple allocator keeps track of which offsets are occupied.

For gem_linear_blits on older gens the kernel will propose its own offsets
if we pass something it doesn't like. With the simple allocator we're on
newer gens with full ppgtt, so offsets don't overlap.

Even though the simple allocator is stateful, there are cases where its usage
is currently limited (that's why you see the reloc allocator in most of the
tests). The problem is precisely that it is stateful. Most tests create a batch
(gem object), use it and destroy it. From the allocator's perspective we
allocate an offset, then free it. In the next round we get the same offset for
another batch (gem object), so the kernel serializes execution until the
previous vma is freed. This leads to non-pipelined execution.

You can see the pattern in many tests - ahnd = get_reloc_ahnd(...), get an
offset for the scratch surface, then pass scratch_offset to some execution
function (sketched below). This lets us keep the same offset for the scratch
while getting new offsets for the batches. The best solution would be
something hybrid that proposes new (and not busy) bb offsets, but that has to
work in a multiprocess environment and I haven't figured out how to write it
yet. Libdrm handles pools of objects and reuses them if they are not busy,
but doing that across processes requires synchronization, so some additional
mechanism would have to be added to the allocator to handle this case.
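
Roughly, the pattern looks like this (just a sketch - submit_copy() is a
made-up stand-in for whatever execution helper a given test uses; the
allocator helpers are the ones added in this series):

	static void copy_rounds(int fd, const intel_ctx_t *ctx,
				uint32_t scratch, uint64_t scratch_size,
				int rounds)
	{
		uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
		/* Scratch keeps one fixed offset for the whole test... */
		uint64_t scratch_offset = get_offset(ahnd, scratch,
						     scratch_size, 0);

		for (int i = 0; i < rounds; i++) {
			uint32_t bb = gem_create(fd, 4096);
			/* ...while each batch gets a fresh offset from the
			 * reloc allocator, so consecutive execbufs do not
			 * serialize on vma reuse. */
			uint64_t bb_offset = get_offset(ahnd, bb, 4096, 0);

			submit_copy(fd, ctx, bb, bb_offset,
				    scratch, scratch_offset);
			gem_close(fd, bb);
		}

		put_ahnd(ahnd);
	}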

I'm still wondering whether to introduce .dependency_offset in spinner
creation when a .dependency handle is passed. Currently we have to use the
simple allocator to ensure we get the same offset for .dependency. As spinners
keep their batch handles until they are freed this is likely not a problem
now, but it may be in the future.

> 
> > +static inline uint64_t get_offset(uint64_t ahnd, uint32_t handle,
> > +				  uint64_t size, uint64_t alignment)
> > +{
> > +	if (!ahnd)
> > +		return 0;
> > +
> > +	return intel_allocator_alloc(ahnd, handle, size, alignment);
> > +}
> > +
> > +static inline bool put_offset(uint64_t ahnd, uint32_t handle)
> > +{
> > +	if (!ahnd)
> > +		return 0;
> > +
> > +	return intel_allocator_free(ahnd, handle);
> > +}
> > +
> 
> Also, the function names are too generic, with potential for namespace
> conflicts - probably ahnd_get_offset/ahnd_put_offset?

If there are more voices in favour of changing it I'll do it. At the moment
I wanted to have a few short functions. I thought about
ahnd_get_offset(ahnd, ...), but that would require ahnd to be a valid
allocator handle, asserting otherwise. get_offset(ahnd, ...) just returns some
offset - for relocations it may be 0 (as it currently is), while for a valid
ahnd it allocates an offset.

--
Zbigniew 

> 
> In any case, this is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner
  2021-08-03 23:07   ` Dixit, Ashutosh
@ 2021-08-04  6:19     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-04  6:19 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Tue, Aug 03, 2021 at 04:07:07PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:35 -0700, Zbigniew Kempczyński wrote:
> >
> > @@ -164,16 +171,34 @@ emit_recursive_batch(igt_spin_t *spin,
> >	execbuf->buffer_count++;
> >	cs = spin->batch;
> >
> > -	obj[BATCH].offset = addr;
> > +	if (ahnd)
> > +		addr = intel_allocator_alloc_with_strategy(ahnd, obj[BATCH].handle,
> > +							   BATCH_SIZE, 0,
> > +							   ALLOC_STRATEGY_LOW_TO_HIGH);
> 
> Is the strategy argument just for debug, so that spin offsets look different
> from the offsets of other objects? Since everyone should be allocating from
> the same allocator, which manages the offsets, this should probably not be
> needed?

I wanted to be consistent with the reloc version. Currently it randomizes
offsets within the first 32-bit gtt space (see the comment in
emit_recursive_batch()).

The simple allocator goes from high to low by default, so to stay consistent
I've added the LOW_TO_HIGH flag to allocate offsets in the low vm address
space.
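
For comparison, a sketch of the two calls (names as added in this series,
values illustrative):

	/* default simple-allocator behaviour: offsets come from the top of
	 * the vm downwards */
	offset = intel_allocator_alloc(ahnd, handle, size, 0);

	/* spinner batch: ask for low addresses, matching the reloc path that
	 * randomizes within the first 32 bits of gtt */
	offset = intel_allocator_alloc_with_strategy(ahnd, handle, size, 0,
						     ALLOC_STRATEGY_LOW_TO_HIGH);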

--
Zbigniew

> 
> In any case, the patch is great, so this is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy Zbigniew Kempczyński
@ 2021-08-04 23:26   ` Dixit, Ashutosh
  2021-08-04 23:44     ` Dixit, Ashutosh
  2021-08-05  7:28     ` Zbigniew Kempczyński
  0 siblings, 2 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-04 23:26 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Mon, 26 Jul 2021 12:59:40 -0700, Zbigniew Kempczyński wrote:
>
> @@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
>	uint32_t src_pitch, dst_pitch;
>	uint32_t dst_reloc_offset, src_reloc_offset;
>	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
> +	uint64_t batch_offset, src_offset, dst_offset;
>	const bool has_64b_reloc = gen >= 8;
>	int i = 0;
>
> +	batch_handle = gem_create(fd, 4096);
> +	if (ahnd) {
> +		src_offset = get_offset(ahnd, src_handle, src_size, 0);
> +		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
> +		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
> +	} else {
> +		src_offset = 16 << 20;
> +		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
> +		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);

For the !ahnd case, we are providing relocations, right? Do we still need to
provide these offsets, or can they all be 0?

> @@ -882,22 +902,29 @@ void igt_blitter_src_copy(int fd,
>
>	igt_assert(i <= ARRAY_SIZE(batch));
>
> -	batch_handle = gem_create(fd, 4096);
>	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
>
> -	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, dst_reloc_offset,
> +	fill_relocation(&relocs[0], dst_handle, dst_offset,
> +			dst_delta, dst_reloc_offset,
>			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
> -	fill_relocation(&relocs[1], src_handle, -1, src_delta, src_reloc_offset,
> +	fill_relocation(&relocs[1], src_handle, src_offset,
> +			src_delta, src_reloc_offset,
>			I915_GEM_DOMAIN_RENDER, 0);
>
> -	fill_object(&objs[0], dst_handle, 0, NULL, 0);
> -	fill_object(&objs[1], src_handle, 0, NULL, 0);
> -	fill_object(&objs[2], batch_handle, 0, relocs, 2);
> +	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
> +	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
> +	fill_object(&objs[2], batch_handle, batch_offset, relocs, 2);
>
> -	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
> +	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE | EXEC_OBJECT_WRITE;
>	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
>
> -	exec_blit(fd, objs, 3, gen, 0);
> +	if (ahnd) {
> +		objs[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> +		objs[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> +		objs[2].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> +	}

Should we add an "else" here, pull in the fill_relocation() calls and set the
relocation_count to 2 only if we have !ahnd? Maybe it's ok to leave as is too,
if the kernel ignores the reloc stuff when EXEC_OBJECT_PINNED is set.

> @@ -584,10 +601,17 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
>	obj[SCRATCH].handle = prime_fd_to_handle(i915, dmabuf);
>
>	obj[BATCH].handle = gem_create(i915, size);
> +	obj[BATCH].offset = get_offset(ahnd, obj[BATCH].handle, size, 0);
>	obj[BATCH].relocs_ptr = (uintptr_t)store;
> -	obj[BATCH].relocation_count = ARRAY_SIZE(store);
> +	obj[BATCH].relocation_count = !ahnd ? ARRAY_SIZE(store) : 0;
>	memset(store, 0, sizeof(store));
>
> +	if (ahnd) {
> +		obj[SCRATCH].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> +		obj[SCRATCH].offset = scratch_offset;
> +		obj[BATCH].flags = EXEC_OBJECT_PINNED;
> +	}

Why don't we compute scratch_offset in work() itself (rather than passing it
in from the callers)?

> @@ -602,8 +626,8 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
>		store[count].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
>		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
>		if (gen >= 8) {
> -			batch[++i] = 0;
> -			batch[++i] = 0;
> +			batch[++i] = scratch_offset + store[count].delta;
> +			batch[++i] = (scratch_offset + store[count].delta) >> 32;
>		} else if (gen >= 4) {
>			batch[++i] = 0;
>			batch[++i] = 0;

Should we add the offsets even for previous gens (gen < 8)? I am thinking
that at present the kernel supports relocs for gen < 12, but maybe later
kernels will discontinue them completely, so we'd need to fix the previous
gens all over again? Maybe too much?

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting Zbigniew Kempczyński
@ 2021-08-04 23:42   ` Dixit, Ashutosh
  2021-08-05  7:34     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-04 23:42 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Mon, 26 Jul 2021 12:59:41 -0700, Zbigniew Kempczyński wrote:
>
> We're proposing not overlapping offsets in both blitter copying functions
> so we can try to skip relocations.

OK, afaiu I915_EXEC_NO_RELOC is a hint, so when EXEC_OBJECT_PINNED is not
specified relocations will be applied when needed (and we are providing
presumed_offsets even in the relocation case):

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Petri Latvala <petri.latvala@intel.com>
> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  lib/intel_batchbuffer.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index d4a59e508..bbf8e0da2 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -711,7 +711,7 @@ static void exec_blit(int fd,
>	struct drm_i915_gem_execbuffer2 exec = {
>		.buffers_ptr = to_user_pointer(objs),
>		.buffer_count = count,
> -		.flags = gen >= 6 ? I915_EXEC_BLT : 0,
> +		.flags = gen >= 6 ? I915_EXEC_BLT : 0 | I915_EXEC_NO_RELOC,
>		.rsvd1 = ctx,
>	};
>
> --
> 2.26.0
>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-08-04 23:26   ` Dixit, Ashutosh
@ 2021-08-04 23:44     ` Dixit, Ashutosh
  2021-08-05  8:50       ` Zbigniew Kempczyński
  2021-08-05  7:28     ` Zbigniew Kempczyński
  1 sibling, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-04 23:44 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Wed, 04 Aug 2021 16:26:32 -0700, Dixit, Ashutosh wrote:
>
> On Mon, 26 Jul 2021 12:59:40 -0700, Zbigniew Kempczyński wrote:
> >
> > @@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
> >	uint32_t src_pitch, dst_pitch;
> >	uint32_t dst_reloc_offset, src_reloc_offset;
> >	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
> > +	uint64_t batch_offset, src_offset, dst_offset;
> >	const bool has_64b_reloc = gen >= 8;
> >	int i = 0;
> >
> > +	batch_handle = gem_create(fd, 4096);
> > +	if (ahnd) {
> > +		src_offset = get_offset(ahnd, src_handle, src_size, 0);
> > +		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
> > +		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
> > +	} else {
> > +		src_offset = 16 << 20;
> > +		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
> > +		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
>
> For the !ahnd case, we are providing relocations, right? Do we still need to
> provide these offsets, or can they all be 0?

This is probably needed because of I915_EXEC_NO_RELOC added in the next
patch (patch 07/52)?

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle Zbigniew Kempczyński
@ 2021-08-05  0:31   ` Dixit, Ashutosh
  2021-08-05  7:44     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  0:31 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:42 -0700, Zbigniew Kempczyński wrote:
>
> @@ -86,6 +90,21 @@ gen9_huc_copyfunc(int fd,
>	buf[i++] = MFX_WAIT;
>
>	memset(reloc, 0, sizeof(reloc));
> +
> +	if (ahnd) {
> +		obj[0].flags = EXEC_OBJECT_PINNED;
> +		obj[1].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> +		obj[2].flags = EXEC_OBJECT_PINNED;

Don't we need | EXEC_OBJECT_SUPPORTS_48B_ADDRESS here?

Otherwise this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported Zbigniew Kempczyński
@ 2021-08-05  0:33   ` Dixit, Ashutosh
  2021-08-05  7:46     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  0:33 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:43 -0700, Zbigniew Kempczyński wrote:
>

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

Though it looks like this is already merged.

> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Petri Latvala <petri.latvala@intel.com>
> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> ---
>  tests/i915/gem_bad_reloc.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/tests/i915/gem_bad_reloc.c b/tests/i915/gem_bad_reloc.c
> index 3ca0f3452..0e9c4c79c 100644
> --- a/tests/i915/gem_bad_reloc.c
> +++ b/tests/i915/gem_bad_reloc.c
> @@ -195,6 +195,7 @@ igt_main
>		/* Check if relocations supported by platform */
>		igt_require(gem_has_relocations(fd));
>		gem_require_blitter(fd);
> +		igt_require(gem_has_relocations(fd));
>	}
>
>	igt_subtest("negative-reloc")
> --
> 2.26.0
>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-05  2:07   ` Dixit, Ashutosh
  2021-08-05  8:02     ` Zbigniew Kempczyński
  2021-08-05  8:14     ` Zbigniew Kempczyński
  0 siblings, 2 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  2:07 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:44 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.
>
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Petri Latvala <petri.latvala@intel.com>
> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> ---
>  tests/i915/gem_busy.c | 35 +++++++++++++++++++++++++++++++----
>  1 file changed, 31 insertions(+), 4 deletions(-)
>
> diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
> index f0fca0e8a..51ec5ad04 100644
> --- a/tests/i915/gem_busy.c
> +++ b/tests/i915/gem_busy.c
> @@ -108,6 +108,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
>	uint32_t handle[3];
>	uint32_t read, write;
>	uint32_t active;
> +	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
>	unsigned i;
>
>	handle[TEST] = gem_create(fd, 4096);
> @@ -117,6 +118,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
>	/* Create a long running batch which we can use to hog the GPU */
>	handle[BUSY] = gem_create(fd, 4096);
>	spin = igt_spin_new(fd,
> +			    .ahnd = ahnd,
>			    .ctx = ctx,
>			    .engine = e->flags,
>			    .dependency = handle[BUSY]);

Missing put_ahnd.

> @@ -428,6 +442,7 @@ igt_main
>
>	igt_subtest_group {
>		igt_fixture {
> +			intel_allocator_multiprocess_start();
>			igt_fork_hang_detector(fd);
>		}
>
> @@ -445,6 +460,21 @@ igt_main
>			}
>		}
>

Just above here is basic() which doesn't have a fork. Is it ok to do
intel_allocator_multiprocess_start/stop when we don't have a fork? If yes,
then can we _always_ do intel_allocator_multiprocess_start/stop rather than
only when we have fork? Thanks.

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 11/52] tests/gem_create: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 11/52] tests/gem_create: " Zbigniew Kempczyński
@ 2021-08-05  2:14   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  2:14 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:45 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 12/52] tests/gem_ctx_engines: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 12/52] tests/gem_ctx_engines: " Zbigniew Kempczyński
@ 2021-08-05  2:40   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  2:40 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:46 -0700, Zbigniew Kempczyński wrote:
>
> @@ -344,9 +352,11 @@ static void execute_oneforall(int i915)
>		.size = sizeof(engines),
>	};
>	const struct intel_execution_engine2 *e;
> +	uint64_t ahnd;
>
>	for_each_physical_engine(i915, e) {
>		param.ctx_id = gem_context_create(i915);
> +		ahnd = get_reloc_ahnd(i915, param.ctx_id);

Ok, the allocator is tied to the context (and the context's vm):

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: " Zbigniew Kempczyński
@ 2021-08-05  3:06   ` Dixit, Ashutosh
  2021-08-05  8:46     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  3:06 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:47 -0700, Zbigniew Kempczyński wrote:

With the couple of nits below addressed, this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

> @@ -345,10 +354,14 @@ static void close_race(int i915)
>	const intel_ctx_t **ctx;
>	uint32_t *ctx_id;
>	igt_spin_t *spin;
> +	uint64_t ahnd;
>
>	/* Check we can execute a polling spinner */
>	base_ctx = intel_ctx_create(i915, NULL);
> -	igt_spin_free(i915, igt_spin_new(i915, .ctx = base_ctx,
> +	ahnd = get_reloc_ahnd(i915, base_ctx->id);
> +	igt_spin_free(i915, igt_spin_new(i915,
> +					 .ahnd = ahnd,
> +					 .ctx = base_ctx,
>					 .flags = IGT_SPIN_POLL_RUN));

Missing put_ahnd here I think.

> @@ -403,6 +419,7 @@ static void close_race(int i915)
>		}
>
>		igt_spin_free(i915, spin);
> +		put_ahnd(ahnd);

nit: prefer to move it next to intel_ctx_destroy.

> @@ -474,11 +491,22 @@ igt_main
>	igt_subtest("basic-nohangcheck")
>		nohangcheck_hostile(fd);
>
> -	igt_subtest("basic-close-race")
> -		close_race(fd);
> +	igt_subtest_group {
> +		igt_fixture {
> +			intel_allocator_multiprocess_start();
> +		}
> +
> +		igt_subtest("basic-close-race")
> +				close_race(fd);

indent

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 14/52] tests/gem_ctx_freq: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 14/52] tests/gem_ctx_freq: " Zbigniew Kempczyński
@ 2021-08-05  6:07   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  6:07 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:48 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: " Zbigniew Kempczyński
@ 2021-08-05  7:18   ` Dixit, Ashutosh
  2021-08-05 10:19     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05  7:18 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Mon, 26 Jul 2021 12:59:50 -0700, Zbigniew Kempczyński wrote:
>
> diff --git a/tests/i915/gem_ctx_param.c b/tests/i915/gem_ctx_param.c
> index c795f1b45..11bc08e36 100644
> --- a/tests/i915/gem_ctx_param.c
> +++ b/tests/i915/gem_ctx_param.c
> @@ -165,6 +165,7 @@ static void test_vm(int i915)
>	int err;
>	uint32_t parent, child;
>	igt_spin_t *spin;
> +	uint64_t ahnd;
>
>	/*
>	 * Proving 2 contexts share the same GTT is quite tricky as we have no
> @@ -190,7 +191,8 @@ static void test_vm(int i915)
>
>	/* Test that we can't set the VM after we've done an execbuf */
>	arg.ctx_id = gem_context_create(i915);
> -	spin = igt_spin_new(i915, .ctx_id = arg.ctx_id);
> +	ahnd = get_reloc_ahnd(i915, arg.ctx_id);
> +	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx_id = arg.ctx_id);
>	igt_spin_free(i915, spin);
>	arg.value = gem_vm_create(i915);
>	err = __gem_context_set_param(i915, &arg);
> @@ -202,7 +204,7 @@ static void test_vm(int i915)
>	child = gem_context_create(i915);
>
>	/* Create a background spinner to keep the engines busy */
> -	spin = igt_spin_new(i915);
> +	spin = igt_spin_new(i915, .ahnd = ahnd);
>	for (int i = 0; i < 16; i++) {
>		spin->execbuf.rsvd1 = gem_context_create(i915);
>		__gem_context_set_priority(i915, spin->execbuf.rsvd1, 1023);
> @@ -259,6 +261,7 @@ static void test_vm(int i915)
>	igt_spin_free(i915, spin);
>	gem_sync(i915, batch.handle);
>	gem_close(i915, batch.handle);
> +	put_ahnd(ahnd);

I think this should work even though the context against which ahnd has been
created was destroyed after the first spin_free, but please check:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-08-04 23:26   ` Dixit, Ashutosh
  2021-08-04 23:44     ` Dixit, Ashutosh
@ 2021-08-05  7:28     ` Zbigniew Kempczyński
  2021-08-05 19:47       ` Dixit, Ashutosh
  1 sibling, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  7:28 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Wed, Aug 04, 2021 at 04:26:32PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:40 -0700, Zbigniew Kempczyński wrote:
> >
> > @@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
> >	uint32_t src_pitch, dst_pitch;
> >	uint32_t dst_reloc_offset, src_reloc_offset;
> >	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
> > +	uint64_t batch_offset, src_offset, dst_offset;
> >	const bool has_64b_reloc = gen >= 8;
> >	int i = 0;
> >
> > +	batch_handle = gem_create(fd, 4096);
> > +	if (ahnd) {
> > +		src_offset = get_offset(ahnd, src_handle, src_size, 0);
> > +		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
> > +		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
> > +	} else {
> > +		src_offset = 16 << 20;
> > +		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
> > +		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
> 
> For the !ahnd case, we are providing relocations, right? Do we still need to
> provide these offsets, or can they all be 0?

Yes, we're providing relocations, but we try to guess the offsets to avoid
them. If we guess valid offsets they will be used; if we miss and the kernel
decides to migrate the vma(s), the kernel will relocate and fill in the
offsets within the bb regardless of NO_RELOC (it's just a hint - if the vmas
are not moved and you filled the bb with them, the relocations are simply
skipped).
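
In other words, a sketch of the guessed-offsets path (plain i915 uAPI structs,
illustrative values, reusing the object/offset names from the hunks above;
execbuf stands for the drm_i915_gem_execbuffer2 that is eventually submitted):

	struct drm_i915_gem_relocation_entry reloc = {
		.target_handle = dst_handle,
		.offset = dst_reloc_offset,	/* where the address sits in the bb */
		.delta = dst_delta,
		.presumed_offset = dst_offset,	/* our guess, also written into the bb */
		.read_domains = I915_GEM_DOMAIN_RENDER,
		.write_domain = I915_GEM_DOMAIN_RENDER,
	};

	objs[0].offset = dst_offset;		/* same guess in the exec object */
	execbuf.flags |= I915_EXEC_NO_RELOC;	/* hint: bb already holds the addresses */

	/* If the kernel keeps dst at dst_offset nothing is patched; if it has
	 * to move the vma it performs the relocation despite NO_RELOC. */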

> 
> > @@ -882,22 +902,29 @@ void igt_blitter_src_copy(int fd,
> >
> >	igt_assert(i <= ARRAY_SIZE(batch));
> >
> > -	batch_handle = gem_create(fd, 4096);
> >	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
> >
> > -	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, dst_reloc_offset,
> > +	fill_relocation(&relocs[0], dst_handle, dst_offset,
> > +			dst_delta, dst_reloc_offset,
> >			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
> > -	fill_relocation(&relocs[1], src_handle, -1, src_delta, src_reloc_offset,
> > +	fill_relocation(&relocs[1], src_handle, src_offset,
> > +			src_delta, src_reloc_offset,
> >			I915_GEM_DOMAIN_RENDER, 0);
> >
> > -	fill_object(&objs[0], dst_handle, 0, NULL, 0);
> > -	fill_object(&objs[1], src_handle, 0, NULL, 0);
> > -	fill_object(&objs[2], batch_handle, 0, relocs, 2);
> > +	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
> > +	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
> > +	fill_object(&objs[2], batch_handle, batch_offset, relocs, 2);
> >
> > -	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
> > +	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE | EXEC_OBJECT_WRITE;
> >	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
> >
> > -	exec_blit(fd, objs, 3, gen, 0);
> > +	if (ahnd) {
> > +		objs[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > +		objs[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > +		objs[2].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > +	}
> 
> Should we add an "else" here, pull in the fill_relocation() calls and set the
> relocation_count to 2 only if we have !ahnd? Maybe it's ok to leave as is too,
> if the kernel ignores the reloc stuff when EXEC_OBJECT_PINNED is set.

We may pass the relocs data to the kernel, but the check on no-reloc gens uses
the .relocation_count field. That's why we need to provide it zeroed.

If you're asking why I haven't added an else - I'm just a little bit lazy and
wanted to avoid an additional else {} block. But if you think the code would
be more readable I will change it.
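
If we added the else it would look roughly like this (sketch only, reusing the
objs/relocs names from the hunk above):

	if (ahnd) {
		/* softpin: fixed offsets from the allocator; relocation_count
		 * must be 0 because no-reloc gens reject execbufs that still
		 * advertise relocations */
		objs[2].offset = batch_offset;
		objs[2].flags |= EXEC_OBJECT_PINNED |
				 EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
		objs[2].relocation_count = 0;
	} else {
		objs[2].relocs_ptr = to_user_pointer(relocs);
		objs[2].relocation_count = 2;
	}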

> 
> > @@ -584,10 +601,17 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
> >	obj[SCRATCH].handle = prime_fd_to_handle(i915, dmabuf);
> >
> >	obj[BATCH].handle = gem_create(i915, size);
> > +	obj[BATCH].offset = get_offset(ahnd, obj[BATCH].handle, size, 0);
> >	obj[BATCH].relocs_ptr = (uintptr_t)store;
> > -	obj[BATCH].relocation_count = ARRAY_SIZE(store);
> > +	obj[BATCH].relocation_count = !ahnd ? ARRAY_SIZE(store) : 0;
> >	memset(store, 0, sizeof(store));
> >
> > +	if (ahnd) {
> > +		obj[SCRATCH].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> > +		obj[SCRATCH].offset = scratch_offset;
> > +		obj[BATCH].flags = EXEC_OBJECT_PINNED;
> > +	}
> 
> Why don't we compute scratch_offset in work() itself (rather than passing it
> in from the callers)?

In work() we're not aware what scratch object size is. So there's hard to
call get_offset() with size. So I need to pass size or offset, probably
experience with other tests points me to pass offset instead size.
Generally if we have some scratch and we want to use it within pipelined
executions in same context we need to provide same offset for scratch,
but offsets which are not busy for bb. Second is problematic with Simple allocator
(see one of my previous email when I describe this problem) because scratch
will have same offset - we want this, but bb will also have same offset.
Using Reloc allocator at least gives us next offsets for bb, but we have
to avoid using it twice or more for scratch).
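
For reference, the caller-side pattern I have in mind looks roughly like this
(hypothetical wrapper; the work() signature taking ahnd/scratch_offset is assumed):

	static void run_ring(int i915, int dmabuf, const intel_ctx_t *ctx,
			     unsigned int ring, uint32_t scratch,
			     uint64_t scratch_size)
	{
		uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
		/* one stable offset for the scratch, reused by every execbuf... */
		uint64_t scratch_offset = get_offset(ahnd, scratch, scratch_size, 0);

		/* ...while each bb created inside work() gets a fresh offset */
		work(i915, dmabuf, ctx, ring, ahnd, scratch_offset);

		put_offset(ahnd, scratch);
		put_ahnd(ahnd);
	}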

> 
> > @@ -602,8 +626,8 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
> >		store[count].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
> >		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
> >		if (gen >= 8) {
> > -			batch[++i] = 0;
> > -			batch[++i] = 0;
> > +			batch[++i] = scratch_offset + store[count].delta;
> > +			batch[++i] = (scratch_offset + store[count].delta) >> 32;
> >		} else if (gen >= 4) {
> >			batch[++i] = 0;
> >			batch[++i] = 0;
> 
> Should we add the offset's even for previous gen's (gen < 8)? Because I am
> thinking at present kernel is supporting reloc's for gen < 12 but maybe
> later kernels will discontinue them completely so we'll need to fix the
> previous gen's all over again? Maybe too much?

On older gens you'll definitely hit a relocation here (presumed_offset == -1
and the NO_RELOC flag is absent).

Newer kernels cannot remove relocations because on gens without ppgtt you're
not able to predict which offsets are busy. So passing an offset here does
nothing - the relocation will overwrite it.

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting
  2021-08-04 23:42   ` Dixit, Ashutosh
@ 2021-08-05  7:34     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  7:34 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Wed, Aug 04, 2021 at 04:42:30PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:41 -0700, Zbigniew Kempczyński wrote:
> >
> > We're proposing not overlapping offsets in both blitter copying functions
> > so we can try to skip relocations.
> 
> OK, afaiu I915_EXEC_NO_RELOC is a hint so when I915_EXEC_IS_PINNED is not
> specified relocations will be applied when needed (and we are providing
> presumed_offset's even in the relocation case):

If you use NO_RELOC, relocations will be applied only when the kernel decides
to move your object to a different offset; otherwise no relocation happens. So
passing NO_RELOC, presumed_offset == -1 and obj.offset == 0x40000 won't relocate
unless the kernel moves the object elsewhere. So if you provide a batch that
doesn't already contain 0x40000 you can expect an invalid result or a hang.
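
Put differently (fragment with assumed values; batch[], i, gen, obj and execbuf
are taken to be declared as in the surrounding code):

	const uint64_t presumed = 0x40000;

	/* The address must already be baked into the bb... */
	batch[i++] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
	batch[i++] = presumed;			/* lower 32 bits */
	batch[i++] = presumed >> 32;		/* upper 32 bits */
	batch[i++] = 0xdeadbeef;		/* value to store */
	batch[i++] = MI_BATCH_BUFFER_END;

	/* ...because with NO_RELOC the kernel only patches it if the object
	 * ends up somewhere other than the offset we promised. */
	obj.offset = presumed;
	execbuf.flags |= I915_EXEC_NO_RELOC;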

--
Zbigniew

> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> 
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Cc: Petri Latvala <petri.latvala@intel.com>
> > Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  lib/intel_batchbuffer.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> > index d4a59e508..bbf8e0da2 100644
> > --- a/lib/intel_batchbuffer.c
> > +++ b/lib/intel_batchbuffer.c
> > @@ -711,7 +711,7 @@ static void exec_blit(int fd,
> >	struct drm_i915_gem_execbuffer2 exec = {
> >		.buffers_ptr = to_user_pointer(objs),
> >		.buffer_count = count,
> > -		.flags = gen >= 6 ? I915_EXEC_BLT : 0,
> > +		.flags = gen >= 6 ? I915_EXEC_BLT : 0 | I915_EXEC_NO_RELOC,
> >		.rsvd1 = ctx,
> >	};
> >
> > --
> > 2.26.0
> >

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle
  2021-08-05  0:31   ` Dixit, Ashutosh
@ 2021-08-05  7:44     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  7:44 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Wed, Aug 04, 2021 at 05:31:01PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:42 -0700, Zbigniew Kempczyński wrote:
> >
> > @@ -86,6 +90,21 @@ gen9_huc_copyfunc(int fd,
> >	buf[i++] = MFX_WAIT;
> >
> >	memset(reloc, 0, sizeof(reloc));
> > +
> > +	if (ahnd) {
> > +		obj[0].flags = EXEC_OBJECT_PINNED;
> > +		obj[1].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> > +		obj[2].flags = EXEC_OBJECT_PINNED;
> 
> Don't need | EXEC_OBJECT_SUPPORTS_48B_ADDRESS here?
> 
> Otherwise this is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

gen9_emit_huc_virtual_addr_state() currently sets only the first dword with the
offset (the other two buf[(*i)++] = 0; dwords are left intact).

Since this is only used within gem_huc_copy.c and I know the offsets passed start
from 0x0 and won't exceed 32 bits, I left it without 48B_ADDRESS set.

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported
  2021-08-05  0:33   ` Dixit, Ashutosh
@ 2021-08-05  7:46     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  7:46 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Wed, Aug 04, 2021 at 05:33:38PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:43 -0700, Zbigniew Kempczyński wrote:
> >
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> 
> Though looks like this is already merged.

Eh, I walked through my failure list before I merged Tejas' patch.
Thanks for catching this, I'll remove this patch from the series.

--
Zbigniew
> 
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Cc: Petri Latvala <petri.latvala@intel.com>
> > Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > ---
> >  tests/i915/gem_bad_reloc.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/tests/i915/gem_bad_reloc.c b/tests/i915/gem_bad_reloc.c
> > index 3ca0f3452..0e9c4c79c 100644
> > --- a/tests/i915/gem_bad_reloc.c
> > +++ b/tests/i915/gem_bad_reloc.c
> > @@ -195,6 +195,7 @@ igt_main
> >		/* Check if relocations supported by platform */
> >		igt_require(gem_has_relocations(fd));
> >		gem_require_blitter(fd);
> > +		igt_require(gem_has_relocations(fd));
> >	}
> >
> >	igt_subtest("negative-reloc")
> > --
> > 2.26.0
> >

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator
  2021-08-05  2:07   ` Dixit, Ashutosh
@ 2021-08-05  8:02     ` Zbigniew Kempczyński
  2021-08-05 21:14       ` Dixit, Ashutosh
  2021-08-05  8:14     ` Zbigniew Kempczyński
  1 sibling, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  8:02 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Wed, Aug 04, 2021 at 07:07:41PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:44 -0700, Zbigniew Kempczyński wrote:
> >
> > For newer gens we're not able to rely on relocations. Adopt to use
> > offsets acquired from the allocator.
> >
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Cc: Petri Latvala <petri.latvala@intel.com>
> > Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > ---
> >  tests/i915/gem_busy.c | 35 +++++++++++++++++++++++++++++++----
> >  1 file changed, 31 insertions(+), 4 deletions(-)
> >
> > diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
> > index f0fca0e8a..51ec5ad04 100644
> > --- a/tests/i915/gem_busy.c
> > +++ b/tests/i915/gem_busy.c
> > @@ -108,6 +108,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> >	uint32_t handle[3];
> >	uint32_t read, write;
> >	uint32_t active;
> > +	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
> >	unsigned i;
> >
> >	handle[TEST] = gem_create(fd, 4096);
> > @@ -117,6 +118,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> >	/* Create a long running batch which we can use to hog the GPU */
> >	handle[BUSY] = gem_create(fd, 4096);
> >	spin = igt_spin_new(fd,
> > +			    .ahnd = ahnd,
> >			    .ctx = ctx,
> >			    .engine = e->flags,
> >			    .dependency = handle[BUSY]);
> 
> Missing put_ahnd.

Good catch.

> 
> > @@ -428,6 +442,7 @@ igt_main
> >
> >	igt_subtest_group {
> >		igt_fixture {
> > +			intel_allocator_multiprocess_start();
> >			igt_fork_hang_detector(fd);
> >		}
> >
> > @@ -445,6 +460,21 @@ igt_main
> >			}
> >		}
> >
> 
> Just above here is basic() which doesn't have a fork. Is it ok to do
> intel_allocator_multiprocess_start/stop when we don't have a fork? If yes,
> then can we _always_ do intel_allocator_multiprocess_start/stop rather than
> only when we have fork? Thanks.

intel_allocator_multiprocess_start() creates an allocator thread which serves
alloc/free offset requests from the children (igt_fork). If you use alloc/free within
the same process (the one from which the thread was spawned), the internal structure is
mutexed and no IPC is involved. So the only consequence here is an additional thread
in the system/memory (which does nothing for the basic() tests). It will be stopped
with intel_allocator_multiprocess_stop().

But for purity the test should work without additional dependencies, so I'll fix
this - it will be sent in v4.
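
The usual shape of this is (sketch, mirroring what the patch already does in
the fixtures):

	igt_subtest_group {
		igt_fixture {
			/* start the allocator message loop before forking */
			intel_allocator_multiprocess_start();
			igt_fork_hang_detector(fd);
		}

		/* subtests using igt_fork() + get_reloc_ahnd()/put_ahnd() */

		igt_fixture {
			igt_stop_hang_detector();
			intel_allocator_multiprocess_stop();
		}
	}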

--
Zbigniew 

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator
  2021-08-05  2:07   ` Dixit, Ashutosh
  2021-08-05  8:02     ` Zbigniew Kempczyński
@ 2021-08-05  8:14     ` Zbigniew Kempczyński
  1 sibling, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  8:14 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Wed, Aug 04, 2021 at 07:07:41PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:44 -0700, Zbigniew Kempczyński wrote:
> >
> > For newer gens we're not able to rely on relocations. Adopt to use
> > offsets acquired from the allocator.
> >
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Cc: Petri Latvala <petri.latvala@intel.com>
> > Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > ---
> >  tests/i915/gem_busy.c | 35 +++++++++++++++++++++++++++++++----
> >  1 file changed, 31 insertions(+), 4 deletions(-)
> >
> > diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
> > index f0fca0e8a..51ec5ad04 100644
> > --- a/tests/i915/gem_busy.c
> > +++ b/tests/i915/gem_busy.c
> > @@ -108,6 +108,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> >	uint32_t handle[3];
> >	uint32_t read, write;
> >	uint32_t active;
> > +	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
> >	unsigned i;
> >
> >	handle[TEST] = gem_create(fd, 4096);
> > @@ -117,6 +118,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> >	/* Create a long running batch which we can use to hog the GPU */
> >	handle[BUSY] = gem_create(fd, 4096);
> >	spin = igt_spin_new(fd,
> > +			    .ahnd = ahnd,
> >			    .ctx = ctx,
> >			    .engine = e->flags,
> >			    .dependency = handle[BUSY]);
> 
> Missing put_ahnd.
> 
> > @@ -428,6 +442,7 @@ igt_main
> >
> >	igt_subtest_group {
> >		igt_fixture {
> > +			intel_allocator_multiprocess_start();
> >			igt_fork_hang_detector(fd);
> >		}
> >
> > @@ -445,6 +460,21 @@ igt_main
> >			}
> >		}
> >
> 
> Just above here is basic() which doesn't have a fork. Is it ok to do
> intel_allocator_multiprocess_start/stop when we don't have a fork? If yes,
> then can we _always_ do intel_allocator_multiprocess_start/stop rather than
> only when we have fork? Thanks.

It seems that basic() is called within the same dynamic subtest "busy", so migrating
it outside of this (there's all() with igt_fork inside "busy") would split
that subtest. I'm not sure we want to do this.

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: Adopt to use allocator
  2021-08-05  3:06   ` Dixit, Ashutosh
@ 2021-08-05  8:46     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  8:46 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Wed, Aug 04, 2021 at 08:06:11PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:47 -0700, Zbigniew Kempczyński wrote:
> 
> With the couple of nits below addressed, this is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> 
> > @@ -345,10 +354,14 @@ static void close_race(int i915)
> >	const intel_ctx_t **ctx;
> >	uint32_t *ctx_id;
> >	igt_spin_t *spin;
> > +	uint64_t ahnd;
> >
> >	/* Check we can execute a polling spinner */
> >	base_ctx = intel_ctx_create(i915, NULL);
> > -	igt_spin_free(i915, igt_spin_new(i915, .ctx = base_ctx,
> > +	ahnd = get_reloc_ahnd(i915, base_ctx->id);
> > +	igt_spin_free(i915, igt_spin_new(i915,
> > +					 .ahnd = ahnd,
> > +					 .ctx = base_ctx,
> >					 .flags = IGT_SPIN_POLL_RUN));
> 
> Missing put_ahnd here I think.

We may free the ahnd here (internally the allocator structure will be freed
once its refcount drops to 0), and then the first child which calls get_reloc_ahnd()
will recreate it. Or we may keep the allocator structure until the
end of the test - I will put put_ahnd() there.
 
> 
> > @@ -403,6 +419,7 @@ static void close_race(int i915)
> >		}
> >
> >		igt_spin_free(i915, spin);
> > +		put_ahnd(ahnd);
> 
> nit: prefer to move it next to intel_ctx_destroy.

I need to add it there, not move it. put_ahnd() in the children decrements
the refcount of the allocator structure, and the last put_ahnd()
in the main process then frees it.
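
Roughly (simplified sketch of the refcounting being described; ncpus etc. are
illustrative):

	uint64_t ahnd = get_reloc_ahnd(i915, base_ctx->id);	/* refcount = 1 */

	igt_fork(child, ncpus) {
		/* each child takes its own reference over IPC */
		uint64_t child_ahnd = get_reloc_ahnd(i915, base_ctx->id);

		/* ... spinners / execbufs using child_ahnd ... */

		put_ahnd(child_ahnd);
	}
	igt_waitchildren();

	put_ahnd(ahnd);		/* last reference - allocator state is freed */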

> 
> > @@ -474,11 +491,22 @@ igt_main
> >	igt_subtest("basic-nohangcheck")
> >		nohangcheck_hostile(fd);
> >
> > -	igt_subtest("basic-close-race")
> > -		close_race(fd);
> > +	igt_subtest_group {
> > +		igt_fixture {
> > +			intel_allocator_multiprocess_start();
> > +		}
> > +
> > +		igt_subtest("basic-close-race")
> > +				close_race(fd);
> 
> indent

Ack.

Thanks for review!

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-08-04 23:44     ` Dixit, Ashutosh
@ 2021-08-05  8:50       ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05  8:50 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Wed, Aug 04, 2021 at 04:44:20PM -0700, Dixit, Ashutosh wrote:
> On Wed, 04 Aug 2021 16:26:32 -0700, Dixit, Ashutosh wrote:
> >
> > On Mon, 26 Jul 2021 12:59:40 -0700, Zbigniew Kempczyński wrote:
> > >
> > > @@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
> > >	uint32_t src_pitch, dst_pitch;
> > >	uint32_t dst_reloc_offset, src_reloc_offset;
> > >	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
> > > +	uint64_t batch_offset, src_offset, dst_offset;
> > >	const bool has_64b_reloc = gen >= 8;
> > >	int i = 0;
> > >
> > > +	batch_handle = gem_create(fd, 4096);
> > > +	if (ahnd) {
> > > +		src_offset = get_offset(ahnd, src_handle, src_size, 0);
> > > +		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
> > > +		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
> > > +	} else {
> > > +		src_offset = 16 << 20;
> > > +		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
> > > +		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
> >
> > For the !ahnd case, we are providing relocations right? We still need to
> > provide these offsets or they can all be 0?
> 
> This is probably needed because of I915_EXEC_NO_RELOC added in the next
> patch (Patch 07/20)?

Yes, we want to avoid relocations. We pass offsets which should be fine, but if
the kernel decides they are not, it will relocate. And if our guesses do hold,
we need to have the bb already patched with them.

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: Adopt to use allocator
  2021-08-05  7:18   ` Dixit, Ashutosh
@ 2021-08-05 10:19     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-05 10:19 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Thu, Aug 05, 2021 at 12:18:18AM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:50 -0700, Zbigniew Kempczyński wrote:
> >
> > diff --git a/tests/i915/gem_ctx_param.c b/tests/i915/gem_ctx_param.c
> > index c795f1b45..11bc08e36 100644
> > --- a/tests/i915/gem_ctx_param.c
> > +++ b/tests/i915/gem_ctx_param.c
> > @@ -165,6 +165,7 @@ static void test_vm(int i915)
> >	int err;
> >	uint32_t parent, child;
> >	igt_spin_t *spin;
> > +	uint64_t ahnd;
> >
> >	/*
> >	 * Proving 2 contexts share the same GTT is quite tricky as we have no
> > @@ -190,7 +191,8 @@ static void test_vm(int i915)
> >
> >	/* Test that we can't set the VM after we've done an execbuf */
> >	arg.ctx_id = gem_context_create(i915);
> > -	spin = igt_spin_new(i915, .ctx_id = arg.ctx_id);
> > +	ahnd = get_reloc_ahnd(i915, arg.ctx_id);
> > +	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx_id = arg.ctx_id);
> >	igt_spin_free(i915, spin);
> >	arg.value = gem_vm_create(i915);
> >	err = __gem_context_set_param(i915, &arg);
> > @@ -202,7 +204,7 @@ static void test_vm(int i915)
> >	child = gem_context_create(i915);
> >
> >	/* Create a background spinner to keep the engines busy */
> > -	spin = igt_spin_new(i915);
> > +	spin = igt_spin_new(i915, .ahnd = ahnd);
> >	for (int i = 0; i < 16; i++) {
> >		spin->execbuf.rsvd1 = gem_context_create(i915);
> >		__gem_context_set_priority(i915, spin->execbuf.rsvd1, 1023);
> > @@ -259,6 +261,7 @@ static void test_vm(int i915)
> >	igt_spin_free(i915, spin);
> >	gem_sync(i915, batch.handle);
> >	gem_close(i915, batch.handle);
> > +	put_ahnd(ahnd);
> 
> I think this should work even thought the context against which ahnd has
> been created has been destroyed after the first spin_free, but please
> check:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

Yes, the ahnd is only an offset provider for the spinners here. But to avoid
confusion I'm going to delete the first ahnd (for arg.ctx_id) and recreate an
ahnd for the default context before starting the second spinner.

As this is a cosmetic change I dare to keep your r-b.
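
I.e. something like this (sketch of the planned v4 change, not the current code):

	ahnd = get_reloc_ahnd(i915, arg.ctx_id);
	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx_id = arg.ctx_id);
	igt_spin_free(i915, spin);
	put_ahnd(ahnd);

	/* fresh ahnd for the default context before the background spinner */
	ahnd = get_reloc_ahnd(i915, 0);
	spin = igt_spin_new(i915, .ahnd = ahnd);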

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-04  6:13       ` Zbigniew Kempczyński
@ 2021-08-05 18:53         ` Dixit, Ashutosh
  2021-08-06  1:15           ` Dixit, Ashutosh
  2021-08-06  5:35           ` Zbigniew Kempczyński
  0 siblings, 2 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05 18:53 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Tue, 03 Aug 2021 23:13:41 -0700, Zbigniew Kempczyński wrote:
>
> On Tue, Aug 03, 2021 at 05:18:32PM -0700, Dixit, Ashutosh wrote:
> > On Tue, 03 Aug 2021 14:01:24 -0700, Dixit, Ashutosh wrote:
> > >
> > > On Mon, 26 Jul 2021 12:59:36 -0700, Zbigniew Kempczyński wrote:
> > > >
> > > > +static inline uint64_t get_simple_ahnd(int fd, uint32_t ctx)
> > > > +{
> > > > +	bool do_relocs = gem_has_relocations(fd);
> > > > +
> > > > +	return do_relocs ? 0 : intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
> > >
> > > Should this function be e.g.
> > >
> > >     return intel_allocator_open(fd, 0, do_relocs ?
> > >                                 INTEL_ALLOCATOR_RELOC : INTEL_ALLOCATOR_SIMPLE);
> > >
> > > Similarly for others.
> >
> > The patch is fine but there was the above code (which I wrote) in
> > gem_linear_blits.c, hence I was wondering.
>
> On the beginning - I'm sorry for email length. It may give some light how
> things were designed, why and what issues with that we got.
>
> Regarding gem_linear_blits - in this case doesn't matter which allocator
> you'll use. There's summary:
>
> 1. Reloc allocator just increments offsets but is doing this in multiprocess
>    environment. It doesn't track which offsets are occupied.
>
> 2. Simple takes care of which offsets are occupied.
>
> For gem_linear_blits on older gens kernel will propose its own offsets
> if we pass something it don't like. For simple we're on newer gens
> and we got full ppgtt so we don't overlap with offsets.

Yes I think I understand everything above.

> Even if Simple is stateful we got some case in which its usage
> is currently limited (so you can see using reloc in most of the
> tests). Problem is with...  it is stateful. Most of tests creates batch
> (gem object), use it and destroy it. From allocator perspective we alloc
> offset, then we free it. In next round we got same offset for another batch
> (gem object). So kernel serialize the execution until previous vma is freed.
> This lead to non-pipelined execution.

Maybe I am wrong but to me it looks fixable. Maybe we need to keep track of
the "last allocated offset" in simple so that next time we allocate a new
offset even if the previous one has been freed (rather than reallocating
the previous offset). Or we can allocate starting from a random offset?

> You can see pattern in many tests - ahnd = get_reloc_ahnd(...),
> get offset for scratch surface, then pass scratch_offset to some execution
> function. This allows to keep us same offset for scratch and get new
> offsets for batches. The best would be to have something hybrid which would
> propose new (and not busy) bb, but that should happen in multiprocess env
> so I haven't found how to write this yet. Libdrm handles pools of objects
> and reuses them if they were not busy. But doing this in multiprocess
> requires synchronization so some additional mechanism should be added
> to allocator to handle this case.
>
> I still wonder to introduce .dependency_offset in creating spinner when
> .dependency handle is passed. Currently we have to use Simple to ensure
> we got same offset for .dependency. As spinners keep batch handles until
> they are freed this likely is not a problem. But it may be in the future.

I am not following the above two paragraphs, maybe we can discuss more some
time.

> >
> > > +static inline uint64_t get_offset(uint64_t ahnd, uint32_t handle,
> > > +				  uint64_t size, uint64_t alignment)
> > > +{
> > > +	if (!ahnd)
> > > +		return 0;
> > > +
> > > +	return intel_allocator_alloc(ahnd, handle, size, alignment);
> > > +}
> > > +
> > > +static inline bool put_offset(uint64_t ahnd, uint32_t handle)
> > > +{
> > > +	if (!ahnd)
> > > +		return 0;
> > > +
> > > +	return intel_allocator_free(ahnd, handle);
> > > +}
> > > +
> >
> > Also for the function names are too generic with potential for namespace
> > conflicts, probably ahnd_get_offset/ahnd_put_offset?
>
> If there will be more voices to change it I'll do it. At the moment
> I wanted to have few short functions. I thought about ahnd_get_offset(ahnd, ...)
> but when ahnd would be valid allocator handle, asserting otherwise.
> get_offset(ahnd, ...) would just return some offset, for relocations
> it may be 0 (like currently is), allocating offset for valid ahnd.

No need to change, it's fine for now. Thanks for the long explanation.

-Ashutosh

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-08-05  7:28     ` Zbigniew Kempczyński
@ 2021-08-05 19:47       ` Dixit, Ashutosh
  2021-08-06  6:17         ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05 19:47 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Thu, 05 Aug 2021 00:28:30 -0700, Zbigniew Kempczyński wrote:
>

Thanks for responding. I have replied below; please see if anything
needs to be addressed, but otherwise this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

> On Wed, Aug 04, 2021 at 04:26:32PM -0700, Dixit, Ashutosh wrote:
> > On Mon, 26 Jul 2021 12:59:40 -0700, Zbigniew Kempczyński wrote:
> > >
> > > @@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
> > >	uint32_t src_pitch, dst_pitch;
> > >	uint32_t dst_reloc_offset, src_reloc_offset;
> > >	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
> > > +	uint64_t batch_offset, src_offset, dst_offset;
> > >	const bool has_64b_reloc = gen >= 8;
> > >	int i = 0;
> > >
> > > +	batch_handle = gem_create(fd, 4096);
> > > +	if (ahnd) {
> > > +		src_offset = get_offset(ahnd, src_handle, src_size, 0);
> > > +		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
> > > +		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
> > > +	} else {
> > > +		src_offset = 16 << 20;
> > > +		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
> > > +		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
> >
> > For the !ahnd case, we are providing relocations right? We still need to
> > provide these offsets or they can all be 0?
>
> Yes, we're providing relocations but we try to guess offsets to avoid them.
> If we guess valid offsets they will be used, if we missed and kernel decides
> to migrate vma(s) kernel will relocate and fill offsets within bb regardless
> NO_RELOC (it's just a hint - if vmas are not moved and you filled bb with
> them just skip relocations).

Yes, this is as I thought, as I said in the later mail.

> > > @@ -882,22 +902,29 @@ void igt_blitter_src_copy(int fd,
> > >
> > >	igt_assert(i <= ARRAY_SIZE(batch));
> > >
> > > -	batch_handle = gem_create(fd, 4096);
> > >	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
> > >
> > > -	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, dst_reloc_offset,
> > > +	fill_relocation(&relocs[0], dst_handle, dst_offset,
> > > +			dst_delta, dst_reloc_offset,
> > >			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
> > > -	fill_relocation(&relocs[1], src_handle, -1, src_delta, src_reloc_offset,
> > > +	fill_relocation(&relocs[1], src_handle, src_offset,
> > > +			src_delta, src_reloc_offset,
> > >			I915_GEM_DOMAIN_RENDER, 0);
> > >
> > > -	fill_object(&objs[0], dst_handle, 0, NULL, 0);
> > > -	fill_object(&objs[1], src_handle, 0, NULL, 0);
> > > -	fill_object(&objs[2], batch_handle, 0, relocs, 2);
> > > +	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
> > > +	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
> > > +	fill_object(&objs[2], batch_handle, batch_offset, relocs, 2);
> > >
> > > -	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
> > > +	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE | EXEC_OBJECT_WRITE;
> > >	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
> > >
> > > -	exec_blit(fd, objs, 3, gen, 0);
> > > +	if (ahnd) {
> > > +		objs[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > > +		objs[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > > +		objs[2].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > > +	}
> >
> > Should be add an "else" here and pull the fill_relocation() and set the
> > relocation_count to 2 only if we have !ahnd? Maybe ok to leave as is too if
> > the kernel will ignore the reloc stuff when EXEC_OBJECT_PINNED is set.
>
> We may pass relocs data to the kernel but check in no-reloc gens uses .relocation_count
> field. That's why we need to provide it zeroed.
>
> If you're asking why I haven't do else - I'm just a little bit lazy and I wanted
> avoid to additional else {} block. But if you think code would be more readable
> I will change it.

No, it's ok, no need to change. But it looks like relocation_count above is
unconditionally set to 2 in both the reloc and no-reloc cases. If this works,
then maybe relocation_count is ignored when EXEC_OBJECT_PINNED is set?
Otherwise we may need to set it to 0 in the non-reloc case.

> > > @@ -584,10 +601,17 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
> > >	obj[SCRATCH].handle = prime_fd_to_handle(i915, dmabuf);
> > >
> > >	obj[BATCH].handle = gem_create(i915, size);
> > > +	obj[BATCH].offset = get_offset(ahnd, obj[BATCH].handle, size, 0);
> > >	obj[BATCH].relocs_ptr = (uintptr_t)store;
> > > -	obj[BATCH].relocation_count = ARRAY_SIZE(store);
> > > +	obj[BATCH].relocation_count = !ahnd ? ARRAY_SIZE(store) : 0;
> > >	memset(store, 0, sizeof(store));
> > >
> > > +	if (ahnd) {
> > > +		obj[SCRATCH].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> > > +		obj[SCRATCH].offset = scratch_offset;
> > > +		obj[BATCH].flags = EXEC_OBJECT_PINNED;
> > > +	}
> >
> > Why don't we compute scratch_offset in work() itself (and rather pass it in
> > from the callers)?
>
> In work() we're not aware what scratch object size is. So there's hard to
> call get_offset() with size. So I need to pass size or offset, probably
> experience with other tests points me to pass offset instead size.
> Generally if we have some scratch and we want to use it within pipelined
> executions in same context we need to provide same offset for scratch,
> but offsets which are not busy for bb. Second is problematic with Simple allocator
> (see one of my previous email when I describe this problem) because scratch
> will have same offset - we want this, but bb will also have same offset.
> Using Reloc allocator at least gives us next offsets for bb, but we have
> to avoid using it twice or more for scratch).

Sorry, I did not realize scratch is not available in work. I would rather
pass scratch into the function but maybe it's ok as is too.

>
> >
> > > @@ -602,8 +626,8 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
> > >		store[count].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
> > >		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
> > >		if (gen >= 8) {
> > > -			batch[++i] = 0;
> > > -			batch[++i] = 0;
> > > +			batch[++i] = scratch_offset + store[count].delta;
> > > +			batch[++i] = (scratch_offset + store[count].delta) >> 32;
> > >		} else if (gen >= 4) {
> > >			batch[++i] = 0;
> > >			batch[++i] = 0;
> >
> > Should we add the offset's even for previous gen's (gen < 8)? Because I am
> > thinking at present kernel is supporting reloc's for gen < 12 but maybe
> > later kernels will discontinue them completely so we'll need to fix the
> > previous gen's all over again? Maybe too much?
>
> On older gens you'll definitely catch relocation here (presumed_offset == -1
> and lack of NO_RELOC flag).
>
> Newer kernels cannot remove relocations because on gens where you have no
> ppgtt you're not able to predict which offsets are busy or not. So passing
> offset here does nothing and relocation will overwrite it.

Ah ok, because multiple processes are sharing the global gtt. Thanks.

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator
  2021-08-05  8:02     ` Zbigniew Kempczyński
@ 2021-08-05 21:14       ` Dixit, Ashutosh
  2021-08-06  6:56         ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05 21:14 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Thu, 05 Aug 2021 01:02:41 -0700, Zbigniew Kempczyński wrote:
>

Please fix the put_ahnd. With that this patch is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

But I want to discuss the intel_allocator_multiprocess_start/stop issue a
bit more below.

> On Wed, Aug 04, 2021 at 07:07:41PM -0700, Dixit, Ashutosh wrote:
> > On Mon, 26 Jul 2021 12:59:44 -0700, Zbigniew Kempczyński wrote:
> > >
> > > For newer gens we're not able to rely on relocations. Adopt to use
> > > offsets acquired from the allocator.
> > >
> > > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > Cc: Petri Latvala <petri.latvala@intel.com>
> > > Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > > ---
> > >  tests/i915/gem_busy.c | 35 +++++++++++++++++++++++++++++++----
> > >  1 file changed, 31 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
> > > index f0fca0e8a..51ec5ad04 100644
> > > --- a/tests/i915/gem_busy.c
> > > +++ b/tests/i915/gem_busy.c
> > > @@ -108,6 +108,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> > >	uint32_t handle[3];
> > >	uint32_t read, write;
> > >	uint32_t active;
> > > +	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
> > >	unsigned i;
> > >
> > >	handle[TEST] = gem_create(fd, 4096);
> > > @@ -117,6 +118,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> > >	/* Create a long running batch which we can use to hog the GPU */
> > >	handle[BUSY] = gem_create(fd, 4096);
> > >	spin = igt_spin_new(fd,
> > > +			    .ahnd = ahnd,
> > >			    .ctx = ctx,
> > >			    .engine = e->flags,
> > >			    .dependency = handle[BUSY]);
> >
> > Missing put_ahnd.
>
> Good catch.
>
> >
> > > @@ -428,6 +442,7 @@ igt_main
> > >
> > >	igt_subtest_group {
> > >		igt_fixture {
> > > +			intel_allocator_multiprocess_start();
> > >			igt_fork_hang_detector(fd);
> > >		}
> > >
> > > @@ -445,6 +460,21 @@ igt_main
> > >			}
> > >		}
> > >
> >
> > Just above here is basic() which doesn't have a fork. Is it ok to do
> > intel_allocator_multiprocess_start/stop when we don't have a fork? If yes,
> > then can we _always_ do intel_allocator_multiprocess_start/stop rather than
> > only when we have fork? Thanks.
>
> intel_allocator_multiprocess_start() creates allocator thread which acts
> for children (igt_fork) to alloc/free offsets. If you use alloc/free within
> same process (from which thread was spawned) internal structure is mutexed
> and no IPCs are called. So only consequence of this here is additional thread
> in system/memory (which does nothing for basic() tests). It will be stopped
> with intel_allocator_multiprocess_stop().

OK, in that case let me ask the question I asked above in another way. Can
we add intel_allocator_multiprocess_start() to common_init() (the program
entry point) and similarly say intel_allocator_multiprocess_stop() to
igt_exit() (or common_exit_handler(), basically the program exit point) so
that these always run and we don't have to add them only for specific tests
which fork? What would be the disadvantage of doing this? Thanks.

> But for purity test should work without additional dependencies so I'll fix
> this - it will be sent in v4.
>
> --
> Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: " Zbigniew Kempczyński
@ 2021-08-05 21:44   ` Dixit, Ashutosh
  2021-08-06  7:16     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-05 21:44 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:51 -0700, Zbigniew Kempczyński wrote:
>
> -static void reset_stress(int fd, const intel_ctx_t *ctx0,
> +static void reset_stress(int fd, uint64_t ahnd, const intel_ctx_t *ctx0,

I think it would have been ok to allocate ahnd inside reset_stress() but
looks like we wanted to keep it tied to ctx_create/destroy, so this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-05 18:53         ` Dixit, Ashutosh
@ 2021-08-06  1:15           ` Dixit, Ashutosh
  2021-08-06  5:51             ` Zbigniew Kempczyński
  2021-08-06  5:35           ` Zbigniew Kempczyński
  1 sibling, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  1:15 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Thu, 05 Aug 2021 11:53:54 -0700, Dixit, Ashutosh wrote:
>
> On Tue, 03 Aug 2021 23:13:41 -0700, Zbigniew Kempczyński wrote:
> >
> > Even if Simple is stateful we got some case in which its usage
> > is currently limited (so you can see using reloc in most of the
> > tests). Problem is with...  it is stateful. Most of tests creates batch
> > (gem object), use it and destroy it. From allocator perspective we alloc
> > offset, then we free it. In next round we got same offset for another batch
> > (gem object). So kernel serialize the execution until previous vma is freed.
> > This lead to non-pipelined execution.
>
> Maybe I am wrong but to me it looks fixable. Maybe we need to keep track of
> the "last allocated offset" in simple so that next time we allocate a new
> offset even if the previous one has been freed (rather than reallocating
> the previous offset). Or we can allocate starting from a random offset?
>
> > You can see pattern in many tests - ahnd = get_reloc_ahnd(...),
> > get offset for scratch surface, then pass scratch_offset to some execution
> > function. This allows to keep us same offset for scratch and get new
> > offsets for batches.

So I think I see this in e.g. the gem_exec_async patch. Allocating new
offsets for batches helps to avoid the stalls mentioned above, correct?

> > The best would be to have something hybrid which would propose new (and
> > not busy) bb, but that should happen in multiprocess env so I haven't
> > found how to write this yet. Libdrm handles pools of objects and reuses
> > them if they were not busy. But doing this in multiprocess requires
> > synchronization so some additional mechanism should be added to
> > allocator to handle this case.

Will what I proposed above using the "last allocated offset" not work in the
multiprocess env?

> > I still wonder to introduce .dependency_offset in creating spinner when
> > .dependency handle is passed. Currently we have to use Simple to ensure
> > we got same offset for .dependency. As spinners keep batch handles until
> > they are freed this likely is not a problem. But it may be in the
> > future.

OK.

> I am not following the above two paragraphs, maybe we can discuss more some
> time.

I do follow what you are saying now somewhat. Thank you.

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: " Zbigniew Kempczyński
@ 2021-08-06  1:43   ` Dixit, Ashutosh
  2021-08-06  7:33     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  1:43 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:52 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

But a couple of questions/comments below.

> +static void store_dword(int fd, int id, const intel_ctx_t *ctx,
> +			 unsigned ring, uint32_t target, uint64_t target_offset,
> +			 uint32_t offset, uint32_t value)
>  {
>	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
>	struct drm_i915_gem_exec_object2 obj[2];
> @@ -50,6 +53,15 @@ static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
>	obj[0].flags = EXEC_OBJECT_ASYNC;
>	obj[1].handle = gem_create(fd, 4096);
>
> +	if (id) {
> +		obj[0].offset = target_offset;
> +		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE |
> +				EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> +		obj[1].offset = (id + 1) * SZ_1M;

So this is where I think we are assigning new offsets to successive batches
to avoid stalls, correct? Though I don't know why we don't get these
offsets from the allocator (though I guess this will work since we have a
4K scratch buffer and whatever the spin needs)?

Maybe we can add a one line comment above, something like:

/* Assign new offsets to successive batches to prevent stalls */

> @@ -89,6 +101,8 @@ static void one(int fd, const intel_ctx_t *ctx,
>	uint32_t scratch = gem_create(fd, 4096);
>	igt_spin_t *spin;
>	uint32_t *result;
> +	uint64_t ahnd = get_simple_l2h_ahnd(fd, ctx->id);

Is there a particular reason for using simple rather than the reloc
allocator here?

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 23/52] tests/gem_exec_suspend: Adopt to use allocator
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 23/52] tests/gem_exec_suspend: Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-06  2:15   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  2:15 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Mon, 26 Jul 2021 12:59:57 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 24/52] tests/gem_exec_parallel: Adopt to use alloctor
  2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 24/52] tests/gem_exec_parallel: Adopt to use alloctor Zbigniew Kempczyński
@ 2021-08-06  4:39   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  4:39 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 12:59:58 -0700, Zbigniew Kempczyński wrote:
>

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

A couple of comments below.

> @@ -112,7 +116,7 @@ static void *thread(void *data)
>	reloc.delta = 4*t->id;
>	obj[1].handle = gem_create(fd, 4096);
>	obj[1].relocs_ptr = to_user_pointer(&reloc);
> -	obj[1].relocation_count = 1;
> +	obj[1].relocation_count = !t->ahnd ? 1 : 0;
>	gem_write(fd, obj[1].handle, 0, batch, sizeof(batch));
>
>	memset(&execbuf, 0, sizeof(execbuf));
> @@ -140,6 +144,18 @@ static void *thread(void *data)
>		if (t->flags & FDS)
>			obj[0].handle = gem_open(fd, obj[0].handle);
>
> +		if (t->ahnd) {
> +			offset = t->offsets[x];
> +			i = 0;
> +			batch[++i] = offset + 4*t->id;
> +			batch[++i] = offset >> 32;
> +			obj[0].offset = offset;
> +			obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> +			obj[1].offset = get_offset(t->ahnd, obj[1].handle, 4096, 0);
> +			obj[1].flags |= EXEC_OBJECT_PINNED;
> +			gem_write(fd, obj[1].handle, 0, batch, sizeof(batch));

It was probably cleaner to eliminate the previous gem_write() above for
batch initialization, but I guess this works too.

> @@ -213,6 +229,7 @@ static void all(int fd, const intel_ctx_t *ctx,
>	void *arg[NUMOBJ];
>	int go;
>	int i;
> +	uint64_t ahnd = get_reloc_ahnd(fd, 0), offsets[NUMOBJ];
>
>	if (flags & CONTEXTS)

When the CONTEXTS flag is set, the ahnd should probably be tied to the
context, but here we have it tied to context 0, which is probably not
exactly correct but works?

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 29/52] tests/gem_mmap_wc: Adopt to use allocator
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 29/52] tests/gem_mmap_wc: Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-06  4:51   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  4:51 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 13:00:03 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 31/52] tests/gem_ringfill: Adopt to use allocator
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 31/52] tests/gem_ringfill: Adopt to use allocator Zbigniew Kempczyński
@ 2021-08-06  5:04   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  5:04 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 13:00:05 -0700, Zbigniew Kempczyński wrote:
>
> @@ -187,7 +201,7 @@ static void run_test(int fd, const intel_ctx_t *ctx, unsigned ring,
>
>	memset(&hang, 0, sizeof(hang));
>	if (flags & HANG)
> -		hang = igt_hang_ctx(fd, ctx->id, ring & ~(3<<13), 0);
> +		hang = igt_hang_ring_with_ahnd(fd, ring & ~(3<<13), ahnd);

Missing put_ahnd. With that:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-05 18:53         ` Dixit, Ashutosh
  2021-08-06  1:15           ` Dixit, Ashutosh
@ 2021-08-06  5:35           ` Zbigniew Kempczyński
  2021-08-06  5:52             ` Dixit, Ashutosh
  1 sibling, 1 reply; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-06  5:35 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala, Chris Wilson

On Thu, Aug 05, 2021 at 11:53:54AM -0700, Dixit, Ashutosh wrote:

<cut>
 
> > Even if Simple is stateful we got some case in which its usage
> > is currently limited (so you can see using reloc in most of the
> > tests). Problem is with...  it is stateful. Most of tests creates batch
> > (gem object), use it and destroy it. From allocator perspective we alloc
> > offset, then we free it. In next round we got same offset for another batch
> > (gem object). So kernel serialize the execution until previous vma is freed.
> > This lead to non-pipelined execution.
> 
> Maybe I am wrong but to me it looks fixable. Maybe we need to keep track of
> the "last allocated offset" in simple so that next time we allocate a new
> offset even if the previous one has been freed (rather than reallocating
> the previous offset). Or we can allocate starting from a random offset?

Yes, a strategy with a shifting offset (like in reloc) while staying stateful will
likely work in most cases. I'm going to experiment with adding this to
Simple when we finish this series. It will be much easier to find out whether it
works once we replace reloc->simple after such a change in all tests.
I don't think random is a good choice; if we hit a busy offset we need to iterate
one or more entries in the lists.
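
Very rough sketch of the idea (an assumption about a possible change, not the
current allocator code):

	/* Remember where the previous allocation ended and keep handing out
	 * increasing offsets even after a free, so a recycled batch handle
	 * does not land on a vma the GPU may still be using. */
	static uint64_t last_end;

	static uint64_t alloc_offset(uint64_t size, uint64_t alignment)
	{
		uint64_t offset;

		if (!alignment)
			alignment = 4096;
		offset = ALIGN(last_end, alignment);
		last_end = offset + size;

		return offset;	/* wrap-around and occupancy checks omitted */
	}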

> 
> > You can see pattern in many tests - ahnd = get_reloc_ahnd(...),
> > get offset for scratch surface, then pass scratch_offset to some execution
> > function. This allows to keep us same offset for scratch and get new
> > offsets for batches. The best would be to have something hybrid which would
> > propose new (and not busy) bb, but that should happen in multiprocess env
> > so I haven't found how to write this yet. Libdrm handles pools of objects
> > and reuses them if they were not busy. But doing this in multiprocess
> > requires synchronization so some additional mechanism should be added
> > to allocator to handle this case.
> >
> > I still wonder to introduce .dependency_offset in creating spinner when
> > .dependency handle is passed. Currently we have to use Simple to ensure
> > we got same offset for .dependency. As spinners keep batch handles until
> > they are freed this likely is not a problem. But it may be in the future.
> 
> I am not following the above two paragraphs, maybe we can discuss more some
> time.

Ok.

> 
> > >
> > > > +static inline uint64_t get_offset(uint64_t ahnd, uint32_t handle,
> > > > +				  uint64_t size, uint64_t alignment)
> > > > +{
> > > > +	if (!ahnd)
> > > > +		return 0;
> > > > +
> > > > +	return intel_allocator_alloc(ahnd, handle, size, alignment);
> > > > +}
> > > > +
> > > > +static inline bool put_offset(uint64_t ahnd, uint32_t handle)
> > > > +{
> > > > +	if (!ahnd)
> > > > +		return 0;
> > > > +
> > > > +	return intel_allocator_free(ahnd, handle);
> > > > +}
> > > > +
> > >
> > > Also for the function names are too generic with potential for namespace
> > > conflicts, probably ahnd_get_offset/ahnd_put_offset?
> >
> > If there will be more voices to change it I'll do it. At the moment
> > I wanted to have few short functions. I thought about ahnd_get_offset(ahnd, ...)
> > but when ahnd would be valid allocator handle, asserting otherwise.
> > get_offset(ahnd, ...) would just return some offset, for relocations
> > it may be 0 (like currently is), allocating offset for valid ahnd.
> 
> No need to change, it's fine for now. Thanks for the long explanation.
> 
> -Ashutosh

I assume this is equivalent to an r-b, if it's not already applied.

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 35/52] tests/gem_unfence_active_buffers: Adopt to use allocator
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 35/52] tests/gem_unfence_active_buffers: " Zbigniew Kempczyński
@ 2021-08-06  5:44   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  5:44 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 13:00:09 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 36/52] tests/gem_unref_active_buffers: Adopt to use allocator
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 36/52] tests/gem_unref_active_buffers: " Zbigniew Kempczyński
@ 2021-08-06  5:46   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  5:46 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 13:00:10 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 37/52] tests/gem_wait: Adopt to use allocator
  2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 37/52] tests/gem_wait: " Zbigniew Kempczyński
@ 2021-08-06  5:48   ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  5:48 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev, Petri Latvala

On Mon, 26 Jul 2021 13:00:11 -0700, Zbigniew Kempczyński wrote:
>
> For newer gens we're not able to rely on relocations. Adopt to use
> offsets acquired from the allocator.

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-06  1:15           ` Dixit, Ashutosh
@ 2021-08-06  5:51             ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-06  5:51 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Thu, Aug 05, 2021 at 06:15:34PM -0700, Dixit, Ashutosh wrote:
> On Thu, 05 Aug 2021 11:53:54 -0700, Dixit, Ashutosh wrote:
> >
> > On Tue, 03 Aug 2021 23:13:41 -0700, Zbigniew Kempczyński wrote:
> > >
> > > Even if Simple is stateful we got some case in which its usage
> > > is currently limited (so you can see using reloc in most of the
> > > tests). Problem is with...  it is stateful. Most of tests creates batch
> > > (gem object), use it and destroy it. From allocator perspective we alloc
> > > offset, then we free it. In next round we got same offset for another batch
> > > (gem object). So kernel serialize the execution until previous vma is freed.
> > > This lead to non-pipelined execution.
> >
> > Maybe I am wrong but to me it looks fixable. Maybe we need to keep track of
> > the "last allocated offset" in simple so that next time we allocate a new
> > offset even if the previous one has been freed (rather than reallocating
> > the previous offset). Or we can allocate starting from a random offset?
> >
> > > You can see pattern in many tests - ahnd = get_reloc_ahnd(...),
> > > get offset for scratch surface, then pass scratch_offset to some execution
> > > function. This allows to keep us same offset for scratch and get new
> > > offsets for batches.
> 
> So I think I see this in e.g. the gem_exec_async patch. Allocating new
> offsets for batches helps to avoid the stalls mentioned above, correct?

Yes, but here we use the id to "generate" the offset for the bb. That's a corner case
where we need the same scratch_offset in store_dword() as well as in the spinner
(.dependency), so a stateful allocator must be passed. Btw, since there's a
simple allocator I could also pass the ahnd and call get_offset(..., scratch,
...) once in store_dword(), but I would still need to pass at least the scratch
size or define it outside of the function.

> 
> > > The best would be to have something hybrid which would propose new (and
> > > not busy) bb, but that should happen in multiprocess env so I haven't
> > > found how to write this yet. Libdrm handles pools of objects and reuses
> > > them if they were not busy. But doing this in multiprocess requires
> > > synchronization so some additional mechanism should be added to
> > > allocator to handle this case.
> 
> What I proposed above using "last allocated offset" will not work in the
> multiprocess env?

It will work. Strange things can happen when we wrap around the gtt size, which
for ppgtt likely shouldn't be a big deal (the size is big, so the risk that we hit
a busy offset is small, same as in reloc right now).

--
Zbigniew

> 
> > > I still wonder to introduce .dependency_offset in creating spinner when
> > > .dependency handle is passed. Currently we have to use Simple to ensure
> > > we got same offset for .dependency. As spinners keep batch handles until
> > > they are freed this likely is not a problem. But it may be in the
> > > future.
> 
> OK.
> 
> > I am not following the above two paragraphs, maybe we can discuss more some
> > time.
> 
> I do follow what you are saying now somewhat. Thank you.

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use
  2021-08-06  5:35           ` Zbigniew Kempczyński
@ 2021-08-06  5:52             ` Dixit, Ashutosh
  0 siblings, 0 replies; 102+ messages in thread
From: Dixit, Ashutosh @ 2021-08-06  5:52 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Thu, 05 Aug 2021 22:35:44 -0700, Zbigniew Kempczyński wrote:
>
> I assume this is equal to r-b if not already applied.

It already has an R-b. For me the easiest way to see which patches have an R-b is
on Patchwork. Patchwork also records the R-b, so if you get the series back
from there the patches already carry it and you don't need to apply it manually.

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy
  2021-08-05 19:47       ` Dixit, Ashutosh
@ 2021-08-06  6:17         ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-06  6:17 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev

On Thu, Aug 05, 2021 at 12:47:30PM -0700, Dixit, Ashutosh wrote:
> On Thu, 05 Aug 2021 00:28:30 -0700, Zbigniew Kempczyński wrote:
> >
> 
> Thanks for the responding. I have replied below, please see if anything
> needs to be addressed but otherwise this is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> 
> > On Wed, Aug 04, 2021 at 04:26:32PM -0700, Dixit, Ashutosh wrote:
> > > On Mon, 26 Jul 2021 12:59:40 -0700, Zbigniew Kempczyński wrote:
> > > >
> > > > @@ -808,9 +816,21 @@ void igt_blitter_src_copy(int fd,
> > > >	uint32_t src_pitch, dst_pitch;
> > > >	uint32_t dst_reloc_offset, src_reloc_offset;
> > > >	uint32_t gen = intel_gen(intel_get_drm_devid(fd));
> > > > +	uint64_t batch_offset, src_offset, dst_offset;
> > > >	const bool has_64b_reloc = gen >= 8;
> > > >	int i = 0;
> > > >
> > > > +	batch_handle = gem_create(fd, 4096);
> > > > +	if (ahnd) {
> > > > +		src_offset = get_offset(ahnd, src_handle, src_size, 0);
> > > > +		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
> > > > +		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
> > > > +	} else {
> > > > +		src_offset = 16 << 20;
> > > > +		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
> > > > +		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
> > >
> > > For the !ahnd case, we are providing relocations right? We still need to
> > > provide these offsets or they can all be 0?
> >
> > Yes, we're providing relocations but we try to guess offsets to avoid them.
> > If we guess valid offsets they will be used, if we missed and kernel decides
> > to migrate vma(s) kernel will relocate and fill offsets within bb regardless
> > NO_RELOC (it's just a hint - if vmas are not moved and you filled bb with
> > them just skip relocations).
> 
> Yes this is as I thought as I said in the later mail.
> 
> > > > @@ -882,22 +902,29 @@ void igt_blitter_src_copy(int fd,
> > > >
> > > >	igt_assert(i <= ARRAY_SIZE(batch));
> > > >
> > > > -	batch_handle = gem_create(fd, 4096);
> > > >	gem_write(fd, batch_handle, 0, batch, sizeof(batch));
> > > >
> > > > -	fill_relocation(&relocs[0], dst_handle, -1, dst_delta, dst_reloc_offset,
> > > > +	fill_relocation(&relocs[0], dst_handle, dst_offset,
> > > > +			dst_delta, dst_reloc_offset,
> > > >			I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER);
> > > > -	fill_relocation(&relocs[1], src_handle, -1, src_delta, src_reloc_offset,
> > > > +	fill_relocation(&relocs[1], src_handle, src_offset,
> > > > +			src_delta, src_reloc_offset,
> > > >			I915_GEM_DOMAIN_RENDER, 0);
> > > >
> > > > -	fill_object(&objs[0], dst_handle, 0, NULL, 0);
> > > > -	fill_object(&objs[1], src_handle, 0, NULL, 0);
> > > > -	fill_object(&objs[2], batch_handle, 0, relocs, 2);
> > > > +	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
> > > > +	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
> > > > +	fill_object(&objs[2], batch_handle, batch_offset, relocs, 2);
> > > >
> > > > -	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE;
> > > > +	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE | EXEC_OBJECT_WRITE;
> > > >	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
> > > >
> > > > -	exec_blit(fd, objs, 3, gen, 0);
> > > > +	if (ahnd) {
> > > > +		objs[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > > > +		objs[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > > > +		objs[2].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > > > +	}
> > >
> > > Should we add an "else" here, pull in the fill_relocation() calls and
> > > set relocation_count to 2 only if we have !ahnd? Maybe ok to leave as
> > > is too if the kernel ignores the reloc stuff when EXEC_OBJECT_PINNED
> > > is set.
> >
> > We may pass the relocs data to the kernel, but the check on no-reloc gens
> > uses the .relocation_count field. That's why we need to provide it zeroed.
> >
> > If you're asking why I didn't add an else - I was just a little bit lazy
> > and wanted to avoid an additional else {} block. But if you think the
> > code would be more readable that way I will change it.
> 
> No it's ok, no need to change. But it looks like relocation_count above is
> unconditionally set to 2 in both the reloc and no-reloc cases. If this
> works then maybe relocation_count is ignored when EXEC_OBJECT_PINNED is
> set? Otherwise we may need to set it to 0 in the no-reloc case.

Oh, the series (v3) has 2 there, which is wrong - I fixed it right after this
version was sent and had already seen the results, so I forgot about it. Good
catch; currently it is:

+       fill_object(&objs[2], batch_handle, batch_offset, relocs, !ahnd ? 2 : 0);
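
For reference, the whole intended pattern boils down to something like the
sketch below (condensed from the hunks quoted above, not the literal v4
code):

	batch_handle = gem_create(fd, 4096);
	if (ahnd) {
		/* softpin: offsets come from the allocator */
		src_offset = get_offset(ahnd, src_handle, src_size, 0);
		dst_offset = get_offset(ahnd, dst_handle, dst_size, 0);
		batch_offset = get_offset(ahnd, batch_handle, 4096, 0);
	} else {
		/* reloc: guess offsets, the kernel relocates if a guess misses */
		src_offset = 16 << 20;
		dst_offset = ALIGN(src_offset + src_size, 1 << 20);
		batch_offset = ALIGN(dst_offset + dst_size, 1 << 20);
	}

	fill_relocation(&relocs[0], dst_handle, dst_offset, dst_delta,
			dst_reloc_offset, I915_GEM_DOMAIN_RENDER,
			I915_GEM_DOMAIN_RENDER);
	fill_relocation(&relocs[1], src_handle, src_offset, src_delta,
			src_reloc_offset, I915_GEM_DOMAIN_RENDER, 0);

	fill_object(&objs[0], dst_handle, dst_offset, NULL, 0);
	fill_object(&objs[1], src_handle, src_offset, NULL, 0);
	/* relocs are only consumed on the reloc path */
	fill_object(&objs[2], batch_handle, batch_offset, relocs, !ahnd ? 2 : 0);

	objs[0].flags |= EXEC_OBJECT_NEEDS_FENCE | EXEC_OBJECT_WRITE;
	objs[1].flags |= EXEC_OBJECT_NEEDS_FENCE;
	if (ahnd) {
		objs[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
		objs[1].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
		objs[2].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
	}

	exec_blit(fd, objs, 3, gen, 0);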

V4 will be sent soon.

Thanks for review.
--
Zbigniew

> 
> > > > @@ -584,10 +601,17 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
> > > >	obj[SCRATCH].handle = prime_fd_to_handle(i915, dmabuf);
> > > >
> > > >	obj[BATCH].handle = gem_create(i915, size);
> > > > +	obj[BATCH].offset = get_offset(ahnd, obj[BATCH].handle, size, 0);
> > > >	obj[BATCH].relocs_ptr = (uintptr_t)store;
> > > > -	obj[BATCH].relocation_count = ARRAY_SIZE(store);
> > > > +	obj[BATCH].relocation_count = !ahnd ? ARRAY_SIZE(store) : 0;
> > > >	memset(store, 0, sizeof(store));
> > > >
> > > > +	if (ahnd) {
> > > > +		obj[SCRATCH].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
> > > > +		obj[SCRATCH].offset = scratch_offset;
> > > > +		obj[BATCH].flags = EXEC_OBJECT_PINNED;
> > > > +	}
> > >
> > > Why don't we compute scratch_offset in work() itself, rather than
> > > passing it in from the callers?
> >
> > In work() we don't know what the scratch object's size is, so it's hard
> > to call get_offset() there with a size. I need to pass either the size or
> > the offset; experience with other tests points me towards passing the
> > offset rather than the size.
> > Generally, if we have a scratch object we want to use across pipelined
> > executions in the same context, we need to provide the same offset for
> > the scratch every time, while the bb still needs offsets which are not
> > busy. The second part is problematic with the Simple allocator (see one
> > of my previous emails where I describe this problem): the scratch gets
> > the same offset - which we want - but the bb would also get the same
> > offset. Using the Reloc allocator at least gives us a fresh offset for
> > each bb, but then we have to avoid calling it twice or more for the
> > scratch.
> 
> Sorry, I did not realize scratch is not available in work. I would rather
> pass scratch into the function but maybe it's ok as is too.
> 
> >
> > >
> > > > @@ -602,8 +626,8 @@ static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
> > > >		store[count].write_domain = I915_GEM_DOMAIN_INSTRUCTION;
> > > >		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
> > > >		if (gen >= 8) {
> > > > -			batch[++i] = 0;
> > > > -			batch[++i] = 0;
> > > > +			batch[++i] = scratch_offset + store[count].delta;
> > > > +			batch[++i] = (scratch_offset + store[count].delta) >> 32;
> > > >		} else if (gen >= 4) {
> > > >			batch[++i] = 0;
> > > >			batch[++i] = 0;
> > >
> > > Should we add the offsets even for previous gens (gen < 8)? I am
> > > thinking that at present the kernel supports relocs for gen < 12, but
> > > maybe later kernels will discontinue them completely, so we'd need to
> > > fix the previous gens all over again? Maybe too much?
> >
> > On older gens you'll definitely hit relocation here (presumed_offset ==
> > -1 and no NO_RELOC flag).
> >
> > Newer kernels cannot remove relocations because on gens without ppgtt
> > you're not able to predict which offsets are busy. So passing an offset
> > here does nothing and the relocation will overwrite it.
> 
> Ah ok, because multiple processes are sharing the global gtt. Thanks.
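
To illustrate the scratch/bb point above with a sketch - the Reloc-allocator
flavour, where the caller reserves the scratch offset once and each work()
call only asks the allocator for a batch offset (names not present in the
quoted hunks are illustrative):

	/* caller: reserve the shared scratch offset once and pass it down */
	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
	uint64_t scratch_offset = get_offset(ahnd, scratch_handle, scratch_size, 0);
	work(i915, dmabuf, ctx, ring, scratch_offset);

	/* work(): only the bb asks the allocator, so every submission gets a
	 * fresh bb offset while the scratch stays pinned at scratch_offset */
	obj[SCRATCH].offset = scratch_offset;
	obj[BATCH].offset = get_offset(ahnd, obj[BATCH].handle, size, 0);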

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator
  2021-08-05 21:14       ` Dixit, Ashutosh
@ 2021-08-06  6:56         ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-06  6:56 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Thu, Aug 05, 2021 at 02:14:12PM -0700, Dixit, Ashutosh wrote:
> On Thu, 05 Aug 2021 01:02:41 -0700, Zbigniew Kempczyński wrote:
> >
> 
> Please fix the put_ahnd. With that this patch is:
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> 
> But I want to discuss the intel_allocator_multiprocess_start/stop issue a
> bit more below.
> 
> > On Wed, Aug 04, 2021 at 07:07:41PM -0700, Dixit, Ashutosh wrote:
> > > On Mon, 26 Jul 2021 12:59:44 -0700, Zbigniew Kempczyński wrote:
> > > >
> > > > For newer gens we're not able to rely on relocations. Adopt to use
> > > > offsets acquired from the allocator.
> > > >
> > > > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > > Cc: Petri Latvala <petri.latvala@intel.com>
> > > > Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > > > ---
> > > >  tests/i915/gem_busy.c | 35 +++++++++++++++++++++++++++++++----
> > > >  1 file changed, 31 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
> > > > index f0fca0e8a..51ec5ad04 100644
> > > > --- a/tests/i915/gem_busy.c
> > > > +++ b/tests/i915/gem_busy.c
> > > > @@ -108,6 +108,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> > > >	uint32_t handle[3];
> > > >	uint32_t read, write;
> > > >	uint32_t active;
> > > > +	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
> > > >	unsigned i;
> > > >
> > > >	handle[TEST] = gem_create(fd, 4096);
> > > > @@ -117,6 +118,7 @@ static void semaphore(int fd, const intel_ctx_t *ctx,
> > > >	/* Create a long running batch which we can use to hog the GPU */
> > > >	handle[BUSY] = gem_create(fd, 4096);
> > > >	spin = igt_spin_new(fd,
> > > > +			    .ahnd = ahnd,
> > > >			    .ctx = ctx,
> > > >			    .engine = e->flags,
> > > >			    .dependency = handle[BUSY]);
> > >
> > > Missing put_ahnd.
> >
> > Good catch.
> >
> > >
> > > > @@ -428,6 +442,7 @@ igt_main
> > > >
> > > >	igt_subtest_group {
> > > >		igt_fixture {
> > > > +			intel_allocator_multiprocess_start();
> > > >			igt_fork_hang_detector(fd);
> > > >		}
> > > >
> > > > @@ -445,6 +460,21 @@ igt_main
> > > >			}
> > > >		}
> > > >
> > >
> > > Just above here is basic() which doesn't have a fork. Is it ok to do
> > > intel_allocator_multiprocess_start/stop when we don't have a fork? If yes,
> > > then can we _always_ do intel_allocator_multiprocess_start/stop rather than
> > > only when we have fork? Thanks.
> >
> > intel_allocator_multiprocess_start() creates an allocator thread which
> > services the children (igt_fork) when they alloc/free offsets. If you
> > alloc/free within the same process (the one that spawned the thread), the
> > internal structures are protected by a mutex and no IPC is involved. So
> > the only consequence here is an additional thread in the system (which
> > does nothing for the basic() tests). It is stopped by
> > intel_allocator_multiprocess_stop().
> 
> OK, in that case let me ask the question I asked above in another way. Can
> we add intel_allocator_multiprocess_start() to common_init() (the program
> entry point) and similarly say intel_allocator_multiprocess_stop() to
> igt_exit() (or common_exit_handler(), basically the program exit point) so
> that these always run and we don't have to add them only for specific tests
> which fork? What would be the disadvantage of doing this? Thanks.

At the moment likely not. Currently each test that completes reinitializes
the data structures (intel_allocator_init(), called in exit_subtest()). That
doesn't perform any sysvipc calls (recreating the msgqueue); something still
has to clean up the mess left by a previous (likely failed) run, which is
what intel_allocator_multiprocess_stop() does now. Starting/stopping an
allocator which is designed mostly for Intel i915 would also end up being
called in kms tests, which are vendor agnostic, if I'm not wrong.

I still have work queued to handle the situation where a child calls an
assert. This is a problem now because the test then stops without shutting
down the allocator thread properly.

For the moment I would stick to spawning the allocator thread only when it
is necessary. The current allocator implementation of the multiprocess
environment is fragile and handles errors in a limited manner.

Putting intel_allocator_multiprocess_start()/stop() in a fixture is imo the
best way of handling error situations: we ensure the IPC we create gets
destroyed regardless of the test result.
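
Concretely, something like this (a sketch of the fixture pattern only; the
hang detector calls are the ones from the hunk quoted above):

	igt_subtest_group {
		igt_fixture {
			/* spawn the allocator thread before any child forks */
			intel_allocator_multiprocess_start();
			igt_fork_hang_detector(fd);
		}

		/* ... forking subtests using get_*_ahnd()/get_offset() ... */

		igt_fixture {
			igt_stop_hang_detector();
			/* tear down the thread/IPC whatever the subtest result */
			intel_allocator_multiprocess_stop();
		}
	}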

--
Zbigniew

> 
> > But for purity the test should work without additional dependencies, so
> > I'll fix this - it will be sent in v4.
> >
> > --
> > Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: Adopt to use allocator
  2021-08-05 21:44   ` Dixit, Ashutosh
@ 2021-08-06  7:16     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-06  7:16 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Thu, Aug 05, 2021 at 02:44:39PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:51 -0700, Zbigniew Kempczyński wrote:
> >
> > -static void reset_stress(int fd, const intel_ctx_t *ctx0,
> > +static void reset_stress(int fd, uint64_t ahnd, const intel_ctx_t *ctx0,
> 
> I think it would have been ok to allocate ahnd inside reset_stress() but
> looks like we wanted to keep it tied to ctx_create/destroy, so this is:

Yes, spin_sync() runs within ctx0, so I would keep the ahnd for ctx0 for the
whole test (test_reset_stress() iterates over the rings within ctx0, so they
share the vm). If we recreated the ahnd for ctx0 on each run we would just
evict the vmas from the previous call.
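
Roughly (a sketch of the calling convention only; the trailing reset_stress()
arguments are placeholders):

	/* one allocator handle per ctx0/vm, kept for the whole test */
	uint64_t ahnd = get_reloc_ahnd(fd, ctx0->id);

	/* every ring exercised by test_reset_stress() shares ctx0's vm,
	 * so all of them reuse the same ahnd */
	reset_stress(fd, ahnd, ctx0, /* engine, flags, ... */);

	put_ahnd(ahnd);	/* released once, together with ctx0 */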

--
Zbigniew

> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

^ permalink raw reply	[flat|nested] 102+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: Adopt to use allocator
  2021-08-06  1:43   ` Dixit, Ashutosh
@ 2021-08-06  7:33     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 102+ messages in thread
From: Zbigniew Kempczyński @ 2021-08-06  7:33 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: igt-dev, Petri Latvala

On Thu, Aug 05, 2021 at 06:43:56PM -0700, Dixit, Ashutosh wrote:
> On Mon, 26 Jul 2021 12:59:52 -0700, Zbigniew Kempczyński wrote:
> >
> > For newer gens we're not able to rely on relocations. Adopt to use
> > offsets acquired from the allocator.
> 
> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> 
> But a couple of questions/comments below.
> 
> > +static void store_dword(int fd, int id, const intel_ctx_t *ctx,
> > +			 unsigned ring, uint32_t target, uint64_t target_offset,
> > +			 uint32_t offset, uint32_t value)
> >  {
> >	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
> >	struct drm_i915_gem_exec_object2 obj[2];
> > @@ -50,6 +53,15 @@ static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
> >	obj[0].flags = EXEC_OBJECT_ASYNC;
> >	obj[1].handle = gem_create(fd, 4096);
> >
> > +	if (id) {
> > +		obj[0].offset = target_offset;
> > +		obj[0].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE |
> > +				EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > +		obj[1].offset = (id + 1) * SZ_1M;
> 
> So this is where I think we are assigning new offsets to successive batches
> to avoid stalls, correct? Though I don't know why we don't get these
> offsets from the allocator (though I guess this will work since we have a
> 4K scratch buffer and whatever the spin needs)?

In store_dword() we have gem_close(fd, obj[1].handle), so within igt_fork()
we risk getting the same offset for different batches.

> 
> Maybe we can add a one line comment above, something like:

Ok, I'm going to add a comment describing why we need the Simple allocator
and why we pass 'id' to store_dword().

> 
> /* Assign new offsets to successive batches to prevent stalls */
> 
> > @@ -89,6 +101,8 @@ static void one(int fd, const intel_ctx_t *ctx,
> >	uint32_t scratch = gem_create(fd, 4096);
> >	igt_spin_t *spin;
> >	uint32_t *result;
> > +	uint64_t ahnd = get_simple_l2h_ahnd(fd, ctx->id);
> 
> Is there a particular reason for using simple rather than the reloc
> allocator here?

Because of .dependency (the scratch). We want to get the same offset for the
same handle, so the Reloc allocator cannot be used here.
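
Spelled out, it is roughly (a sketch combining the hunks quoted above; names
not present in them are illustrative):

	/* one(): the spinner's .dependency is the shared scratch, and the
	 * Simple allocator guarantees the same handle always resolves to the
	 * same offset (l2h: offsets handed out low-to-high) */
	uint64_t ahnd = get_simple_l2h_ahnd(fd, ctx->id);
	uint32_t scratch = gem_create(fd, 4096);
	uint64_t scratch_offset = get_offset(ahnd, scratch, 4096, 0);

	spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx,
			    .engine = ring, .dependency = scratch);

	/* store_dword(), called from forked children: obj[1] (the batch) is
	 * closed at the end of every call, so a freed handle could be handed
	 * the same offset again in another child - hence the fixed per-child
	 * offset instead of asking the allocator */
	obj[1].offset = (id + 1) * SZ_1M;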

--
Zbigniew

^ permalink raw reply	[flat|nested] 102+ messages in thread

end of thread, other threads:[~2021-08-06  7:33 UTC | newest]

Thread overview: 102+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-26 19:59 [igt-dev] [PATCH i-g-t v3 00/52] Add allocator support in IGT Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 01/52] lib/igt_dummyload: Add support of using allocator in igt spinner Zbigniew Kempczyński
2021-08-03 23:07   ` Dixit, Ashutosh
2021-08-04  6:19     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 02/52] lib/intel_allocator: Add few helper functions for common use Zbigniew Kempczyński
2021-08-03 21:01   ` Dixit, Ashutosh
2021-08-04  0:18     ` Dixit, Ashutosh
2021-08-04  6:13       ` Zbigniew Kempczyński
2021-08-05 18:53         ` Dixit, Ashutosh
2021-08-06  1:15           ` Dixit, Ashutosh
2021-08-06  5:51             ` Zbigniew Kempczyński
2021-08-06  5:35           ` Zbigniew Kempczyński
2021-08-06  5:52             ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 03/52] lib/igt_gt: Add passing ahnd as an argument to igt_hang Zbigniew Kempczyński
2021-08-03 23:15   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 04/52] lib/intel_batchbuffer: Ensure relocation code will be called Zbigniew Kempczyński
2021-08-03 23:34   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 05/52] lib/intel_batchbuffer: Add allocator support in blitter fast copy Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 06/52] lib/intel_batchbuffer: Add allocator support in blitter src copy Zbigniew Kempczyński
2021-08-04 23:26   ` Dixit, Ashutosh
2021-08-04 23:44     ` Dixit, Ashutosh
2021-08-05  8:50       ` Zbigniew Kempczyński
2021-08-05  7:28     ` Zbigniew Kempczyński
2021-08-05 19:47       ` Dixit, Ashutosh
2021-08-06  6:17         ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 07/52] lib/intel_batchbuffer: Try to avoid relocations in blitting Zbigniew Kempczyński
2021-08-04 23:42   ` Dixit, Ashutosh
2021-08-05  7:34     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 08/52] lib/huc_copy: Extend huc copy prototype to pass allocator handle Zbigniew Kempczyński
2021-08-05  0:31   ` Dixit, Ashutosh
2021-08-05  7:44     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 09/52] tests/gem_bad_reloc: Skip on gens where relocations are not supported Zbigniew Kempczyński
2021-08-05  0:33   ` Dixit, Ashutosh
2021-08-05  7:46     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 10/52] tests/gem_busy: Adopt to use allocator Zbigniew Kempczyński
2021-08-05  2:07   ` Dixit, Ashutosh
2021-08-05  8:02     ` Zbigniew Kempczyński
2021-08-05 21:14       ` Dixit, Ashutosh
2021-08-06  6:56         ` Zbigniew Kempczyński
2021-08-05  8:14     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 11/52] tests/gem_create: " Zbigniew Kempczyński
2021-08-05  2:14   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 12/52] tests/gem_ctx_engines: " Zbigniew Kempczyński
2021-08-05  2:40   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 13/52] tests/gem_ctx_exec: " Zbigniew Kempczyński
2021-08-05  3:06   ` Dixit, Ashutosh
2021-08-05  8:46     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 14/52] tests/gem_ctx_freq: " Zbigniew Kempczyński
2021-08-05  6:07   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 15/52] tests/gem_ctx_isolation: " Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 16/52] tests/gem_ctx_param: " Zbigniew Kempczyński
2021-08-05  7:18   ` Dixit, Ashutosh
2021-08-05 10:19     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 17/52] tests/gem_eio: " Zbigniew Kempczyński
2021-08-05 21:44   ` Dixit, Ashutosh
2021-08-06  7:16     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 18/52] tests/gem_exec_async: " Zbigniew Kempczyński
2021-08-06  1:43   ` Dixit, Ashutosh
2021-08-06  7:33     ` Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 19/52] tests/gem_exec_big: Require relocation support Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 20/52] tests/gem_exec_capture: Support gens without relocations Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 21/52] tests/gem_exec_gttfill: Require relocation support Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 22/52] tests/gem_exec_store: Support gens without relocations Zbigniew Kempczyński
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 23/52] tests/gem_exec_suspend: Adopt to use allocator Zbigniew Kempczyński
2021-08-06  2:15   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 24/52] tests/gem_exec_parallel: Adopt to use alloctor Zbigniew Kempczyński
2021-08-06  4:39   ` Dixit, Ashutosh
2021-07-26 19:59 ` [igt-dev] [PATCH i-g-t v3 25/52] tests/gem_exec_params: Support gens without relocations Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 26/52] tests/gem_mmap: Add allocator support Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 27/52] tests/gem_mmap_gtt: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 28/52] tests/gem_mmap_offset: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 29/52] tests/gem_mmap_wc: Adopt to use allocator Zbigniew Kempczyński
2021-08-06  4:51   ` Dixit, Ashutosh
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 30/52] tests/gem_request_retire: Add allocator support Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 31/52] tests/gem_ringfill: Adopt to use allocator Zbigniew Kempczyński
2021-08-06  5:04   ` Dixit, Ashutosh
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 32/52] tests/gem_softpin: Exercise eviction with softpinning Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 33/52] tests/gem_spin_batch: Adopt to use allocator Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 34/52] tests/gem_tiled_fence_blits: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 35/52] tests/gem_unfence_active_buffers: " Zbigniew Kempczyński
2021-08-06  5:44   ` Dixit, Ashutosh
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 36/52] tests/gem_unref_active_buffers: " Zbigniew Kempczyński
2021-08-06  5:46   ` Dixit, Ashutosh
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 37/52] tests/gem_wait: " Zbigniew Kempczyński
2021-08-06  5:48   ` Dixit, Ashutosh
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 38/52] tests/gem_watchdog: Adopt to use no-reloc Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 39/52] tests/gem_workarounds: Adopt to use allocator Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 40/52] tests/i915_hangman: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 41/52] tests/i915_module_load: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 42/52] tests/i915_pm_rc6_residency: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 43/52] tests/i915_pm_rpm: Adopt to use no-reloc Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 44/52] tests/i915_pm_rps: Alter " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 45/52] tests/kms_busy: Adopt to use allocator Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 46/52] tests/kms_cursor_legacy: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 47/52] tests/kms_flip: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 48/52] tests/kms_vblank: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 49/52] tests/perf_pmu: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 50/52] tests/sysfs_heartbeat_interval: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 51/52] tests/sysfs_preempt_timeout: " Zbigniew Kempczyński
2021-07-26 20:00 ` [igt-dev] [PATCH i-g-t v3 52/52] tests/sysfs_timeslice_duration: " Zbigniew Kempczyński
2021-07-26 21:33 ` [igt-dev] ✓ Fi.CI.BAT: success for Add allocator support in IGT (rev3) Patchwork
2021-07-27  2:14 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
