* [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof
@ 2023-11-16 14:53 Francois Dugast
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op Francois Dugast
                   ` (16 more replies)
  0 siblings, 17 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev

This aligns with the kernel series:
https://patchwork.freedesktop.org/series/126535/

Francois Dugast (5):
  drm-uapi/xe: Extend drm_xe_vm_bind_op
  drm-uapi/xe: Reject bo creation of unaligned size
  drm-uapi/xe: Align on a common way to return arrays (memory regions)
  drm-uapi/xe: Align on a common way to return arrays (gt)
  drm-uapi/xe: Align on a common way to return arrays (engines)

Rodrigo Vivi (8):
  xe_ioctl: Converge bo_create to the most used version
  xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create
  xe_query: Add missing include
  xe_query: Kill visible_vram_if_possible
  drm-uapi/xe: Separate bo_create placement from flags
  xe: s/hw_engine/engine
  drm-uapi/xe: Align with drm_xe_query_engine_info
  drm-uapi/xe: Add Tile ID information to the GT info query

 benchmarks/gem_wsim.c                    |  13 +-
 include/drm-uapi/xe_drm.h                | 158 ++++++++++++-------
 lib/igt_draw.c                           |   7 +-
 lib/igt_fb.c                             |   7 +-
 lib/intel_batchbuffer.c                  |   8 +-
 lib/intel_blt.c                          |   4 +-
 lib/intel_bufops.c                       |   2 +-
 lib/xe/xe_ioctl.c                        |  44 +++---
 lib/xe/xe_ioctl.h                        |   8 +-
 lib/xe/xe_query.c                        | 191 +++++++++--------------
 lib/xe/xe_query.h                        |  37 ++---
 lib/xe/xe_spin.c                         |  15 +-
 lib/xe/xe_util.c                         |   6 +-
 tests/intel-ci/xe-fast-feedback.testlist |   2 +-
 tests/intel/api_intel_allocator.c        |   4 +-
 tests/intel/kms_big_fb.c                 |  22 +--
 tests/intel/kms_ccs.c                    |   5 +-
 tests/intel/xe_ccs.c                     |  12 +-
 tests/intel/xe_copy_basic.c              |   8 +-
 tests/intel/xe_create.c                  |  15 +-
 tests/intel/xe_dma_buf_sync.c            |   7 +-
 tests/intel/xe_drm_fdinfo.c              |  21 +--
 tests/intel/xe_evict.c                   |  48 +++---
 tests/intel/xe_evict_ccs.c               |   7 +-
 tests/intel/xe_exec_balancer.c           |  37 +++--
 tests/intel/xe_exec_basic.c              |  18 +--
 tests/intel/xe_exec_compute_mode.c       |  13 +-
 tests/intel/xe_exec_fault_mode.c         |  20 +--
 tests/intel/xe_exec_reset.c              |  65 ++++----
 tests/intel/xe_exec_store.c              |  31 ++--
 tests/intel/xe_exec_threads.c            |  39 ++---
 tests/intel/xe_exercise_blt.c            |   4 +-
 tests/intel/xe_guc_pc.c                  |   9 +-
 tests/intel/xe_huc_copy.c                |   2 +-
 tests/intel/xe_intel_bb.c                |   5 +-
 tests/intel/xe_mmap.c                    |  65 ++++----
 tests/intel/xe_noexec_ping_pong.c        |  11 +-
 tests/intel/xe_perf_pmu.c                |  10 +-
 tests/intel/xe_pm.c                      |  39 ++---
 tests/intel/xe_pm_residency.c            |   7 +-
 tests/intel/xe_prime_self_import.c       |  63 ++++++--
 tests/intel/xe_query.c                   |  80 +++++-----
 tests/intel/xe_spin_batch.c              |  12 +-
 tests/intel/xe_vm.c                      | 108 +++++++------
 tests/intel/xe_waitfence.c               |  22 ++-
 tests/kms_addfb_basic.c                  |   2 +-
 tests/kms_getfb.c                        |   2 +-
 tests/kms_plane.c                        |   2 +-
 48 files changed, 699 insertions(+), 618 deletions(-)

-- 
2.34.1

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-21 17:01   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version Francois Dugast
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Extend drm_xe_vm_bind_op")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index af32ec161..5ef16f16e 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -613,6 +613,9 @@ struct drm_xe_vm_destroy {
 };
 
 struct drm_xe_vm_bind_op {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
 	/**
 	 * @obj: GEM object to operate on, MBZ for MAP_USERPTR, MBZ for UNMAP
 	 */
-- 
2.34.1


* [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-21 17:13   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 03/13] xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create Francois Dugast
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Let's unify the call instead of having two separate
options for the same goal.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 lib/xe/xe_ioctl.c           | 15 ---------------
 lib/xe/xe_ioctl.h           |  1 -
 tests/intel/xe_perf_pmu.c   |  4 ++--
 tests/intel/xe_spin_batch.c |  2 +-
 tests/intel/xe_vm.c         |  9 +++++----
 5 files changed, 8 insertions(+), 23 deletions(-)

diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 738c4ffdb..78d431ab2 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -253,21 +253,6 @@ uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags)
 	return handle;
 }
 
-uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
-{
-	struct drm_xe_gem_create create = {
-		.vm_id = vm,
-		.size = size,
-		.flags = vram_if_possible(fd, gt),
-	};
-	int err;
-
-	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
-	igt_assert_eq(err, 0);
-
-	return create.handle;
-}
-
 uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
 {
 	struct drm_xe_engine_class_instance instance = {
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index a9171bcf7..fb191d98f 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -67,7 +67,6 @@ void xe_vm_destroy(int fd, uint32_t vm);
 uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
 			      uint32_t *handle);
 uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags);
-uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
 uint32_t xe_exec_queue_create(int fd, uint32_t vm,
 			  struct drm_xe_engine_class_instance *instance,
 			  uint64_t ext);
diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
index e9d05cf2b..2c549f778 100644
--- a/tests/intel/xe_perf_pmu.c
+++ b/tests/intel/xe_perf_pmu.c
@@ -103,7 +103,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, eci->gt_id, vm, bo_size);
+	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
 	spin = xe_bo_map(fd, bo, bo_size);
 
 	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
@@ -223,7 +223,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, gt, vm, bo_size);
+	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < num_placements; i++) {
diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
index 6ab604d9b..261fde9af 100644
--- a/tests/intel/xe_spin_batch.c
+++ b/tests/intel/xe_spin_batch.c
@@ -169,7 +169,7 @@ static void xe_spin_fixed_duration(int fd)
 	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
 	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
 	bo_size = ALIGN(sizeof(*spin) + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
-	bo = xe_bo_create(fd, 0, vm, bo_size);
+	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
 	spin = xe_bo_map(fd, bo, bo_size);
 	spin_addr = intel_allocator_alloc_with_strategy(ahnd, bo, bo_size, 0,
 							ALLOC_STRATEGY_LOW_TO_HIGH);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 05e8e7516..eedd05b57 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -267,7 +267,7 @@ static void test_partial_unbinds(int fd)
 {
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	size_t bo_size = 3 * xe_get_default_alignment(fd);
-	uint32_t bo = xe_bo_create(fd, 0, vm, bo_size);
+	uint32_t bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
 	uint64_t unbind_size = bo_size / 3;
 	uint64_t addr = 0x1a0000;
 
@@ -316,7 +316,7 @@ static void unbind_all(int fd, int n_vmas)
 	};
 
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-	bo = xe_bo_create(fd, 0, vm, bo_size);
+	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
 
 	for (i = 0; i < n_vmas; ++i)
 		xe_vm_bind_async(fd, vm, 0, bo, 0, addr + i * bo_size,
@@ -362,6 +362,7 @@ static void userptr_invalid(int fd)
 	xe_vm_destroy(fd, vm);
 }
 
+
 /**
  * SUBTEST: shared-%s-page
  * Description: Test shared arg[1] page
@@ -1575,9 +1576,9 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(map0 != MAP_FAILED);
 		igt_assert(map1 != MAP_FAILED);
 	} else {
-		bo0 = xe_bo_create(fd, eci->gt_id, vm, bo_size);
+		bo0 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
 		map0 = xe_bo_map(fd, bo0, bo_size);
-		bo1 = xe_bo_create(fd, eci->gt_id, vm, bo_size);
+		bo1 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
 		map1 = xe_bo_map(fd, bo1, bo_size);
 	}
 	memset(map0, 0, bo_size);
-- 
2.34.1


* [igt-dev] [PATCH v1 03/13] xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op Francois Dugast
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-21 17:24   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 04/13] xe_query: Add missing include Francois Dugast
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Now that we have only one variant, we can unify on the
simplest version.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 benchmarks/gem_wsim.c              |  2 +-
 lib/igt_draw.c                     |  6 ++---
 lib/igt_fb.c                       |  6 ++---
 lib/intel_batchbuffer.c            |  6 ++---
 lib/intel_blt.c                    |  2 +-
 lib/intel_bufops.c                 |  2 +-
 lib/xe/xe_ioctl.c                  |  8 +++---
 lib/xe/xe_ioctl.h                  |  6 ++---
 lib/xe/xe_spin.c                   |  8 +++---
 tests/intel/api_intel_allocator.c  |  4 +--
 tests/intel/kms_big_fb.c           | 22 ++++++++--------
 tests/intel/kms_ccs.c              |  4 +--
 tests/intel/xe_ccs.c               | 12 ++++-----
 tests/intel/xe_copy_basic.c        |  8 +++---
 tests/intel/xe_dma_buf_sync.c      |  4 +--
 tests/intel/xe_drm_fdinfo.c        |  6 ++---
 tests/intel/xe_evict.c             | 40 +++++++++++++++---------------
 tests/intel/xe_evict_ccs.c         |  6 ++---
 tests/intel/xe_exec_balancer.c     |  6 ++---
 tests/intel/xe_exec_basic.c        |  3 +--
 tests/intel/xe_exec_compute_mode.c |  4 +--
 tests/intel/xe_exec_fault_mode.c   | 10 ++++----
 tests/intel/xe_exec_reset.c        | 16 ++++++------
 tests/intel/xe_exec_store.c        | 12 ++++-----
 tests/intel/xe_exec_threads.c      | 12 ++++-----
 tests/intel/xe_exercise_blt.c      |  4 +--
 tests/intel/xe_guc_pc.c            |  4 +--
 tests/intel/xe_intel_bb.c          |  2 +-
 tests/intel/xe_mmap.c              | 32 ++++++++++++------------
 tests/intel/xe_noexec_ping_pong.c  |  4 +--
 tests/intel/xe_perf_pmu.c          |  4 +--
 tests/intel/xe_pm.c                |  6 ++---
 tests/intel/xe_pm_residency.c      |  4 +--
 tests/intel/xe_prime_self_import.c | 28 ++++++++++-----------
 tests/intel/xe_spin_batch.c        |  2 +-
 tests/intel/xe_vm.c                | 35 +++++++++++++-------------
 tests/intel/xe_waitfence.c         | 20 +++++++--------
 tests/kms_addfb_basic.c            |  2 +-
 tests/kms_getfb.c                  |  2 +-
 39 files changed, 181 insertions(+), 183 deletions(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index df4850086..d6d3deb5f 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -1734,7 +1734,7 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
 	struct dep_entry *dep;
 	int i;
 
-	w->bb_handle = xe_bo_create_flags(fd, vm->id, PAGE_SIZE,
+	w->bb_handle = xe_bo_create(fd, vm->id, PAGE_SIZE,
 				visible_vram_if_possible(fd, eq->hwe_list[0].gt_id));
 	w->xe.data = xe_bo_map(fd, w->bb_handle, PAGE_SIZE);
 	w->xe.exec.address =
diff --git a/lib/igt_draw.c b/lib/igt_draw.c
index 9a7664a37..5935eb058 100644
--- a/lib/igt_draw.c
+++ b/lib/igt_draw.c
@@ -795,9 +795,9 @@ static void draw_rect_render(int fd, struct cmd_data *cmd_data,
 	if (is_i915_device(fd))
 		tmp.handle = gem_create(fd, tmp.size);
 	else
-		tmp.handle = xe_bo_create_flags(fd, 0,
-						ALIGN(tmp.size, xe_get_default_alignment(fd)),
-						visible_vram_if_possible(fd, 0));
+		tmp.handle = xe_bo_create(fd, 0,
+					  ALIGN(tmp.size, xe_get_default_alignment(fd)),
+					  visible_vram_if_possible(fd, 0));
 
 	tmp.stride = rect->w * pixel_size;
 	tmp.bpp = buf->bpp;
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index e70d2e3ce..f96dca7a4 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -1205,8 +1205,8 @@ static int create_bo_for_fb(struct igt_fb *fb, bool prefer_sysmem)
 			/* If we can't use fences, we won't use ggtt detiling later. */
 			igt_assert(err == 0 || err == -EOPNOTSUPP);
 		} else if (is_xe_device(fd)) {
-			fb->gem_handle = xe_bo_create_flags(fd, 0, fb->size,
-							visible_vram_if_possible(fd, 0));
+			fb->gem_handle = xe_bo_create(fd, 0, fb->size,
+						      visible_vram_if_possible(fd, 0));
 		} else if (is_vc4_device(fd)) {
 			fb->gem_handle = igt_vc4_create_bo(fd, fb->size);
 
@@ -2903,7 +2903,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
 
 		bb_size = ALIGN(bb_size + xe_cs_prefetch_size(dst_fb->fd),
 				xe_get_default_alignment(dst_fb->fd));
-		xe_bb = xe_bo_create_flags(dst_fb->fd, 0, bb_size, mem_region);
+		xe_bb = xe_bo_create(dst_fb->fd, 0, bb_size, mem_region);
 	}
 
 	for (int i = 0; i < dst_fb->num_planes - dst_cc; i++) {
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index f12d6219d..7fa4e3487 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -945,7 +945,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 
 		ibb->alignment = xe_get_default_alignment(fd);
 		size = ALIGN(size, ibb->alignment);
-		ibb->handle = xe_bo_create_flags(fd, 0, size, visible_vram_if_possible(fd, 0));
+		ibb->handle = xe_bo_create(fd, 0, size, visible_vram_if_possible(fd, 0));
 
 		/* Limit to 48-bit due to MI_* address limitation */
 		ibb->gtt_size = 1ull << min_t(uint32_t, xe_va_bits(fd), 48);
@@ -1403,8 +1403,8 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	if (ibb->driver == INTEL_DRIVER_I915)
 		ibb->handle = gem_create(ibb->fd, ibb->size);
 	else
-		ibb->handle = xe_bo_create_flags(ibb->fd, 0, ibb->size,
-						 visible_vram_if_possible(ibb->fd, 0));
+		ibb->handle = xe_bo_create(ibb->fd, 0, ibb->size,
+					   visible_vram_if_possible(ibb->fd, 0));
 
 	/* Reacquire offset for RELOC and SIMPLE */
 	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
diff --git a/lib/intel_blt.c b/lib/intel_blt.c
index 2edcd72f3..36830fb3e 100644
--- a/lib/intel_blt.c
+++ b/lib/intel_blt.c
@@ -1807,7 +1807,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region,
 			flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 
 		size = ALIGN(size, xe_get_default_alignment(blt->fd));
-		handle = xe_bo_create_flags(blt->fd, 0, size, flags);
+		handle = xe_bo_create(blt->fd, 0, size, flags);
 	} else {
 		igt_assert(__gem_create_in_memory_regions(blt->fd, &handle,
 							  &size, region) == 0);
diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index 2c91adb88..6f3a77f47 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -920,7 +920,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 				igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
 		} else {
 			size = ALIGN(size, xe_get_default_alignment(bops->fd));
-			buf->handle = xe_bo_create_flags(bops->fd, 0, size, region);
+			buf->handle = xe_bo_create(bops->fd, 0, size, region);
 		}
 	}
 
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 78d431ab2..63fa2ae25 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -226,8 +226,8 @@ void xe_vm_destroy(int fd, uint32_t vm)
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_DESTROY, &destroy), 0);
 }
 
-uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
-			      uint32_t *handle)
+uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
+			uint32_t *handle)
 {
 	struct drm_xe_gem_create create = {
 		.vm_id = vm,
@@ -244,11 +244,11 @@ uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags
 	return 0;
 }
 
-uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags)
+uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags)
 {
 	uint32_t handle;
 
-	igt_assert_eq(__xe_bo_create_flags(fd, vm, size, flags, &handle), 0);
+	igt_assert_eq(__xe_bo_create(fd, vm, size, flags, &handle), 0);
 
 	return handle;
 }
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index fb191d98f..1ec29c2c5 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -64,9 +64,9 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
 			    uint32_t bo, struct drm_xe_sync *sync,
 			    uint32_t num_syncs);
 void xe_vm_destroy(int fd, uint32_t vm);
-uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
-			      uint32_t *handle);
-uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags);
+uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
+			uint32_t *handle);
+uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags);
 uint32_t xe_exec_queue_create(int fd, uint32_t vm,
 			  struct drm_xe_engine_class_instance *instance,
 			  uint64_t ext);
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index cfc663acc..828938434 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -219,8 +219,8 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
 			spin->engine = xe_exec_queue_create_class(fd, spin->vm, DRM_XE_ENGINE_CLASS_COPY);
 	}
 
-	spin->handle = xe_bo_create_flags(fd, spin->vm, bo_size,
-					  visible_vram_if_possible(fd, 0));
+	spin->handle = xe_bo_create(fd, spin->vm, bo_size,
+				    visible_vram_if_possible(fd, 0));
 	xe_spin = xe_bo_map(fd, spin->handle, bo_size);
 	addr = intel_allocator_alloc_with_strategy(ahnd, spin->handle, bo_size, 0, ALLOC_STRATEGY_LOW_TO_HIGH);
 	xe_vm_bind_sync(fd, spin->vm, spin->handle, 0, addr, bo_size);
@@ -298,8 +298,8 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
 
 	vm = xe_vm_create(fd, 0, 0);
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, hwe->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, hwe->gt_id));
 	spin = xe_bo_map(fd, bo, 0x1000);
 
 	xe_vm_bind_sync(fd, vm, bo, 0, addr, bo_size);
diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c
index f3fcf8a34..158fd86a1 100644
--- a/tests/intel/api_intel_allocator.c
+++ b/tests/intel/api_intel_allocator.c
@@ -468,8 +468,8 @@ static void __simple_allocs(int fd)
 
 		size = (rand() % 4 + 1) * 0x1000;
 		if (is_xe)
-			handles[i] = xe_bo_create_flags(fd, 0, size,
-							system_memory(fd));
+			handles[i] = xe_bo_create(fd, 0, size,
+						  system_memory(fd));
 		else
 			handles[i] = gem_create(fd, size);
 
diff --git a/tests/intel/kms_big_fb.c b/tests/intel/kms_big_fb.c
index 2c7b24fca..9c2b8dc79 100644
--- a/tests/intel/kms_big_fb.c
+++ b/tests/intel/kms_big_fb.c
@@ -777,10 +777,10 @@ test_size_overflow(data_t *data)
 	if (is_i915_device(data->drm_fd))
 		bo = gem_buffer_create_fb_obj(data->drm_fd, (1ULL << 32) - 4096);
 	else
-		bo = xe_bo_create_flags(data->drm_fd, 0,
-					ALIGN(((1ULL << 32) - 4096),
-					      xe_get_default_alignment(data->drm_fd)),
-					vram_if_possible(data->drm_fd, 0));
+		bo = xe_bo_create(data->drm_fd, 0,
+				  ALIGN(((1ULL << 32) - 4096),
+					xe_get_default_alignment(data->drm_fd)),
+				  vram_if_possible(data->drm_fd, 0));
 	igt_require(bo);
 
 	ret = __kms_addfb(data->drm_fd, bo,
@@ -837,10 +837,10 @@ test_size_offset_overflow(data_t *data)
 	if (is_i915_device(data->drm_fd))
 		bo = gem_buffer_create_fb_obj(data->drm_fd, (1ULL << 32) - 4096);
 	else
-		bo = xe_bo_create_flags(data->drm_fd, 0,
-					ALIGN(((1ULL << 32) - 4096),
-					      xe_get_default_alignment(data->drm_fd)),
-					vram_if_possible(data->drm_fd, 0));
+		bo = xe_bo_create(data->drm_fd, 0,
+				  ALIGN(((1ULL << 32) - 4096),
+					xe_get_default_alignment(data->drm_fd)),
+				  vram_if_possible(data->drm_fd, 0));
 	igt_require(bo);
 
 	offsets[0] = 0;
@@ -926,9 +926,9 @@ test_addfb(data_t *data)
 	if (is_i915_device(data->drm_fd))
 		bo = gem_buffer_create_fb_obj(data->drm_fd, size);
 	else
-		bo = xe_bo_create_flags(data->drm_fd, 0,
-					ALIGN(size, xe_get_default_alignment(data->drm_fd)),
-					vram_if_possible(data->drm_fd, 0));
+		bo = xe_bo_create(data->drm_fd, 0,
+				  ALIGN(size, xe_get_default_alignment(data->drm_fd)),
+				  vram_if_possible(data->drm_fd, 0));
 	igt_require(bo);
 
 	if (is_i915_device(data->drm_fd) && intel_display_ver(data->devid) < 4)
diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c
index 93e837b84..337afc00c 100644
--- a/tests/intel/kms_ccs.c
+++ b/tests/intel/kms_ccs.c
@@ -434,8 +434,8 @@ static void test_bad_ccs_plane(data_t *data, int width, int height, int ccs_plan
 	if (data->flags & TEST_BAD_CCS_HANDLE) {
 		bad_ccs_bo = is_i915_device(data->drm_fd) ?
 				gem_create(data->drm_fd, fb.size) :
-				xe_bo_create_flags(data->drm_fd, 0, fb.size,
-						   visible_vram_if_possible(data->drm_fd, 0));
+				xe_bo_create(data->drm_fd, 0, fb.size,
+					     visible_vram_if_possible(data->drm_fd, 0));
 		f.handles[ccs_plane] = bad_ccs_bo;
 	}
 
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index 465f67e23..ceecba416 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -102,8 +102,8 @@ static void surf_copy(int xe,
 
 	igt_assert(mid->compression);
 	ccscopy = (uint32_t *) malloc(ccssize);
-	ccs = xe_bo_create_flags(xe, 0, ccssize, sysmem);
-	ccs2 = xe_bo_create_flags(xe, 0, ccssize, sysmem);
+	ccs = xe_bo_create(xe, 0, ccssize, sysmem);
+	ccs2 = xe_bo_create(xe, 0, ccssize, sysmem);
 
 	blt_ctrl_surf_copy_init(xe, &surf);
 	surf.print_bb = param.print_bb;
@@ -111,7 +111,7 @@ static void surf_copy(int xe,
 				 uc_mocs, BLT_INDIRECT_ACCESS);
 	blt_set_ctrl_surf_object(&surf.dst, ccs, sysmem, ccssize, uc_mocs, DIRECT_ACCESS);
 	bb_size = xe_get_default_alignment(xe);
-	bb1 = xe_bo_create_flags(xe, 0, bb_size, sysmem);
+	bb1 = xe_bo_create(xe, 0, bb_size, sysmem);
 	blt_set_batch(&surf.bb, bb1, bb_size, sysmem);
 	blt_ctrl_surf_copy(xe, ctx, NULL, ahnd, &surf);
 	intel_ctx_xe_sync(ctx, true);
@@ -166,7 +166,7 @@ static void surf_copy(int xe,
 	blt_set_copy_object(&blt.dst, dst);
 	blt_set_object_ext(&ext.src, mid->compression_type, mid->x2, mid->y2, SURFACE_TYPE_2D);
 	blt_set_object_ext(&ext.dst, 0, dst->x2, dst->y2, SURFACE_TYPE_2D);
-	bb2 = xe_bo_create_flags(xe, 0, bb_size, sysmem);
+	bb2 = xe_bo_create(xe, 0, bb_size, sysmem);
 	blt_set_batch(&blt.bb, bb2, bb_size, sysmem);
 	blt_block_copy(xe, ctx, NULL, ahnd, &blt, &ext);
 	intel_ctx_xe_sync(ctx, true);
@@ -297,7 +297,7 @@ static void block_copy(int xe,
 	uint8_t uc_mocs = intel_get_uc_mocs_index(xe);
 	int result;
 
-	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1);
 
 	if (!blt_uses_extended_block_copy(xe))
 		pext = NULL;
@@ -418,7 +418,7 @@ static void block_multicopy(int xe,
 	uint8_t uc_mocs = intel_get_uc_mocs_index(xe);
 	int result;
 
-	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1);
 
 	if (!blt_uses_extended_block_copy(xe))
 		pext3 = NULL;
diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
index 191c29155..715f7d3b5 100644
--- a/tests/intel/xe_copy_basic.c
+++ b/tests/intel/xe_copy_basic.c
@@ -52,7 +52,7 @@ mem_copy(int fd, uint32_t src_handle, uint32_t dst_handle, const intel_ctx_t *ct
 	uint32_t bb;
 	int result;
 
-	bb = xe_bo_create_flags(fd, 0, bb_size, region);
+	bb = xe_bo_create(fd, 0, bb_size, region);
 
 	blt_mem_init(fd, &mem);
 	blt_set_mem_object(&mem.src, src_handle, size, 0, width, height,
@@ -102,7 +102,7 @@ mem_set(int fd, uint32_t dst_handle, const intel_ctx_t *ctx, uint32_t size,
 	uint32_t bb;
 	uint8_t *result;
 
-	bb = xe_bo_create_flags(fd, 0, bb_size, region);
+	bb = xe_bo_create(fd, 0, bb_size, region);
 	blt_mem_init(fd, &mem);
 	blt_set_mem_object(&mem.dst, dst_handle, size, 0, width, height, region,
 			   dst_mocs, M_LINEAR, COMPRESSION_DISABLED);
@@ -132,8 +132,8 @@ static void copy_test(int fd, uint32_t size, enum blt_cmd_type cmd, uint32_t reg
 	uint32_t bo_size = ALIGN(size, xe_get_default_alignment(fd));
 	intel_ctx_t *ctx;
 
-	src_handle = xe_bo_create_flags(fd, 0, bo_size, region);
-	dst_handle = xe_bo_create_flags(fd, 0, bo_size, region);
+	src_handle = xe_bo_create(fd, 0, bo_size, region);
+	dst_handle = xe_bo_create(fd, 0, bo_size, region);
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	exec_queue = xe_exec_queue_create(fd, vm, &inst, 0);
 	ctx = intel_ctx_xe(fd, vm, exec_queue, 0, 0, 0);
diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
index 0d835dddb..ac9d9d767 100644
--- a/tests/intel/xe_dma_buf_sync.c
+++ b/tests/intel/xe_dma_buf_sync.c
@@ -119,8 +119,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd[0]),
 			xe_get_default_alignment(fd[0]));
 	for (i = 0; i < n_bo; ++i) {
-		bo[i] = xe_bo_create_flags(fd[0], 0, bo_size,
-					   visible_vram_if_possible(fd[0], hwe0->gt_id));
+		bo[i] = xe_bo_create(fd[0], 0, bo_size,
+				     visible_vram_if_possible(fd[0], hwe0->gt_id));
 		dma_buf_fd[i] = prime_handle_to_fd(fd[0], bo[i]);
 		import_bo[i] = prime_fd_to_handle(fd[1], dma_buf_fd[i]);
 
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 4ef30cf49..8f737a533 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -85,7 +85,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
 		pre_size = info.region_mem[memregion->instance + 1].active;
 
-		bo = xe_bo_create_flags(fd, vm, bo_size, region);
+		bo = xe_bo_create(fd, vm, bo_size, region);
 		data = xe_bo_map(fd, bo, bo_size);
 
 		for (i = 0; i < N_EXEC_QUEUES; i++) {
@@ -185,7 +185,7 @@ static void test_shared(int xe)
 		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
 		pre_size = info.region_mem[memregion->instance + 1].shared;
 
-		bo = xe_bo_create_flags(xe, 0, BO_SIZE, region);
+		bo = xe_bo_create(xe, 0, BO_SIZE, region);
 
 		flink.handle = bo;
 		ret = igt_ioctl(xe, DRM_IOCTL_GEM_FLINK, &flink);
@@ -232,7 +232,7 @@ static void test_total_resident(int xe)
 		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
 		pre_size = info.region_mem[memregion->instance + 1].shared;
 
-		handle = xe_bo_create_flags(xe, vm, BO_SIZE, region);
+		handle = xe_bo_create(xe, vm, BO_SIZE, region);
 		xe_vm_bind_sync(xe, vm, handle, 0, addr, BO_SIZE);
 
 		ret = igt_parse_drm_fdinfo(xe, &info, NULL, 0, NULL, 0);
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 6d953e58b..a9d501d5f 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -99,18 +99,18 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
                                 i < n_execs / 8 ? 0 : vm;
 
 			if (flags & MULTI_VM) {
-				__bo = bo[i] = xe_bo_create_flags(fd, 0,
-								  bo_size,
-								  visible_vram_memory(fd, eci->gt_id));
+				__bo = bo[i] = xe_bo_create(fd, 0,
+							    bo_size,
+							    visible_vram_memory(fd, eci->gt_id));
 			} else if (flags & THREADED) {
-				__bo = bo[i] = xe_bo_create_flags(fd, vm,
-								  bo_size,
-								  visible_vram_memory(fd, eci->gt_id));
+				__bo = bo[i] = xe_bo_create(fd, vm,
+							    bo_size,
+							    visible_vram_memory(fd, eci->gt_id));
 			} else {
-				__bo = bo[i] = xe_bo_create_flags(fd, _vm,
-								  bo_size,
-								  visible_vram_memory(fd, eci->gt_id) |
-								  system_memory(fd));
+				__bo = bo[i] = xe_bo_create(fd, _vm,
+							    bo_size,
+							    visible_vram_memory(fd, eci->gt_id) |
+							    system_memory(fd));
 			}
 		} else {
 			__bo = bo[i % (n_execs / 2)];
@@ -275,18 +275,18 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
                                 i < n_execs / 8 ? 0 : vm;
 
 			if (flags & MULTI_VM) {
-				__bo = bo[i] = xe_bo_create_flags(fd, 0,
-								  bo_size,
-								  visible_vram_memory(fd, eci->gt_id));
+				__bo = bo[i] = xe_bo_create(fd, 0,
+							    bo_size,
+							    visible_vram_memory(fd, eci->gt_id));
 			} else if (flags & THREADED) {
-				__bo = bo[i] = xe_bo_create_flags(fd, vm,
-								  bo_size,
-								  visible_vram_memory(fd, eci->gt_id));
+				__bo = bo[i] = xe_bo_create(fd, vm,
+							    bo_size,
+							    visible_vram_memory(fd, eci->gt_id));
 			} else {
-				__bo = bo[i] = xe_bo_create_flags(fd, _vm,
-								  bo_size,
-								  visible_vram_memory(fd, eci->gt_id) |
-								  system_memory(fd));
+				__bo = bo[i] = xe_bo_create(fd, _vm,
+							    bo_size,
+							    visible_vram_memory(fd, eci->gt_id) |
+							    system_memory(fd));
 			}
 		} else {
 			__bo = bo[i % (n_execs / 2)];
diff --git a/tests/intel/xe_evict_ccs.c b/tests/intel/xe_evict_ccs.c
index 1f5c795ef..1dc12eedd 100644
--- a/tests/intel/xe_evict_ccs.c
+++ b/tests/intel/xe_evict_ccs.c
@@ -82,7 +82,7 @@ static void copy_obj(struct blt_copy_data *blt,
 	w = src_obj->x2;
 	h = src_obj->y2;
 
-	bb = xe_bo_create_flags(fd, 0, bb_size, visible_vram_memory(fd, 0));
+	bb = xe_bo_create(fd, 0, bb_size, visible_vram_memory(fd, 0));
 
 	blt->color_depth = CD_32bit;
 	blt->print_bb = params.print_bb;
@@ -274,8 +274,8 @@ static void evict_single(int fd, int child, const struct config *config)
 		}
 
 		if (config->flags & TEST_SIMPLE) {
-			big_obj = xe_bo_create_flags(fd, vm, kb_left * SZ_1K,
-						     vram_memory(fd, 0));
+			big_obj = xe_bo_create(fd, vm, kb_left * SZ_1K,
+					       vram_memory(fd, 0));
 			break;
 		}
 
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 8a0165b8c..da34e117d 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -70,7 +70,7 @@ static void test_all_active(int fd, int gt, int class)
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < num_placements; i++) {
@@ -224,7 +224,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		}
 		memset(data, 0, bo_size);
 	} else {
-		bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 
@@ -452,7 +452,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index a401f0165..841696b68 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -140,8 +140,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & DEFER_ALLOC)
 			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
 
-		bo = xe_bo_create_flags(fd, n_vm == 1 ? vm[0] : 0,
-					bo_size, bo_flags);
+		bo = xe_bo_create(fd, n_vm == 1 ? vm[0] : 0, bo_size, bo_flags);
 		if (!(flags & DEFER_BIND))
 			data = xe_bo_map(fd, bo, bo_size);
 	}
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 20d3fc6e8..beb962f79 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -141,8 +141,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create_flags(fd, flags & VM_FOR_BO ? vm : 0,
-					bo_size, visible_vram_if_possible(fd, eci->gt_id));
+		bo = xe_bo_create(fd, flags & VM_FOR_BO ? vm : 0,
+				  bo_size, visible_vram_if_possible(fd, eci->gt_id));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 92d552f97..903ad430d 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -151,12 +151,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		}
 	} else {
 		if (flags & PREFETCH)
-			bo = xe_bo_create_flags(fd, 0, bo_size,
-						all_memory_regions(fd) |
-						visible_vram_if_possible(fd, 0));
+			bo = xe_bo_create(fd, 0, bo_size,
+					  all_memory_regions(fd) |
+					  visible_vram_if_possible(fd, 0));
 		else
-			bo = xe_bo_create_flags(fd, 0, bo_size,
-						visible_vram_if_possible(fd, eci->gt_id));
+			bo = xe_bo_create(fd, 0, bo_size,
+					  visible_vram_if_possible(fd, eci->gt_id));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 195e62911..704690e83 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -50,8 +50,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, eci->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, eci->gt_id));
 	spin = xe_bo_map(fd, bo, bo_size);
 
 	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
@@ -181,7 +181,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
@@ -367,8 +367,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, eci->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, eci->gt_id));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
@@ -534,8 +534,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, eci->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, eci->gt_id));
 	data = xe_bo_map(fd, bo, bo_size);
 	memset(data, 0, bo_size);
 
@@ -661,7 +661,7 @@ static void submit_jobs(struct gt_thread_data *t)
 	uint32_t bo;
 	uint32_t *data;
 
-	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
 	data = xe_bo_map(fd, bo, bo_size);
 	data[0] = MI_BATCH_BUFFER_END;
 
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 9c14bfd14..bcc4de8d0 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -81,8 +81,8 @@ static void store(int fd)
 			xe_get_default_alignment(fd));
 
 	hw_engine = xe_hw_engine(fd, 1);
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, hw_engine->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, hw_engine->gt_id));
 
 	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
 	data = xe_bo_map(fd, bo, bo_size);
@@ -150,8 +150,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 	sync[0].handle = syncobj_create(fd, 0);
 
 	for (i = 0; i < count; i++) {
-		bo[i] = xe_bo_create_flags(fd, vm, bo_size,
-					       visible_vram_if_possible(fd, eci->gt_id));
+		bo[i] = xe_bo_create(fd, vm, bo_size,
+				     visible_vram_if_possible(fd, eci->gt_id));
 		bo_map[i] = xe_bo_map(fd, bo[i], bo_size);
 		dst_offset[i] = intel_allocator_alloc_with_strategy(ahnd, bo[i],
 								    bo_size, 0,
@@ -235,8 +235,8 @@ static void store_all(int fd, int gt, int class)
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, 0));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	xe_for_each_hw_engine(fd, hwe) {
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index bb979b18c..a9b0c0b09 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -106,8 +106,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create_flags(fd, vm, bo_size,
-					visible_vram_if_possible(fd, gt));
+		bo = xe_bo_create(fd, vm, bo_size,
+				  visible_vram_if_possible(fd, gt));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
@@ -307,8 +307,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create_flags(fd, 0, bo_size,
-					visible_vram_if_possible(fd, eci->gt_id));
+		bo = xe_bo_create(fd, 0, bo_size,
+				  visible_vram_if_possible(fd, eci->gt_id));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
@@ -510,8 +510,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create_flags(fd, vm, bo_size,
-					visible_vram_if_possible(fd, eci->gt_id));
+		bo = xe_bo_create(fd, vm, bo_size,
+				  visible_vram_if_possible(fd, eci->gt_id));
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index fd310138d..9c69be3ef 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -125,7 +125,7 @@ static void fast_copy_emit(int xe, const intel_ctx_t *ctx,
 	uint32_t bb, width = param.width, height = param.height;
 	int result;
 
-	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1);
 
 	blt_copy_init(xe, &bltinit);
 	src = blt_create_object(&bltinit, region1, width, height, bpp, 0,
@@ -184,7 +184,7 @@ static void fast_copy(int xe, const intel_ctx_t *ctx,
 	uint32_t width = param.width, height = param.height;
 	int result;
 
-	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1);
 
 	blt_copy_init(xe, &blt);
 	src = blt_create_object(&blt, region1, width, height, bpp, 0,
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index fa2f20cca..1e29d8905 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -65,8 +65,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, eci->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, eci->gt_id));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index d66996cd5..a3a315297 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -396,7 +396,7 @@ static void create_in_region(struct buf_ops *bops, uint64_t region)
 		intel_bb_set_debug(ibb, true);
 
 	size = xe_min_page_size(xe, system_memory(xe));
-	handle = xe_bo_create_flags(xe, 0, size, system_memory(xe));
+	handle = xe_bo_create(xe, 0, size, system_memory(xe));
 	intel_buf_init_full(bops, handle, &buf,
 			    width/4, height, 32, 0,
 			    I915_TILING_NONE, 0,
diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
index 7e7e43c00..a805eabda 100644
--- a/tests/intel/xe_mmap.c
+++ b/tests/intel/xe_mmap.c
@@ -52,7 +52,7 @@ test_mmap(int fd, uint32_t flags)
 
 	igt_require_f(flags, "Device doesn't support such memory region\n");
 
-	bo = xe_bo_create_flags(fd, 0, 4096, flags);
+	bo = xe_bo_create(fd, 0, 4096, flags);
 
 	map = xe_bo_map(fd, bo, 4096);
 	strcpy(map, "Write some data to the BO!");
@@ -72,8 +72,8 @@ static void test_bad_flags(int fd)
 {
 	uint64_t size = xe_get_default_alignment(fd);
 	struct drm_xe_gem_mmap_offset mmo = {
-		.handle = xe_bo_create_flags(fd, 0, size,
-					     visible_vram_if_possible(fd, 0)),
+		.handle = xe_bo_create(fd, 0, size,
+				       visible_vram_if_possible(fd, 0)),
 		.flags = -1u,
 	};
 
@@ -92,8 +92,8 @@ static void test_bad_extensions(int fd)
 	uint64_t size = xe_get_default_alignment(fd);
 	struct xe_user_extension ext;
 	struct drm_xe_gem_mmap_offset mmo = {
-		.handle = xe_bo_create_flags(fd, 0, size,
-					     visible_vram_if_possible(fd, 0)),
+		.handle = xe_bo_create(fd, 0, size,
+				       visible_vram_if_possible(fd, 0)),
 	};
 
 	mmo.extensions = to_user_pointer(&ext);
@@ -113,8 +113,8 @@ static void test_bad_object(int fd)
 {
 	uint64_t size = xe_get_default_alignment(fd);
 	struct drm_xe_gem_mmap_offset mmo = {
-		.handle = xe_bo_create_flags(fd, 0, size,
-					     visible_vram_if_possible(fd, 0)),
+		.handle = xe_bo_create(fd, 0, size,
+				       visible_vram_if_possible(fd, 0)),
 	};
 
 	mmo.handle = 0xdeadbeef;
@@ -159,13 +159,13 @@ static void test_small_bar(int fd)
 	uint32_t *map;
 
 	/* 2BIG invalid case */
-	igt_assert_neq(__xe_bo_create_flags(fd, 0, visible_size + 4096,
-					    visible_vram_memory(fd, 0), &bo),
+	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + 4096,
+				      visible_vram_memory(fd, 0), &bo),
 		       0);
 
 	/* Normal operation */
-	bo = xe_bo_create_flags(fd, 0, visible_size / 4,
-				visible_vram_memory(fd, 0));
+	bo = xe_bo_create(fd, 0, visible_size / 4,
+			  visible_vram_memory(fd, 0));
 	mmo = xe_bo_mmap_offset(fd, bo);
 	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
@@ -176,9 +176,9 @@ static void test_small_bar(int fd)
 	gem_close(fd, bo);
 
 	/* Normal operation with system memory spilling */
-	bo = xe_bo_create_flags(fd, 0, visible_size,
-				visible_vram_memory(fd, 0) |
-				system_memory(fd));
+	bo = xe_bo_create(fd, 0, visible_size,
+			  visible_vram_memory(fd, 0) |
+			  system_memory(fd));
 	mmo = xe_bo_mmap_offset(fd, bo);
 	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
@@ -189,8 +189,8 @@ static void test_small_bar(int fd)
 	gem_close(fd, bo);
 
 	/* Bogus operation with SIGBUS */
-	bo = xe_bo_create_flags(fd, 0, visible_size + 4096,
-				vram_memory(fd, 0));
+	bo = xe_bo_create(fd, 0, visible_size + 4096,
+			  vram_memory(fd, 0));
 	mmo = xe_bo_mmap_offset(fd, bo);
 	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 9c2a70ff3..88ef39783 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -70,8 +70,8 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 				  (unsigned long) bo_size,
 				  (unsigned int) vm[i]);
 
-			bo[i][j] = xe_bo_create_flags(fd, vm[i], bo_size,
-						      vram_memory(fd, 0));
+			bo[i][j] = xe_bo_create(fd, vm[i], bo_size,
+						vram_memory(fd, 0));
 			xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
 				   bo_size, NULL, 0);
 		}
diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
index 2c549f778..406bd4b8d 100644
--- a/tests/intel/xe_perf_pmu.c
+++ b/tests/intel/xe_perf_pmu.c
@@ -103,7 +103,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
 	spin = xe_bo_map(fd, bo, bo_size);
 
 	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
@@ -223,7 +223,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < num_placements; i++) {
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index 9423984cc..9bfe1acad 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -271,8 +271,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	if (check_rpm && runtime_usage_available(device.pci_xe))
 		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
 
-	bo = xe_bo_create_flags(device.fd_xe, vm, bo_size,
-				visible_vram_if_possible(device.fd_xe, eci->gt_id));
+	bo = xe_bo_create(device.fd_xe, vm, bo_size,
+			  visible_vram_if_possible(device.fd_xe, eci->gt_id));
 	data = xe_bo_map(device.fd_xe, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
@@ -409,7 +409,7 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 	threshold = vram_used_mb + (SIZE / 1024 /1024);
 	igt_require(threshold < vram_total_mb);
 
-	bo = xe_bo_create_flags(device.fd_xe, 0, SIZE, flags);
+	bo = xe_bo_create(device.fd_xe, 0, SIZE, flags);
 	map = xe_bo_map(device.fd_xe, bo, SIZE);
 	memset(map, 0, SIZE);
 	munmap(map, SIZE);
diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
index c87eeef3c..cc133f5fb 100644
--- a/tests/intel/xe_pm_residency.c
+++ b/tests/intel/xe_pm_residency.c
@@ -100,8 +100,8 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
 	exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
 	bo_size = xe_get_default_alignment(fd);
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, hwe->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, hwe->gt_id));
 	data = xe_bo_map(fd, bo, bo_size);
 	syncobj = syncobj_create(fd, 0);
 
diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
index 536230f9f..378368eaa 100644
--- a/tests/intel/xe_prime_self_import.c
+++ b/tests/intel/xe_prime_self_import.c
@@ -105,7 +105,7 @@ static void test_with_fd_dup(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
 
 	dma_buf_fd1 = prime_handle_to_fd(fd1, handle);
 	gem_close(fd1, handle);
@@ -138,8 +138,8 @@ static void test_with_two_bos(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle1 = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
-	handle2 = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle1 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle2 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
 
 	dma_buf_fd = prime_handle_to_fd(fd1, handle1);
 	handle_import = prime_fd_to_handle(fd2, dma_buf_fd);
@@ -174,8 +174,8 @@ static void test_with_one_bo_two_files(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle_orig = xe_bo_create_flags(fd1, 0, bo_size,
-					 visible_vram_if_possible(fd1, 0));
+	handle_orig = xe_bo_create(fd1, 0, bo_size,
+				   visible_vram_if_possible(fd1, 0));
 	dma_buf_fd1 = prime_handle_to_fd(fd1, handle_orig);
 
 	flink_name = gem_flink(fd1, handle_orig);
@@ -207,7 +207,7 @@ static void test_with_one_bo(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
 
 	dma_buf_fd = prime_handle_to_fd(fd1, handle);
 	handle_import1 = prime_fd_to_handle(fd2, dma_buf_fd);
@@ -293,8 +293,8 @@ static void *thread_fn_reimport_vs_close(void *p)
 
 	fds[0] = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create_flags(fds[0], 0, bo_size,
-				    visible_vram_if_possible(fds[0], 0));
+	handle = xe_bo_create(fds[0], 0, bo_size,
+			      visible_vram_if_possible(fds[0], 0));
 
 	fds[1] = prime_handle_to_fd(fds[0], handle);
 	pthread_barrier_init(&g_barrier, NULL, num_threads);
@@ -336,8 +336,8 @@ static void *thread_fn_export_vs_close(void *p)
 
 	igt_until_timeout(g_time_out) {
 		/* We want to race gem close against prime export on handle one.*/
-		handle = xe_bo_create_flags(fd, 0, bo_size,
-					    visible_vram_if_possible(fd, 0));
+		handle = xe_bo_create(fd, 0, bo_size,
+				      visible_vram_if_possible(fd, 0));
 		if (handle != 1)
 			gem_close(fd, handle);
 
@@ -433,8 +433,8 @@ static void test_llseek_size(void)
 	for (i = 0; i < 10; i++) {
 		int bufsz = xe_get_default_alignment(fd) << i;
 
-		handle = xe_bo_create_flags(fd, 0, bufsz,
-					    visible_vram_if_possible(fd, 0));
+		handle = xe_bo_create(fd, 0, bufsz,
+				      visible_vram_if_possible(fd, 0));
 		dma_buf_fd = prime_handle_to_fd(fd, handle);
 
 		gem_close(fd, handle);
@@ -462,8 +462,8 @@ static void test_llseek_bad(void)
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create_flags(fd, 0, bo_size,
-				    visible_vram_if_possible(fd, 0));
+	handle = xe_bo_create(fd, 0, bo_size,
+			      visible_vram_if_possible(fd, 0));
 	dma_buf_fd = prime_handle_to_fd(fd, handle);
 
 	gem_close(fd, handle);
diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
index 261fde9af..c1b161f9c 100644
--- a/tests/intel/xe_spin_batch.c
+++ b/tests/intel/xe_spin_batch.c
@@ -169,7 +169,7 @@ static void xe_spin_fixed_duration(int fd)
 	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
 	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
 	bo_size = ALIGN(sizeof(*spin) + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
-	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
 	spin = xe_bo_map(fd, bo, bo_size);
 	spin_addr = intel_allocator_alloc_with_strategy(ahnd, bo, bo_size, 0,
 							ALLOC_STRATEGY_LOW_TO_HIGH);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index eedd05b57..52195737c 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -51,8 +51,8 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
 	batch_size = (n_dwords * 4 + 1) * sizeof(uint32_t);
 	batch_size = ALIGN(batch_size + xe_cs_prefetch_size(fd),
 			   xe_get_default_alignment(fd));
-	batch_bo = xe_bo_create_flags(fd, vm, batch_size,
-				      visible_vram_if_possible(fd, 0));
+	batch_bo = xe_bo_create(fd, vm, batch_size,
+				visible_vram_if_possible(fd, 0));
 	batch_map = xe_bo_map(fd, batch_bo, batch_size);
 
 	for (i = 0; i < n_dwords; i++) {
@@ -116,7 +116,7 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
 		vms = malloc(sizeof(*vms) * n_addrs);
 		igt_assert(vms);
 	}
-	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
 	map = xe_bo_map(fd, bo, bo_size);
 	memset(map, 0, bo_size);
 
@@ -267,7 +267,7 @@ static void test_partial_unbinds(int fd)
 {
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	size_t bo_size = 3 * xe_get_default_alignment(fd);
-	uint32_t bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
+	uint32_t bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
 	uint64_t unbind_size = bo_size / 3;
 	uint64_t addr = 0x1a0000;
 
@@ -316,7 +316,7 @@ static void unbind_all(int fd, int n_vmas)
 	};
 
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
 
 	for (i = 0; i < n_vmas; ++i)
 		xe_vm_bind_async(fd, vm, 0, bo, 0, addr + i * bo_size,
@@ -362,7 +362,6 @@ static void userptr_invalid(int fd)
 	xe_vm_destroy(fd, vm);
 }
 
-
 /**
  * SUBTEST: shared-%s-page
  * Description: Test shared arg[1] page
@@ -422,8 +421,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		addr_stride = addr_stride + bo_size;
 
 	for (i = 0; i < n_bo; ++i) {
-		bo[i] = xe_bo_create_flags(fd, vm, bo_size,
-					   visible_vram_if_possible(fd, eci->gt_id));
+		bo[i] = xe_bo_create(fd, vm, bo_size,
+				     visible_vram_if_possible(fd, eci->gt_id));
 		data[i] = xe_bo_map(fd, bo[i], bo_size);
 	}
 
@@ -601,8 +600,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, eci->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, eci->gt_id));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < N_EXEC_QUEUES; i++) {
@@ -782,8 +781,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create_flags(fd, vm, bo_size,
-				visible_vram_if_possible(fd, eci->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size,
+			  visible_vram_if_possible(fd, eci->gt_id));
 	data = xe_bo_map(fd, bo, bo_size);
 
 	if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
@@ -980,8 +979,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_skip_on(xe_visible_vram_size(fd, 0) && bo_size >
 			    xe_visible_vram_size(fd, 0));
 
-		bo = xe_bo_create_flags(fd, vm, bo_size,
-					visible_vram_if_possible(fd, eci->gt_id));
+		bo = xe_bo_create(fd, vm, bo_size,
+				  visible_vram_if_possible(fd, eci->gt_id));
 		map = xe_bo_map(fd, bo, bo_size);
 	}
 
@@ -1272,8 +1271,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 			    MAP_ANONYMOUS, -1, 0);
 		igt_assert(map != MAP_FAILED);
 	} else {
-		bo = xe_bo_create_flags(fd, vm, bo_size,
-					visible_vram_if_possible(fd, eci->gt_id));
+		bo = xe_bo_create(fd, vm, bo_size,
+				  visible_vram_if_possible(fd, eci->gt_id));
 		map = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(map, 0, bo_size);
@@ -1576,9 +1575,9 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(map0 != MAP_FAILED);
 		igt_assert(map1 != MAP_FAILED);
 	} else {
-		bo0 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
+		bo0 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
 		map0 = xe_bo_map(fd, bo0, bo_size);
-		bo1 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
+		bo1 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
 		map1 = xe_bo_map(fd, bo1, bo_size);
 	}
 	memset(map0, 0, bo_size);
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index b1cae0d9b..46048f9d5 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -64,19 +64,19 @@ waitfence(int fd, enum waittype wt)
 	int64_t timeout;
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-	bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+	bo_1 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
 	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
-	bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+	bo_2 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
 	do_bind(fd, vm, bo_2, 0, 0xc0000000, 0x40000, 2);
-	bo_3 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+	bo_3 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
 	do_bind(fd, vm, bo_3, 0, 0x180000000, 0x40000, 3);
-	bo_4 = xe_bo_create_flags(fd, vm, 0x10000, MY_FLAG);
+	bo_4 = xe_bo_create(fd, vm, 0x10000, MY_FLAG);
 	do_bind(fd, vm, bo_4, 0, 0x140000000, 0x10000, 4);
-	bo_5 = xe_bo_create_flags(fd, vm, 0x100000, MY_FLAG);
+	bo_5 = xe_bo_create(fd, vm, 0x100000, MY_FLAG);
 	do_bind(fd, vm, bo_5, 0, 0x100000000, 0x100000, 5);
-	bo_6 = xe_bo_create_flags(fd, vm, 0x1c0000, MY_FLAG);
+	bo_6 = xe_bo_create(fd, vm, 0x1c0000, MY_FLAG);
 	do_bind(fd, vm, bo_6, 0, 0xc0040000, 0x1c0000, 6);
-	bo_7 = xe_bo_create_flags(fd, vm, 0x10000, MY_FLAG);
+	bo_7 = xe_bo_create(fd, vm, 0x10000, MY_FLAG);
 	do_bind(fd, vm, bo_7, 0, 0xeffff0000, 0x10000, 7);
 
 	if (wt == RELTIME) {
@@ -134,7 +134,7 @@ invalid_flag(int fd)
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
 
 	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
 
@@ -159,7 +159,7 @@ invalid_ops(int fd)
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
 
 	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
 
@@ -184,7 +184,7 @@ invalid_engine(int fd)
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
 
 	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
 
diff --git a/tests/kms_addfb_basic.c b/tests/kms_addfb_basic.c
index fc16b8814..4f293c2ee 100644
--- a/tests/kms_addfb_basic.c
+++ b/tests/kms_addfb_basic.c
@@ -199,7 +199,7 @@ static void invalid_tests(int fd)
 			handle = gem_create_in_memory_regions(fd, size, REGION_SMEM);
 		} else {
 			igt_require(xe_has_vram(fd));
-			handle = xe_bo_create_flags(fd, 0, size, system_memory(fd));
+			handle = xe_bo_create(fd, 0, size, system_memory(fd));
 		}
 
 		f.handles[0] = handle;
diff --git a/tests/kms_getfb.c b/tests/kms_getfb.c
index 059f66d99..1f9e813d8 100644
--- a/tests/kms_getfb.c
+++ b/tests/kms_getfb.c
@@ -149,7 +149,7 @@ static void get_ccs_fb(int fd, struct drm_mode_fb_cmd2 *ret)
 	if (is_i915_device(fd))
 		add.handles[0] = gem_buffer_create_fb_obj(fd, size);
 	else
-		add.handles[0] = xe_bo_create_flags(fd, 0, size, vram_if_possible(fd, 0));
+		add.handles[0] = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0));
 	igt_require(add.handles[0] != 0);
 
 	if (!HAS_FLATCCS(devid))
-- 
2.34.1


* [igt-dev] [PATCH v1 04/13] xe_query: Add missing include.
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (2 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 03/13] xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-21 17:00   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible Francois Dugast
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

When trying to use xe_for_each_mem_region from a caller
that does not include igt_aux.h, the following build error
occurs:

../lib/xe/xe_query.h:76:38: error: implicit declaration of function ‘igt_fls’ [-Werror=implicit-function-declaration]
   76 |         for (uint64_t __i = 0; __i < igt_fls(__memreg); __i++) \

So, to avoid a dependency chain, let's include it from the
file that uses the helper.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 lib/xe/xe_query.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 38e9aa440..7b3fc3100 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -11,6 +11,8 @@
 
 #include <stdint.h>
 #include <xe_drm.h>
+
+#include "igt_aux.h"
 #include "igt_list.h"
 #include "igt_sizes.h"
 
-- 
2.34.1


* [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (3 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 04/13] xe_query: Add missing include Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-21 17:40   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 06/13] drm-uapi/xe: Separate bo_create placement from flags Francois Dugast
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Let the caller set the flag, and let xe_bo_query clear it
if it is not needed.

Although the current helper makes the code cleaner, the
goal is to split the flags into placement and flags as two
different arguments on xe_bo_create. So, the flag decision
cannot be hidden under the helper.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 benchmarks/gem_wsim.c              |  3 ++-
 lib/igt_draw.c                     |  3 ++-
 lib/igt_fb.c                       |  3 ++-
 lib/intel_batchbuffer.c            |  6 ++++--
 lib/xe/xe_ioctl.c                  | 19 +++++++++++++++++++
 lib/xe/xe_query.c                  | 26 --------------------------
 lib/xe/xe_query.h                  |  1 -
 lib/xe/xe_spin.c                   |  7 ++++---
 tests/intel/kms_ccs.c              |  3 ++-
 tests/intel/xe_dma_buf_sync.c      |  3 ++-
 tests/intel/xe_exec_balancer.c     |  9 ++++++---
 tests/intel/xe_exec_basic.c        |  2 +-
 tests/intel/xe_exec_compute_mode.c |  3 ++-
 tests/intel/xe_exec_fault_mode.c   |  6 ++++--
 tests/intel/xe_exec_reset.c        | 14 +++++++++-----
 tests/intel/xe_exec_store.c        |  9 ++++++---
 tests/intel/xe_exec_threads.c      |  9 ++++++---
 tests/intel/xe_guc_pc.c            |  3 ++-
 tests/intel/xe_mmap.c              |  9 ++++++---
 tests/intel/xe_pm.c                |  3 ++-
 tests/intel/xe_pm_residency.c      |  3 ++-
 tests/intel/xe_prime_self_import.c | 27 ++++++++++++++++++---------
 tests/intel/xe_vm.c                | 21 ++++++++++++++-------
 23 files changed, 115 insertions(+), 77 deletions(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index d6d3deb5f..966d9b465 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -1735,7 +1735,8 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
 	int i;
 
 	w->bb_handle = xe_bo_create(fd, vm->id, PAGE_SIZE,
-				visible_vram_if_possible(fd, eq->hwe_list[0].gt_id));
+				    vram_if_possible(fd, eq->hwe_list[0].gt_id) |
+				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	w->xe.data = xe_bo_map(fd, w->bb_handle, PAGE_SIZE);
 	w->xe.exec.address =
 		intel_allocator_alloc_with_strategy(vm->ahnd, w->bb_handle, PAGE_SIZE,
diff --git a/lib/igt_draw.c b/lib/igt_draw.c
index 5935eb058..b16afd799 100644
--- a/lib/igt_draw.c
+++ b/lib/igt_draw.c
@@ -797,7 +797,8 @@ static void draw_rect_render(int fd, struct cmd_data *cmd_data,
 	else
 		tmp.handle = xe_bo_create(fd, 0,
 					  ALIGN(tmp.size, xe_get_default_alignment(fd)),
-					  visible_vram_if_possible(fd, 0));
+					  vram_if_possible(fd, 0) |
+					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	tmp.stride = rect->w * pixel_size;
 	tmp.bpp = buf->bpp;
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index f96dca7a4..0a6aa27c8 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -1206,7 +1206,8 @@ static int create_bo_for_fb(struct igt_fb *fb, bool prefer_sysmem)
 			igt_assert(err == 0 || err == -EOPNOTSUPP);
 		} else if (is_xe_device(fd)) {
 			fb->gem_handle = xe_bo_create(fd, 0, fb->size,
-						      visible_vram_if_possible(fd, 0));
+						      vram_if_possible(fd, 0)
+						      | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		} else if (is_vc4_device(fd)) {
 			fb->gem_handle = igt_vc4_create_bo(fd, fb->size);
 
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 7fa4e3487..45b1665f7 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -945,7 +945,8 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 
 		ibb->alignment = xe_get_default_alignment(fd);
 		size = ALIGN(size, ibb->alignment);
-		ibb->handle = xe_bo_create(fd, 0, size, visible_vram_if_possible(fd, 0));
+		ibb->handle = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0) |
+					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 		/* Limit to 48-bit due to MI_* address limitation */
 		ibb->gtt_size = 1ull << min_t(uint32_t, xe_va_bits(fd), 48);
@@ -1404,7 +1405,8 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 		ibb->handle = gem_create(ibb->fd, ibb->size);
 	else
 		ibb->handle = xe_bo_create(ibb->fd, 0, ibb->size,
-					   visible_vram_if_possible(ibb->fd, 0));
+					   vram_if_possible(ibb->fd, 0) |
+					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	/* Reacquire offset for RELOC and SIMPLE */
 	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 63fa2ae25..1d63081d6 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -226,6 +226,18 @@ void xe_vm_destroy(int fd, uint32_t vm)
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_DESTROY, &destroy), 0);
 }
 
+static bool vram_selected(int fd, uint32_t selected_regions)
+{
+	uint64_t regions = all_memory_regions(fd) & selected_regions;
+	uint64_t region;
+
+	xe_for_each_mem_region(fd, regions, region)
+		if (xe_mem_region(fd, region)->mem_class == DRM_XE_MEM_REGION_CLASS_VRAM)
+			return true;
+
+	return false;
+}
+
 uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
 			uint32_t *handle)
 {
@@ -236,6 +248,13 @@ uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
 	};
 	int err;
 
+	/*
+	 * In case vram_if_possible() returned system memory, visible
+	 * VRAM cannot be requested through flags.
+	 */
+	if (!vram_selected(fd, flags))
+		create.flags &= ~DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+
 	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
 	if (err)
 		return err;
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index afd443be3..760a150db 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -442,32 +442,6 @@ uint64_t vram_if_possible(int fd, int gt)
 	return vram_memory(fd, gt) ?: system_memory(fd);
 }
 
-/**
- * visible_vram_if_possible:
- * @fd: xe device fd
- * @gt: gt id
- *
- * Returns vram memory bitmask for xe device @fd and @gt id or system memory if
- * there's no vram memory available for @gt. Also attaches the
- * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
- * when using vram.
- */
-uint64_t visible_vram_if_possible(int fd, int gt)
-{
-	uint64_t regions = all_memory_regions(fd);
-	uint64_t system_memory = regions & 0x1;
-	uint64_t vram = regions & (0x2 << gt);
-
-	/*
-	 * TODO: Keep it backwards compat for now. Fixup once the kernel side
-	 * has landed.
-	 */
-	if (__xe_visible_vram_size(fd, gt))
-		return vram ? vram | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
-	else
-		return vram ? vram : system_memory; /* older kernel */
-}
-
 /**
  * xe_hw_engines:
  * @fd: xe device fd
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 7b3fc3100..4dd0ad573 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -82,7 +82,6 @@ uint64_t system_memory(int fd);
 uint64_t vram_memory(int fd, int gt);
 uint64_t visible_vram_memory(int fd, int gt);
 uint64_t vram_if_possible(int fd, int gt);
-uint64_t visible_vram_if_possible(int fd, int gt);
 struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
 struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
 struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index 828938434..270b58bf5 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -220,7 +220,8 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
 	}
 
 	spin->handle = xe_bo_create(fd, spin->vm, bo_size,
-				    visible_vram_if_possible(fd, 0));
+				    vram_if_possible(fd, 0) |
+				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	xe_spin = xe_bo_map(fd, spin->handle, bo_size);
 	addr = intel_allocator_alloc_with_strategy(ahnd, spin->handle, bo_size, 0, ALLOC_STRATEGY_LOW_TO_HIGH);
 	xe_vm_bind_sync(fd, spin->vm, spin->handle, 0, addr, bo_size);
@@ -298,8 +299,8 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
 
 	vm = xe_vm_create(fd, 0, 0);
 
-	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, hwe->gt_id));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, hwe->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	spin = xe_bo_map(fd, bo, 0x1000);
 
 	xe_vm_bind_sync(fd, vm, bo, 0, addr, bo_size);
diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c
index 337afc00c..5ae28615f 100644
--- a/tests/intel/kms_ccs.c
+++ b/tests/intel/kms_ccs.c
@@ -435,7 +435,8 @@ static void test_bad_ccs_plane(data_t *data, int width, int height, int ccs_plan
 		bad_ccs_bo = is_i915_device(data->drm_fd) ?
 				gem_create(data->drm_fd, fb.size) :
 				xe_bo_create(data->drm_fd, 0, fb.size,
-					     visible_vram_if_possible(data->drm_fd, 0));
+					     vram_if_possible(data->drm_fd, 0) |
+					     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		f.handles[ccs_plane] = bad_ccs_bo;
 	}
 
diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
index ac9d9d767..9318647af 100644
--- a/tests/intel/xe_dma_buf_sync.c
+++ b/tests/intel/xe_dma_buf_sync.c
@@ -120,7 +120,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
 			xe_get_default_alignment(fd[0]));
 	for (i = 0; i < n_bo; ++i) {
 		bo[i] = xe_bo_create(fd[0], 0, bo_size,
-				     visible_vram_if_possible(fd[0], hwe0->gt_id));
+				     vram_if_possible(fd[0], hwe0->gt_id) |
+				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		dma_buf_fd[i] = prime_handle_to_fd(fd[0], bo[i]);
 		import_bo[i] = prime_fd_to_handle(fd[1], dma_buf_fd[i]);
 
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index da34e117d..388bb6185 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -70,7 +70,8 @@ static void test_all_active(int fd, int gt, int class)
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < num_placements; i++) {
@@ -224,7 +225,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		}
 		memset(data, 0, bo_size);
 	} else {
-		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 
@@ -452,7 +454,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index 841696b68..ca287b2e5 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -136,7 +136,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	} else {
 		uint32_t bo_flags;
 
-		bo_flags = visible_vram_if_possible(fd, eci->gt_id);
+		bo_flags = vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 		if (flags & DEFER_ALLOC)
 			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
 
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index beb962f79..07a27fd29 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -142,7 +142,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		}
 	} else {
 		bo = xe_bo_create(fd, flags & VM_FOR_BO ? vm : 0,
-				  bo_size, visible_vram_if_possible(fd, eci->gt_id));
+				  bo_size, vram_if_possible(fd, eci->gt_id) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 903ad430d..bfd61c4ea 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -153,10 +153,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & PREFETCH)
 			bo = xe_bo_create(fd, 0, bo_size,
 					  all_memory_regions(fd) |
-					  visible_vram_if_possible(fd, 0));
+					  vram_if_possible(fd, 0) |
+					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		else
 			bo = xe_bo_create(fd, 0, bo_size,
-					  visible_vram_if_possible(fd, eci->gt_id));
+					  vram_if_possible(fd, eci->gt_id) |
+					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 704690e83..3affb19ae 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -51,7 +51,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, eci->gt_id));
+			  vram_if_possible(fd, eci->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	spin = xe_bo_map(fd, bo, bo_size);
 
 	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
@@ -181,7 +182,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
@@ -368,7 +370,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, eci->gt_id));
+			  vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
@@ -535,7 +537,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, eci->gt_id));
+			  vram_if_possible(fd, eci->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 	memset(data, 0, bo_size);
 
@@ -661,7 +664,8 @@ static void submit_jobs(struct gt_thread_data *t)
 	uint32_t bo;
 	uint32_t *data;
 
-	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 	data[0] = MI_BATCH_BUFFER_END;
 
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index bcc4de8d0..884183202 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -82,7 +82,8 @@ static void store(int fd)
 
 	hw_engine = xe_hw_engine(fd, 1);
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, hw_engine->gt_id));
+			  vram_if_possible(fd, hw_engine->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
 	data = xe_bo_map(fd, bo, bo_size);
@@ -151,7 +152,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 
 	for (i = 0; i < count; i++) {
 		bo[i] = xe_bo_create(fd, vm, bo_size,
-				     visible_vram_if_possible(fd, eci->gt_id));
+				     vram_if_possible(fd, eci->gt_id) |
+				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		bo_map[i] = xe_bo_map(fd, bo[i], bo_size);
 		dst_offset[i] = intel_allocator_alloc_with_strategy(ahnd, bo[i],
 								    bo_size, 0,
@@ -236,7 +238,8 @@ static void store_all(int fd, int gt, int class)
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, 0));
+			  vram_if_possible(fd, 0) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	xe_for_each_hw_engine(fd, hwe) {
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index a9b0c0b09..ebc41dadd 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -107,7 +107,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 		}
 	} else {
 		bo = xe_bo_create(fd, vm, bo_size,
-				  visible_vram_if_possible(fd, gt));
+				  vram_if_possible(fd, gt) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
@@ -308,7 +309,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		}
 	} else {
 		bo = xe_bo_create(fd, 0, bo_size,
-				  visible_vram_if_possible(fd, eci->gt_id));
+				  vram_if_possible(fd, eci->gt_id) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
@@ -511,7 +513,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		}
 	} else {
 		bo = xe_bo_create(fd, vm, bo_size,
-				  visible_vram_if_possible(fd, eci->gt_id));
+				  vram_if_possible(fd, eci->gt_id) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(data, 0, bo_size);
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 1e29d8905..4234475e0 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -66,7 +66,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, eci->gt_id));
+			  vram_if_possible(fd, eci->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
index a805eabda..a4b53ad48 100644
--- a/tests/intel/xe_mmap.c
+++ b/tests/intel/xe_mmap.c
@@ -73,7 +73,8 @@ static void test_bad_flags(int fd)
 	uint64_t size = xe_get_default_alignment(fd);
 	struct drm_xe_gem_mmap_offset mmo = {
 		.handle = xe_bo_create(fd, 0, size,
-				       visible_vram_if_possible(fd, 0)),
+				       vram_if_possible(fd, 0) |
+				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
 		.flags = -1u,
 	};
 
@@ -93,7 +94,8 @@ static void test_bad_extensions(int fd)
 	struct xe_user_extension ext;
 	struct drm_xe_gem_mmap_offset mmo = {
 		.handle = xe_bo_create(fd, 0, size,
-				       visible_vram_if_possible(fd, 0)),
+				       vram_if_possible(fd, 0) |
+				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
 	};
 
 	mmo.extensions = to_user_pointer(&ext);
@@ -114,7 +116,8 @@ static void test_bad_object(int fd)
 	uint64_t size = xe_get_default_alignment(fd);
 	struct drm_xe_gem_mmap_offset mmo = {
 		.handle = xe_bo_create(fd, 0, size,
-				       visible_vram_if_possible(fd, 0)),
+				       vram_if_possible(fd, 0) |
+				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
 	};
 
 	mmo.handle = 0xdeadbeef;
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index 9bfe1acad..9fd3527f7 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -272,7 +272,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
 
 	bo = xe_bo_create(device.fd_xe, vm, bo_size,
-			  visible_vram_if_possible(device.fd_xe, eci->gt_id));
+			  vram_if_possible(device.fd_xe, eci->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(device.fd_xe, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
index cc133f5fb..40a1693b8 100644
--- a/tests/intel/xe_pm_residency.c
+++ b/tests/intel/xe_pm_residency.c
@@ -101,7 +101,8 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
 	bo_size = xe_get_default_alignment(fd);
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, hwe->gt_id));
+			  vram_if_possible(fd, hwe->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 	syncobj = syncobj_create(fd, 0);
 
diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
index 378368eaa..2c2f2898c 100644
--- a/tests/intel/xe_prime_self_import.c
+++ b/tests/intel/xe_prime_self_import.c
@@ -105,7 +105,8 @@ static void test_with_fd_dup(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	dma_buf_fd1 = prime_handle_to_fd(fd1, handle);
 	gem_close(fd1, handle);
@@ -138,8 +139,10 @@ static void test_with_two_bos(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle1 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
-	handle2 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	dma_buf_fd = prime_handle_to_fd(fd1, handle1);
 	handle_import = prime_fd_to_handle(fd2, dma_buf_fd);
@@ -175,7 +178,8 @@ static void test_with_one_bo_two_files(void)
 	fd2 = drm_open_driver(DRIVER_XE);
 
 	handle_orig = xe_bo_create(fd1, 0, bo_size,
-				   visible_vram_if_possible(fd1, 0));
+				   vram_if_possible(fd1, 0) |
+				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	dma_buf_fd1 = prime_handle_to_fd(fd1, handle_orig);
 
 	flink_name = gem_flink(fd1, handle_orig);
@@ -207,7 +211,8 @@ static void test_with_one_bo(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
+	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	dma_buf_fd = prime_handle_to_fd(fd1, handle);
 	handle_import1 = prime_fd_to_handle(fd2, dma_buf_fd);
@@ -294,7 +299,8 @@ static void *thread_fn_reimport_vs_close(void *p)
 	fds[0] = drm_open_driver(DRIVER_XE);
 
 	handle = xe_bo_create(fds[0], 0, bo_size,
-			      visible_vram_if_possible(fds[0], 0));
+			      vram_if_possible(fds[0], 0) |
+			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	fds[1] = prime_handle_to_fd(fds[0], handle);
 	pthread_barrier_init(&g_barrier, NULL, num_threads);
@@ -337,7 +343,8 @@ static void *thread_fn_export_vs_close(void *p)
 	igt_until_timeout(g_time_out) {
 		/* We want to race gem close against prime export on handle one.*/
 		handle = xe_bo_create(fd, 0, bo_size,
-				      visible_vram_if_possible(fd, 0));
+				      vram_if_possible(fd, 0) |
+				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		if (handle != 1)
 			gem_close(fd, handle);
 
@@ -434,7 +441,8 @@ static void test_llseek_size(void)
 		int bufsz = xe_get_default_alignment(fd) << i;
 
 		handle = xe_bo_create(fd, 0, bufsz,
-				      visible_vram_if_possible(fd, 0));
+				      vram_if_possible(fd, 0) |
+				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		dma_buf_fd = prime_handle_to_fd(fd, handle);
 
 		gem_close(fd, handle);
@@ -463,7 +471,8 @@ static void test_llseek_bad(void)
 	fd = drm_open_driver(DRIVER_XE);
 
 	handle = xe_bo_create(fd, 0, bo_size,
-			      visible_vram_if_possible(fd, 0));
+			      vram_if_possible(fd, 0) |
+			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	dma_buf_fd = prime_handle_to_fd(fd, handle);
 
 	gem_close(fd, handle);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 52195737c..eb2e0078d 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -52,7 +52,8 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
 	batch_size = ALIGN(batch_size + xe_cs_prefetch_size(fd),
 			   xe_get_default_alignment(fd));
 	batch_bo = xe_bo_create(fd, vm, batch_size,
-				visible_vram_if_possible(fd, 0));
+				vram_if_possible(fd, 0) |
+				DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	batch_map = xe_bo_map(fd, batch_bo, batch_size);
 
 	for (i = 0; i < n_dwords; i++) {
@@ -116,7 +117,8 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
 		vms = malloc(sizeof(*vms) * n_addrs);
 		igt_assert(vms);
 	}
-	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	map = xe_bo_map(fd, bo, bo_size);
 	memset(map, 0, bo_size);
 
@@ -422,7 +424,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 
 	for (i = 0; i < n_bo; ++i) {
 		bo[i] = xe_bo_create(fd, vm, bo_size,
-				     visible_vram_if_possible(fd, eci->gt_id));
+				     vram_if_possible(fd, eci->gt_id) |
+				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data[i] = xe_bo_map(fd, bo[i], bo_size);
 	}
 
@@ -601,7 +604,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, eci->gt_id));
+			  vram_if_possible(fd, eci->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < N_EXEC_QUEUES; i++) {
@@ -782,7 +786,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  visible_vram_if_possible(fd, eci->gt_id));
+			  vram_if_possible(fd, eci->gt_id) |
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
@@ -980,7 +985,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 			    xe_visible_vram_size(fd, 0));
 
 		bo = xe_bo_create(fd, vm, bo_size,
-				  visible_vram_if_possible(fd, eci->gt_id));
+				  vram_if_possible(fd, eci->gt_id) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		map = xe_bo_map(fd, bo, bo_size);
 	}
 
@@ -1272,7 +1278,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(map != MAP_FAILED);
 	} else {
 		bo = xe_bo_create(fd, vm, bo_size,
-				  visible_vram_if_possible(fd, eci->gt_id));
+				  vram_if_possible(fd, eci->gt_id) |
+				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		map = xe_bo_map(fd, bo, bo_size);
 	}
 	memset(map, 0, bo_size);
-- 
2.34.1


* [igt-dev] [PATCH v1 06/13] drm-uapi/xe: Separate bo_create placement from flags
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (4 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 07/13] xe: s/hw_engine/engine Francois Dugast
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe/uapi: Separate bo_create placement from flags")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 benchmarks/gem_wsim.c              |  2 +-
 include/drm-uapi/xe_drm.h          | 12 +++++-----
 lib/igt_draw.c                     |  2 +-
 lib/igt_fb.c                       |  6 ++---
 lib/intel_batchbuffer.c            |  4 ++--
 lib/intel_blt.c                    |  4 ++--
 lib/intel_bufops.c                 |  2 +-
 lib/xe/xe_ioctl.c                  | 12 +++++-----
 lib/xe/xe_ioctl.h                  |  7 +++---
 lib/xe/xe_query.c                  | 21 -----------------
 lib/xe/xe_query.h                  |  1 -
 lib/xe/xe_spin.c                   |  4 ++--
 tests/intel/api_intel_allocator.c  |  2 +-
 tests/intel/kms_big_fb.c           |  6 ++---
 tests/intel/kms_ccs.c              |  2 +-
 tests/intel/xe_ccs.c               | 12 +++++-----
 tests/intel/xe_copy_basic.c        |  8 +++----
 tests/intel/xe_create.c            |  4 ++--
 tests/intel/xe_dma_buf_sync.c      |  2 +-
 tests/intel/xe_drm_fdinfo.c        |  6 ++---
 tests/intel/xe_evict.c             | 22 +++++++++++-------
 tests/intel/xe_evict_ccs.c         |  5 +++--
 tests/intel/xe_exec_balancer.c     |  6 ++---
 tests/intel/xe_exec_basic.c        |  5 +++--
 tests/intel/xe_exec_compute_mode.c |  2 +-
 tests/intel/xe_exec_fault_mode.c   |  4 ++--
 tests/intel/xe_exec_reset.c        | 11 ++++-----
 tests/intel/xe_exec_store.c        |  6 ++---
 tests/intel/xe_exec_threads.c      |  6 ++---
 tests/intel/xe_exercise_blt.c      |  4 ++--
 tests/intel/xe_guc_pc.c            |  2 +-
 tests/intel/xe_intel_bb.c          |  2 +-
 tests/intel/xe_mmap.c              | 36 +++++++++++++++++-------------
 tests/intel/xe_noexec_ping_pong.c  |  2 +-
 tests/intel/xe_perf_pmu.c          |  4 ++--
 tests/intel/xe_pm.c                | 10 ++++-----
 tests/intel/xe_pm_residency.c      |  2 +-
 tests/intel/xe_prime_self_import.c | 18 +++++++--------
 tests/intel/xe_spin_batch.c        |  2 +-
 tests/intel/xe_vm.c                | 22 +++++++++---------
 tests/intel/xe_waitfence.c         | 22 +++++++++---------
 tests/kms_addfb_basic.c            |  2 +-
 tests/kms_getfb.c                  |  2 +-
 43 files changed, 154 insertions(+), 162 deletions(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index 966d9b465..d134b2dea 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -1735,7 +1735,7 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
 	int i;
 
 	w->bb_handle = xe_bo_create(fd, vm->id, PAGE_SIZE,
-				    vram_if_possible(fd, eq->hwe_list[0].gt_id) |
+				    vram_if_possible(fd, eq->hwe_list[0].gt_id),
 				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	w->xe.data = xe_bo_map(fd, w->bb_handle, PAGE_SIZE);
 	w->xe.exec.address =
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 5ef16f16e..200f018e1 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -499,8 +499,11 @@ struct drm_xe_gem_create {
 	 */
 	__u64 size;
 
-#define DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING		(0x1 << 24)
-#define DRM_XE_GEM_CREATE_FLAG_SCANOUT			(0x1 << 25)
+	/** @placement: A mask of memory instances of where BO can be placed. */
+	__u32 placement;
+
+#define DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING		(1 << 0)
+#define DRM_XE_GEM_CREATE_FLAG_SCANOUT			(1 << 1)
 /*
  * When using VRAM as a possible placement, ensure that the corresponding VRAM
  * allocation will always use the CPU accessible part of VRAM. This is important
@@ -516,7 +519,7 @@ struct drm_xe_gem_create {
  * display surfaces, therefore the kernel requires setting this flag for such
  * objects, otherwise an error is thrown on small-bar systems.
  */
-#define DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
+#define DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(1 << 2)
 	/**
 	 * @flags: Flags, currently a mask of memory instances of where BO can
 	 * be placed
@@ -540,9 +543,6 @@ struct drm_xe_gem_create {
 	 */
 	__u32 handle;
 
-	/** @pad: MBZ */
-	__u32 pad;
-
 	/** @reserved: Reserved */
 	__u64 reserved[2];
 };
diff --git a/lib/igt_draw.c b/lib/igt_draw.c
index b16afd799..1e0ff8707 100644
--- a/lib/igt_draw.c
+++ b/lib/igt_draw.c
@@ -797,7 +797,7 @@ static void draw_rect_render(int fd, struct cmd_data *cmd_data,
 	else
 		tmp.handle = xe_bo_create(fd, 0,
 					  ALIGN(tmp.size, xe_get_default_alignment(fd)),
-					  vram_if_possible(fd, 0) |
+					  vram_if_possible(fd, 0),
 					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	tmp.stride = rect->w * pixel_size;
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index 0a6aa27c8..9c1257801 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -1206,8 +1206,8 @@ static int create_bo_for_fb(struct igt_fb *fb, bool prefer_sysmem)
 			igt_assert(err == 0 || err == -EOPNOTSUPP);
 		} else if (is_xe_device(fd)) {
 			fb->gem_handle = xe_bo_create(fd, 0, fb->size,
-						      vram_if_possible(fd, 0)
-						      | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+						      vram_if_possible(fd, 0),
+						      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		} else if (is_vc4_device(fd)) {
 			fb->gem_handle = igt_vc4_create_bo(fd, fb->size);
 
@@ -2904,7 +2904,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
 
 		bb_size = ALIGN(bb_size + xe_cs_prefetch_size(dst_fb->fd),
 				xe_get_default_alignment(dst_fb->fd));
-		xe_bb = xe_bo_create(dst_fb->fd, 0, bb_size, mem_region);
+		xe_bb = xe_bo_create(dst_fb->fd, 0, bb_size, mem_region, 0);
 	}
 
 	for (int i = 0; i < dst_fb->num_planes - dst_cc; i++) {
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 45b1665f7..e5709a973 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -945,7 +945,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 
 		ibb->alignment = xe_get_default_alignment(fd);
 		size = ALIGN(size, ibb->alignment);
-		ibb->handle = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0) |
+		ibb->handle = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0),
 					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 		/* Limit to 48-bit due to MI_* address limitation */
@@ -1405,7 +1405,7 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 		ibb->handle = gem_create(ibb->fd, ibb->size);
 	else
 		ibb->handle = xe_bo_create(ibb->fd, 0, ibb->size,
-					   vram_if_possible(ibb->fd, 0) |
+					   vram_if_possible(ibb->fd, 0),
 					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	/* Reacquire offset for RELOC and SIMPLE */
diff --git a/lib/intel_blt.c b/lib/intel_blt.c
index 36830fb3e..2ab4f69cf 100644
--- a/lib/intel_blt.c
+++ b/lib/intel_blt.c
@@ -1801,13 +1801,13 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region,
 	obj->size = size;
 
 	if (blt->driver == INTEL_DRIVER_XE) {
-		uint64_t flags = region;
+		uint64_t flags = 0;
 
 		if (create_mapping && region != system_memory(blt->fd))
 			flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 
 		size = ALIGN(size, xe_get_default_alignment(blt->fd));
-		handle = xe_bo_create(blt->fd, 0, size, flags);
+		handle = xe_bo_create(blt->fd, 0, size, region, flags);
 	} else {
 		igt_assert(__gem_create_in_memory_regions(blt->fd, &handle,
 							  &size, region) == 0);
diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index 6f3a77f47..5582481f6 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -920,7 +920,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 				igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
 		} else {
 			size = ALIGN(size, xe_get_default_alignment(bops->fd));
-			buf->handle = xe_bo_create(bops->fd, 0, size, region);
+			buf->handle = xe_bo_create(bops->fd, 0, size, region, 0);
 		}
 	}
 
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 1d63081d6..d2bdbe5f2 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -238,12 +238,13 @@ static bool vram_selected(int fd, uint32_t selected_regions)
 	return false;
 }
 
-uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
-			uint32_t *handle)
+uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t placement,
+			uint32_t flags, uint32_t *handle)
 {
 	struct drm_xe_gem_create create = {
 		.vm_id = vm,
 		.size = size,
+		.placement = placement,
 		.flags = flags,
 	};
 	int err;
@@ -252,7 +253,7 @@ uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
 	 * In case vram_if_possible returned system_memory,
 	 *  visible VRAM cannot be requested through flags
 	 */
-	if (!vram_selected(fd, flags))
+	if (!vram_selected(fd, placement))
 		create.flags &= ~DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 
 	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
@@ -263,11 +264,12 @@ uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
 	return 0;
 }
 
-uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags)
+uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t placement,
+		      uint32_t flags)
 {
 	uint32_t handle;
 
-	igt_assert_eq(__xe_bo_create(fd, vm, size, flags, &handle), 0);
+	igt_assert_eq(__xe_bo_create(fd, vm, size, placement, flags, &handle), 0);
 
 	return handle;
 }
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 1ec29c2c5..bc609442a 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -64,9 +64,10 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
 			    uint32_t bo, struct drm_xe_sync *sync,
 			    uint32_t num_syncs);
 void xe_vm_destroy(int fd, uint32_t vm);
-uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
-			uint32_t *handle);
-uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags);
+uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t placement,
+			uint32_t flags, uint32_t *handle);
+uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t placement,
+		      uint32_t flags);
 uint32_t xe_exec_queue_create(int fd, uint32_t vm,
 			  struct drm_xe_engine_class_instance *instance,
 			  uint64_t ext);
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 760a150db..fa17b46b6 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -408,27 +408,6 @@ static uint64_t __xe_visible_vram_size(int fd, int gt)
 	return xe_dev->visible_vram_size[gt];
 }
 
-/**
- * visible_vram_memory:
- * @fd: xe device fd
- * @gt: gt id
- *
- * Returns vram memory bitmask for xe device @fd and @gt id, with
- * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
- * possible.
- */
-uint64_t visible_vram_memory(int fd, int gt)
-{
-	/*
-	 * TODO: Keep it backwards compat for now. Fixup once the kernel side
-	 * has landed.
-	 */
-	if (__xe_visible_vram_size(fd, gt))
-		return vram_memory(fd, gt) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
-	else
-		return vram_memory(fd, gt); /* older kernel */
-}
-
 /**
  * vram_if_possible:
  * @fd: xe device fd
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 4dd0ad573..be92ec5ed 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -80,7 +80,6 @@ unsigned int xe_number_gt(int fd);
 uint64_t all_memory_regions(int fd);
 uint64_t system_memory(int fd);
 uint64_t vram_memory(int fd, int gt);
-uint64_t visible_vram_memory(int fd, int gt);
 uint64_t vram_if_possible(int fd, int gt);
 struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
 struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index 270b58bf5..91bc6664d 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -220,7 +220,7 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
 	}
 
 	spin->handle = xe_bo_create(fd, spin->vm, bo_size,
-				    vram_if_possible(fd, 0) |
+				    vram_if_possible(fd, 0),
 				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	xe_spin = xe_bo_map(fd, spin->handle, bo_size);
 	addr = intel_allocator_alloc_with_strategy(ahnd, spin->handle, bo_size, 0, ALLOC_STRATEGY_LOW_TO_HIGH);
@@ -299,7 +299,7 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
 
 	vm = xe_vm_create(fd, 0, 0);
 
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, hwe->gt_id) |
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, hwe->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	spin = xe_bo_map(fd, bo, 0x1000);
 
diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c
index 158fd86a1..2d01da7d0 100644
--- a/tests/intel/api_intel_allocator.c
+++ b/tests/intel/api_intel_allocator.c
@@ -469,7 +469,7 @@ static void __simple_allocs(int fd)
 		size = (rand() % 4 + 1) * 0x1000;
 		if (is_xe)
 			handles[i] = xe_bo_create(fd, 0, size,
-						  system_memory(fd));
+						  system_memory(fd), 0);
 		else
 			handles[i] = gem_create(fd, size);
 
diff --git a/tests/intel/kms_big_fb.c b/tests/intel/kms_big_fb.c
index 9c2b8dc79..fde73bac0 100644
--- a/tests/intel/kms_big_fb.c
+++ b/tests/intel/kms_big_fb.c
@@ -780,7 +780,7 @@ test_size_overflow(data_t *data)
 		bo = xe_bo_create(data->drm_fd, 0,
 				  ALIGN(((1ULL << 32) - 4096),
 					xe_get_default_alignment(data->drm_fd)),
-				  vram_if_possible(data->drm_fd, 0));
+				  vram_if_possible(data->drm_fd, 0), 0);
 	igt_require(bo);
 
 	ret = __kms_addfb(data->drm_fd, bo,
@@ -840,7 +840,7 @@ test_size_offset_overflow(data_t *data)
 		bo = xe_bo_create(data->drm_fd, 0,
 				  ALIGN(((1ULL << 32) - 4096),
 					xe_get_default_alignment(data->drm_fd)),
-				  vram_if_possible(data->drm_fd, 0));
+				  vram_if_possible(data->drm_fd, 0), 0);
 	igt_require(bo);
 
 	offsets[0] = 0;
@@ -928,7 +928,7 @@ test_addfb(data_t *data)
 	else
 		bo = xe_bo_create(data->drm_fd, 0,
 				  ALIGN(size, xe_get_default_alignment(data->drm_fd)),
-				  vram_if_possible(data->drm_fd, 0));
+				  vram_if_possible(data->drm_fd, 0), 0);
 	igt_require(bo);
 
 	if (is_i915_device(data->drm_fd) && intel_display_ver(data->devid) < 4)
diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c
index 5ae28615f..87f29170a 100644
--- a/tests/intel/kms_ccs.c
+++ b/tests/intel/kms_ccs.c
@@ -435,7 +435,7 @@ static void test_bad_ccs_plane(data_t *data, int width, int height, int ccs_plan
 		bad_ccs_bo = is_i915_device(data->drm_fd) ?
 				gem_create(data->drm_fd, fb.size) :
 				xe_bo_create(data->drm_fd, 0, fb.size,
-					     vram_if_possible(data->drm_fd, 0) |
+					     vram_if_possible(data->drm_fd, 0),
 					     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		f.handles[ccs_plane] = bad_ccs_bo;
 	}
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index ceecba416..d742d726c 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -102,8 +102,8 @@ static void surf_copy(int xe,
 
 	igt_assert(mid->compression);
 	ccscopy = (uint32_t *) malloc(ccssize);
-	ccs = xe_bo_create(xe, 0, ccssize, sysmem);
-	ccs2 = xe_bo_create(xe, 0, ccssize, sysmem);
+	ccs = xe_bo_create(xe, 0, ccssize, sysmem, 0);
+	ccs2 = xe_bo_create(xe, 0, ccssize, sysmem, 0);
 
 	blt_ctrl_surf_copy_init(xe, &surf);
 	surf.print_bb = param.print_bb;
@@ -111,7 +111,7 @@ static void surf_copy(int xe,
 				 uc_mocs, BLT_INDIRECT_ACCESS);
 	blt_set_ctrl_surf_object(&surf.dst, ccs, sysmem, ccssize, uc_mocs, DIRECT_ACCESS);
 	bb_size = xe_get_default_alignment(xe);
-	bb1 = xe_bo_create(xe, 0, bb_size, sysmem);
+	bb1 = xe_bo_create(xe, 0, bb_size, sysmem, 0);
 	blt_set_batch(&surf.bb, bb1, bb_size, sysmem);
 	blt_ctrl_surf_copy(xe, ctx, NULL, ahnd, &surf);
 	intel_ctx_xe_sync(ctx, true);
@@ -166,7 +166,7 @@ static void surf_copy(int xe,
 	blt_set_copy_object(&blt.dst, dst);
 	blt_set_object_ext(&ext.src, mid->compression_type, mid->x2, mid->y2, SURFACE_TYPE_2D);
 	blt_set_object_ext(&ext.dst, 0, dst->x2, dst->y2, SURFACE_TYPE_2D);
-	bb2 = xe_bo_create(xe, 0, bb_size, sysmem);
+	bb2 = xe_bo_create(xe, 0, bb_size, sysmem, 0);
 	blt_set_batch(&blt.bb, bb2, bb_size, sysmem);
 	blt_block_copy(xe, ctx, NULL, ahnd, &blt, &ext);
 	intel_ctx_xe_sync(ctx, true);
@@ -297,7 +297,7 @@ static void block_copy(int xe,
 	uint8_t uc_mocs = intel_get_uc_mocs_index(xe);
 	int result;
 
-	bb = xe_bo_create(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1, 0);
 
 	if (!blt_uses_extended_block_copy(xe))
 		pext = NULL;
@@ -418,7 +418,7 @@ static void block_multicopy(int xe,
 	uint8_t uc_mocs = intel_get_uc_mocs_index(xe);
 	int result;
 
-	bb = xe_bo_create(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1, 0);
 
 	if (!blt_uses_extended_block_copy(xe))
 		pext3 = NULL;
diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
index 715f7d3b5..a51cc4c0d 100644
--- a/tests/intel/xe_copy_basic.c
+++ b/tests/intel/xe_copy_basic.c
@@ -52,7 +52,7 @@ mem_copy(int fd, uint32_t src_handle, uint32_t dst_handle, const intel_ctx_t *ct
 	uint32_t bb;
 	int result;
 
-	bb = xe_bo_create(fd, 0, bb_size, region);
+	bb = xe_bo_create(fd, 0, bb_size, region, 0);
 
 	blt_mem_init(fd, &mem);
 	blt_set_mem_object(&mem.src, src_handle, size, 0, width, height,
@@ -102,7 +102,7 @@ mem_set(int fd, uint32_t dst_handle, const intel_ctx_t *ctx, uint32_t size,
 	uint32_t bb;
 	uint8_t *result;
 
-	bb = xe_bo_create(fd, 0, bb_size, region);
+	bb = xe_bo_create(fd, 0, bb_size, region, 0);
 	blt_mem_init(fd, &mem);
 	blt_set_mem_object(&mem.dst, dst_handle, size, 0, width, height, region,
 			   dst_mocs, M_LINEAR, COMPRESSION_DISABLED);
@@ -132,8 +132,8 @@ static void copy_test(int fd, uint32_t size, enum blt_cmd_type cmd, uint32_t reg
 	uint32_t bo_size = ALIGN(size, xe_get_default_alignment(fd));
 	intel_ctx_t *ctx;
 
-	src_handle = xe_bo_create(fd, 0, bo_size, region);
-	dst_handle = xe_bo_create(fd, 0, bo_size, region);
+	src_handle = xe_bo_create(fd, 0, bo_size, region, 0);
+	dst_handle = xe_bo_create(fd, 0, bo_size, region, 0);
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	exec_queue = xe_exec_queue_create(fd, vm, &inst, 0);
 	ctx = intel_ctx_xe(fd, vm, exec_queue, 0, 0, 0);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index 4242e1a67..4326b15e8 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -18,13 +18,13 @@
 
 #define PAGE_SIZE 0x1000
 
-static int __create_bo(int fd, uint32_t vm, uint64_t size, uint32_t flags,
+static int __create_bo(int fd, uint32_t vm, uint64_t size, uint32_t placement,
 		       uint32_t *handlep)
 {
 	struct drm_xe_gem_create create = {
 		.vm_id = vm,
 		.size = size,
-		.flags = flags,
+		.placement = placement,
 	};
 	int ret = 0;
 
diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
index 9318647af..aeb4c4995 100644
--- a/tests/intel/xe_dma_buf_sync.c
+++ b/tests/intel/xe_dma_buf_sync.c
@@ -120,7 +120,7 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
 			xe_get_default_alignment(fd[0]));
 	for (i = 0; i < n_bo; ++i) {
 		bo[i] = xe_bo_create(fd[0], 0, bo_size,
-				     vram_if_possible(fd[0], hwe0->gt_id) |
+				     vram_if_possible(fd[0], hwe0->gt_id),
 				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		dma_buf_fd[i] = prime_handle_to_fd(fd[0], bo[i]);
 		import_bo[i] = prime_fd_to_handle(fd[1], dma_buf_fd[i]);
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 8f737a533..6bca5a6f1 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -85,7 +85,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
 		pre_size = info.region_mem[memregion->instance + 1].active;
 
-		bo = xe_bo_create(fd, vm, bo_size, region);
+		bo = xe_bo_create(fd, vm, bo_size, region, 0);
 		data = xe_bo_map(fd, bo, bo_size);
 
 		for (i = 0; i < N_EXEC_QUEUES; i++) {
@@ -185,7 +185,7 @@ static void test_shared(int xe)
 		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
 		pre_size = info.region_mem[memregion->instance + 1].shared;
 
-		bo = xe_bo_create(xe, 0, BO_SIZE, region);
+		bo = xe_bo_create(xe, 0, BO_SIZE, region, 0);
 
 		flink.handle = bo;
 		ret = igt_ioctl(xe, DRM_IOCTL_GEM_FLINK, &flink);
@@ -232,7 +232,7 @@ static void test_total_resident(int xe)
 		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
 		pre_size = info.region_mem[memregion->instance + 1].shared;
 
-		handle = xe_bo_create(xe, vm, BO_SIZE, region);
+		handle = xe_bo_create(xe, vm, BO_SIZE, region, 0);
 		xe_vm_bind_sync(xe, vm, handle, 0, addr, BO_SIZE);
 
 		ret = igt_parse_drm_fdinfo(xe, &info, NULL, 0, NULL, 0);
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index a9d501d5f..436a2be02 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -101,16 +101,19 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 			if (flags & MULTI_VM) {
 				__bo = bo[i] = xe_bo_create(fd, 0,
 							    bo_size,
-							    visible_vram_memory(fd, eci->gt_id));
+							    vram_memory(fd, eci->gt_id),
+							    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 			} else if (flags & THREADED) {
 				__bo = bo[i] = xe_bo_create(fd, vm,
 							    bo_size,
-							    visible_vram_memory(fd, eci->gt_id));
+							    vram_memory(fd, eci->gt_id),
+							    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 			} else {
 				__bo = bo[i] = xe_bo_create(fd, _vm,
 							    bo_size,
-							    visible_vram_memory(fd, eci->gt_id) |
-							    system_memory(fd));
+							    vram_memory(fd, eci->gt_id) |
+							    system_memory(fd),
+							    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 			}
 		} else {
 			__bo = bo[i % (n_execs / 2)];
@@ -277,16 +280,19 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 			if (flags & MULTI_VM) {
 				__bo = bo[i] = xe_bo_create(fd, 0,
 							    bo_size,
-							    visible_vram_memory(fd, eci->gt_id));
+							    vram_memory(fd, eci->gt_id),
+							    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 			} else if (flags & THREADED) {
 				__bo = bo[i] = xe_bo_create(fd, vm,
 							    bo_size,
-							    visible_vram_memory(fd, eci->gt_id));
+							    vram_memory(fd, eci->gt_id),
+							    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 			} else {
 				__bo = bo[i] = xe_bo_create(fd, _vm,
 							    bo_size,
-							    visible_vram_memory(fd, eci->gt_id) |
-							    system_memory(fd));
+							    vram_memory(fd, eci->gt_id) |
+							    system_memory(fd),
+							    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 			}
 		} else {
 			__bo = bo[i % (n_execs / 2)];
diff --git a/tests/intel/xe_evict_ccs.c b/tests/intel/xe_evict_ccs.c
index 1dc12eedd..4cafbf02e 100644
--- a/tests/intel/xe_evict_ccs.c
+++ b/tests/intel/xe_evict_ccs.c
@@ -82,7 +82,8 @@ static void copy_obj(struct blt_copy_data *blt,
 	w = src_obj->x2;
 	h = src_obj->y2;
 
-	bb = xe_bo_create(fd, 0, bb_size, visible_vram_memory(fd, 0));
+	bb = xe_bo_create(fd, 0, bb_size, vram_memory(fd, 0),
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	blt->color_depth = CD_32bit;
 	blt->print_bb = params.print_bb;
@@ -275,7 +276,7 @@ static void evict_single(int fd, int child, const struct config *config)
 
 		if (config->flags & TEST_SIMPLE) {
 			big_obj = xe_bo_create(fd, vm, kb_left * SZ_1K,
-					       vram_memory(fd, 0));
+					       vram_memory(fd, 0), 0);
 			break;
 		}
 
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 388bb6185..fa3d7a338 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -70,7 +70,7 @@ static void test_all_active(int fd, int gt, int class)
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
@@ -225,7 +225,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		}
 		memset(data, 0, bo_size);
 	} else {
-		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
@@ -454,7 +454,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 			igt_assert(data);
 		}
 	} else {
-		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index ca287b2e5..23acdd434 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -136,11 +136,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	} else {
 		uint32_t bo_flags;
 
-		bo_flags = vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+		bo_flags = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 		if (flags & DEFER_ALLOC)
 			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
 
-		bo = xe_bo_create(fd, n_vm == 1 ? vm[0] : 0, bo_size, bo_flags);
+		bo = xe_bo_create(fd, n_vm == 1 ? vm[0] : 0, bo_size,
+				  vram_if_possible(fd, eci->gt_id), bo_flags);
 		if (!(flags & DEFER_BIND))
 			data = xe_bo_map(fd, bo, bo_size);
 	}
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 07a27fd29..98a98256e 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -142,7 +142,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		}
 	} else {
 		bo = xe_bo_create(fd, flags & VM_FOR_BO ? vm : 0,
-				  bo_size, vram_if_possible(fd, eci->gt_id) |
+				  bo_size, vram_if_possible(fd, eci->gt_id),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index bfd61c4ea..3eb448ef4 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -153,11 +153,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & PREFETCH)
 			bo = xe_bo_create(fd, 0, bo_size,
 					  all_memory_regions(fd) |
-					  vram_if_possible(fd, 0) |
+					  vram_if_possible(fd, 0),
 					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		else
 			bo = xe_bo_create(fd, 0, bo_size,
-					  vram_if_possible(fd, eci->gt_id) |
+					  vram_if_possible(fd, eci->gt_id),
 					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 3affb19ae..d8b8e0355 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -51,7 +51,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, eci->gt_id) |
+			  vram_if_possible(fd, eci->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	spin = xe_bo_map(fd, bo, bo_size);
 
@@ -182,7 +182,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
@@ -370,7 +370,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+			  vram_if_possible(fd, eci->gt_id),
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
@@ -537,7 +538,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, eci->gt_id) |
+			  vram_if_possible(fd, eci->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 	memset(data, 0, bo_size);
@@ -664,7 +665,7 @@ static void submit_jobs(struct gt_thread_data *t)
 	uint32_t bo;
 	uint32_t *data;
 
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 	data[0] = MI_BATCH_BUFFER_END;
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 884183202..9ee5edeb4 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -82,7 +82,7 @@ static void store(int fd)
 
 	hw_engine = xe_hw_engine(fd, 1);
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, hw_engine->gt_id) |
+			  vram_if_possible(fd, hw_engine->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
@@ -152,7 +152,7 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 
 	for (i = 0; i < count; i++) {
 		bo[i] = xe_bo_create(fd, vm, bo_size,
-				     vram_if_possible(fd, eci->gt_id) |
+				     vram_if_possible(fd, eci->gt_id),
 				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		bo_map[i] = xe_bo_map(fd, bo[i], bo_size);
 		dst_offset[i] = intel_allocator_alloc_with_strategy(ahnd, bo[i],
@@ -238,7 +238,7 @@ static void store_all(int fd, int gt, int class)
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, 0) |
+			  vram_if_possible(fd, 0),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index ebc41dadd..f37fc612a 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -107,7 +107,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 		}
 	} else {
 		bo = xe_bo_create(fd, vm, bo_size,
-				  vram_if_possible(fd, gt) |
+				  vram_if_possible(fd, gt),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
@@ -309,7 +309,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		}
 	} else {
 		bo = xe_bo_create(fd, 0, bo_size,
-				  vram_if_possible(fd, eci->gt_id) |
+				  vram_if_possible(fd, eci->gt_id),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
@@ -513,7 +513,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		}
 	} else {
 		bo = xe_bo_create(fd, vm, bo_size,
-				  vram_if_possible(fd, eci->gt_id) |
+				  vram_if_possible(fd, eci->gt_id),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data = xe_bo_map(fd, bo, bo_size);
 	}
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index 9c69be3ef..655e9a3ea 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -125,7 +125,7 @@ static void fast_copy_emit(int xe, const intel_ctx_t *ctx,
 	uint32_t bb, width = param.width, height = param.height;
 	int result;
 
-	bb = xe_bo_create(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1, 0);
 
 	blt_copy_init(xe, &bltinit);
 	src = blt_create_object(&bltinit, region1, width, height, bpp, 0,
@@ -184,7 +184,7 @@ static void fast_copy(int xe, const intel_ctx_t *ctx,
 	uint32_t width = param.width, height = param.height;
 	int result;
 
-	bb = xe_bo_create(xe, 0, bb_size, region1);
+	bb = xe_bo_create(xe, 0, bb_size, region1, 0);
 
 	blt_copy_init(xe, &blt);
 	src = blt_create_object(&blt, region1, width, height, bpp, 0,
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 4234475e0..8d7b677b4 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -66,7 +66,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, eci->gt_id) |
+			  vram_if_possible(fd, eci->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index a3a315297..00bd17d4c 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -396,7 +396,7 @@ static void create_in_region(struct buf_ops *bops, uint64_t region)
 		intel_bb_set_debug(ibb, true);
 
 	size = xe_min_page_size(xe, system_memory(xe));
-	handle = xe_bo_create(xe, 0, size, system_memory(xe));
+	handle = xe_bo_create(xe, 0, size, system_memory(xe), 0);
 	intel_buf_init_full(bops, handle, &buf,
 			    width/4, height, 32, 0,
 			    I915_TILING_NONE, 0,
diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
index a4b53ad48..965644e22 100644
--- a/tests/intel/xe_mmap.c
+++ b/tests/intel/xe_mmap.c
@@ -45,14 +45,14 @@
  * @vram-system:	system vram
  */
 static void
-test_mmap(int fd, uint32_t flags)
+test_mmap(int fd, uint32_t placement, uint32_t flags)
 {
 	uint32_t bo;
 	void *map;
 
-	igt_require_f(flags, "Device doesn't support such memory region\n");
+	igt_require_f(placement, "Device doesn't support this memory region\n");
 
-	bo = xe_bo_create(fd, 0, 4096, flags);
+	bo = xe_bo_create(fd, 0, 4096, placement, flags);
 
 	map = xe_bo_map(fd, bo, 4096);
 	strcpy(map, "Write some data to the BO!");
@@ -73,7 +73,7 @@ static void test_bad_flags(int fd)
 	uint64_t size = xe_get_default_alignment(fd);
 	struct drm_xe_gem_mmap_offset mmo = {
 		.handle = xe_bo_create(fd, 0, size,
-				       vram_if_possible(fd, 0) |
+				       vram_if_possible(fd, 0),
 				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
 		.flags = -1u,
 	};
@@ -94,7 +94,7 @@ static void test_bad_extensions(int fd)
 	struct xe_user_extension ext;
 	struct drm_xe_gem_mmap_offset mmo = {
 		.handle = xe_bo_create(fd, 0, size,
-				       vram_if_possible(fd, 0) |
+				       vram_if_possible(fd, 0),
 				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
 	};
 
@@ -116,7 +116,7 @@ static void test_bad_object(int fd)
 	uint64_t size = xe_get_default_alignment(fd);
 	struct drm_xe_gem_mmap_offset mmo = {
 		.handle = xe_bo_create(fd, 0, size,
-				       vram_if_possible(fd, 0) |
+				       vram_if_possible(fd, 0),
 				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
 	};
 
@@ -163,12 +163,14 @@ static void test_small_bar(int fd)
 
 	/* 2BIG invalid case */
 	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + 4096,
-				      visible_vram_memory(fd, 0), &bo),
+				      vram_memory(fd, 0),
+				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM,
+				      &bo),
 		       0);
 
 	/* Normal operation */
-	bo = xe_bo_create(fd, 0, visible_size / 4,
-			  visible_vram_memory(fd, 0));
+	bo = xe_bo_create(fd, 0, visible_size / 4, vram_memory(fd, 0),
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	mmo = xe_bo_mmap_offset(fd, bo);
 	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
@@ -180,8 +182,9 @@ static void test_small_bar(int fd)
 
 	/* Normal operation with system memory spilling */
 	bo = xe_bo_create(fd, 0, visible_size,
-			  visible_vram_memory(fd, 0) |
-			  system_memory(fd));
+			  vram_memory(fd, 0) |
+			  system_memory(fd),
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	mmo = xe_bo_mmap_offset(fd, bo);
 	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
@@ -192,8 +195,7 @@ static void test_small_bar(int fd)
 	gem_close(fd, bo);
 
 	/* Bogus operation with SIGBUS */
-	bo = xe_bo_create(fd, 0, visible_size + 4096,
-			  vram_memory(fd, 0));
+	bo = xe_bo_create(fd, 0, visible_size + 4096, vram_memory(fd, 0), 0);
 	mmo = xe_bo_mmap_offset(fd, bo);
 	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
@@ -210,13 +212,15 @@ igt_main
 		fd = drm_open_driver(DRIVER_XE);
 
 	igt_subtest("system")
-		test_mmap(fd, system_memory(fd));
+		test_mmap(fd, system_memory(fd), 0);
 
 	igt_subtest("vram")
-		test_mmap(fd, visible_vram_memory(fd, 0));
+		test_mmap(fd, vram_memory(fd, 0),
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	igt_subtest("vram-system")
-		test_mmap(fd, visible_vram_memory(fd, 0) | system_memory(fd));
+		test_mmap(fd, vram_memory(fd, 0) | system_memory(fd),
+			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	igt_subtest("bad-flags")
 		test_bad_flags(fd);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 88ef39783..5e3349247 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -71,7 +71,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 				  (unsigned int) vm[i]);
 
 			bo[i][j] = xe_bo_create(fd, vm[i], bo_size,
-						vram_memory(fd, 0));
+						vram_memory(fd, 0), 0);
 			xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
 				   bo_size, NULL, 0);
 		}
diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
index 406bd4b8d..8ef557a46 100644
--- a/tests/intel/xe_perf_pmu.c
+++ b/tests/intel/xe_perf_pmu.c
@@ -103,7 +103,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0), 0);
 	spin = xe_bo_map(fd, bo, bo_size);
 
 	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
@@ -223,7 +223,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0), 0);
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < num_placements; i++) {
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index 9fd3527f7..2e5c61b59 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -272,7 +272,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
 
 	bo = xe_bo_create(device.fd_xe, vm, bo_size,
-			  vram_if_possible(device.fd_xe, eci->gt_id) |
+			  vram_if_possible(device.fd_xe, eci->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(device.fd_xe, bo, bo_size);
 
@@ -381,15 +381,15 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 		.data = 0,
 	};
 	uint64_t vram_used_mb = 0, vram_total_mb = 0, threshold;
-	uint32_t bo, flags;
+	uint32_t bo, placement;
 	int handle, i;
 	bool active;
 	void *map;
 
 	igt_require(xe_has_vram(device.fd_xe));
 
-	flags = vram_memory(device.fd_xe, 0);
-	igt_require_f(flags, "Device doesn't support vram memory region\n");
+	placement = vram_memory(device.fd_xe, 0);
+	igt_require_f(placement, "Device doesn't support vram memory region\n");
 
 	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 	igt_assert_neq(query.size, 0);
@@ -410,7 +410,7 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 	threshold = vram_used_mb + (SIZE / 1024 /1024);
 	igt_require(threshold < vram_total_mb);
 
-	bo = xe_bo_create(device.fd_xe, 0, SIZE, flags);
+	bo = xe_bo_create(device.fd_xe, 0, SIZE, placement, 0);
 	map = xe_bo_map(device.fd_xe, bo, SIZE);
 	memset(map, 0, SIZE);
 	munmap(map, SIZE);
diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
index 40a1693b8..6c9a95429 100644
--- a/tests/intel/xe_pm_residency.c
+++ b/tests/intel/xe_pm_residency.c
@@ -101,7 +101,7 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
 	bo_size = xe_get_default_alignment(fd);
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, hwe->gt_id) |
+			  vram_if_possible(fd, hwe->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 	syncobj = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
index 2c2f2898c..9a263d326 100644
--- a/tests/intel/xe_prime_self_import.c
+++ b/tests/intel/xe_prime_self_import.c
@@ -105,7 +105,7 @@ static void test_with_fd_dup(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	dma_buf_fd1 = prime_handle_to_fd(fd1, handle);
@@ -139,9 +139,9 @@ static void test_with_two_bos(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
-	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	dma_buf_fd = prime_handle_to_fd(fd1, handle1);
@@ -178,7 +178,7 @@ static void test_with_one_bo_two_files(void)
 	fd2 = drm_open_driver(DRIVER_XE);
 
 	handle_orig = xe_bo_create(fd1, 0, bo_size,
-				   vram_if_possible(fd1, 0) |
+				   vram_if_possible(fd1, 0),
 				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	dma_buf_fd1 = prime_handle_to_fd(fd1, handle_orig);
 
@@ -211,7 +211,7 @@ static void test_with_one_bo(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
-	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
+	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	dma_buf_fd = prime_handle_to_fd(fd1, handle);
@@ -299,7 +299,7 @@ static void *thread_fn_reimport_vs_close(void *p)
 	fds[0] = drm_open_driver(DRIVER_XE);
 
 	handle = xe_bo_create(fds[0], 0, bo_size,
-			      vram_if_possible(fds[0], 0) |
+			      vram_if_possible(fds[0], 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
 	fds[1] = prime_handle_to_fd(fds[0], handle);
@@ -343,7 +343,7 @@ static void *thread_fn_export_vs_close(void *p)
 	igt_until_timeout(g_time_out) {
 		/* We want to race gem close against prime export on handle one.*/
 		handle = xe_bo_create(fd, 0, bo_size,
-				      vram_if_possible(fd, 0) |
+				      vram_if_possible(fd, 0),
 				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		if (handle != 1)
 			gem_close(fd, handle);
@@ -441,7 +441,7 @@ static void test_llseek_size(void)
 		int bufsz = xe_get_default_alignment(fd) << i;
 
 		handle = xe_bo_create(fd, 0, bufsz,
-				      vram_if_possible(fd, 0) |
+				      vram_if_possible(fd, 0),
 				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		dma_buf_fd = prime_handle_to_fd(fd, handle);
 
@@ -471,7 +471,7 @@ static void test_llseek_bad(void)
 	fd = drm_open_driver(DRIVER_XE);
 
 	handle = xe_bo_create(fd, 0, bo_size,
-			      vram_if_possible(fd, 0) |
+			      vram_if_possible(fd, 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	dma_buf_fd = prime_handle_to_fd(fd, handle);
 
diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
index c1b161f9c..6abe700da 100644
--- a/tests/intel/xe_spin_batch.c
+++ b/tests/intel/xe_spin_batch.c
@@ -169,7 +169,7 @@ static void xe_spin_fixed_duration(int fd)
 	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
 	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
 	bo_size = ALIGN(sizeof(*spin) + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0), 0);
 	spin = xe_bo_map(fd, bo, bo_size);
 	spin_addr = intel_allocator_alloc_with_strategy(ahnd, bo, bo_size, 0,
 							ALLOC_STRATEGY_LOW_TO_HIGH);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index eb2e0078d..ec804febd 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -52,7 +52,7 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
 	batch_size = ALIGN(batch_size + xe_cs_prefetch_size(fd),
 			   xe_get_default_alignment(fd));
 	batch_bo = xe_bo_create(fd, vm, batch_size,
-				vram_if_possible(fd, 0) |
+				vram_if_possible(fd, 0),
 				DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	batch_map = xe_bo_map(fd, batch_bo, batch_size);
 
@@ -117,7 +117,7 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
 		vms = malloc(sizeof(*vms) * n_addrs);
 		igt_assert(vms);
 	}
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	map = xe_bo_map(fd, bo, bo_size);
 	memset(map, 0, bo_size);
@@ -269,7 +269,7 @@ static void test_partial_unbinds(int fd)
 {
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	size_t bo_size = 3 * xe_get_default_alignment(fd);
-	uint32_t bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
+	uint32_t bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0), 0);
 	uint64_t unbind_size = bo_size / 3;
 	uint64_t addr = 0x1a0000;
 
@@ -318,7 +318,7 @@ static void unbind_all(int fd, int n_vmas)
 	};
 
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
+	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0), 0);
 
 	for (i = 0; i < n_vmas; ++i)
 		xe_vm_bind_async(fd, vm, 0, bo, 0, addr + i * bo_size,
@@ -424,7 +424,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 
 	for (i = 0; i < n_bo; ++i) {
 		bo[i] = xe_bo_create(fd, vm, bo_size,
-				     vram_if_possible(fd, eci->gt_id) |
+				     vram_if_possible(fd, eci->gt_id),
 				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		data[i] = xe_bo_map(fd, bo[i], bo_size);
 	}
@@ -604,7 +604,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, eci->gt_id) |
+			  vram_if_possible(fd, eci->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
@@ -786,7 +786,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 			xe_get_default_alignment(fd));
 
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, eci->gt_id) |
+			  vram_if_possible(fd, eci->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
@@ -985,7 +985,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 			    xe_visible_vram_size(fd, 0));
 
 		bo = xe_bo_create(fd, vm, bo_size,
-				  vram_if_possible(fd, eci->gt_id) |
+				  vram_if_possible(fd, eci->gt_id),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		map = xe_bo_map(fd, bo, bo_size);
 	}
@@ -1278,7 +1278,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(map != MAP_FAILED);
 	} else {
 		bo = xe_bo_create(fd, vm, bo_size,
-				  vram_if_possible(fd, eci->gt_id) |
+				  vram_if_possible(fd, eci->gt_id),
 				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 		map = xe_bo_map(fd, bo, bo_size);
 	}
@@ -1582,9 +1582,9 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(map0 != MAP_FAILED);
 		igt_assert(map1 != MAP_FAILED);
 	} else {
-		bo0 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
+		bo0 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id), 0);
 		map0 = xe_bo_map(fd, bo0, bo_size);
-		bo1 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
+		bo1 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id), 0);
 		map1 = xe_bo_map(fd, bo1, bo_size);
 	}
 	memset(map0, 0, bo_size);
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index 46048f9d5..b4550f6c4 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -22,8 +22,6 @@
  * Description: Test waitfences functionality
  */
 
-#define MY_FLAG	vram_if_possible(fd, 0)
-
 uint64_t wait_fence = 0;
 
 static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
@@ -64,19 +62,19 @@ waitfence(int fd, enum waittype wt)
 	int64_t timeout;
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-	bo_1 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
+	bo_1 = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
-	bo_2 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
+	bo_2 = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_2, 0, 0xc0000000, 0x40000, 2);
-	bo_3 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
+	bo_3 = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_3, 0, 0x180000000, 0x40000, 3);
-	bo_4 = xe_bo_create(fd, vm, 0x10000, MY_FLAG);
+	bo_4 = xe_bo_create(fd, vm, 0x10000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_4, 0, 0x140000000, 0x10000, 4);
-	bo_5 = xe_bo_create(fd, vm, 0x100000, MY_FLAG);
+	bo_5 = xe_bo_create(fd, vm, 0x100000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_5, 0, 0x100000000, 0x100000, 5);
-	bo_6 = xe_bo_create(fd, vm, 0x1c0000, MY_FLAG);
+	bo_6 = xe_bo_create(fd, vm, 0x1c0000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_6, 0, 0xc0040000, 0x1c0000, 6);
-	bo_7 = xe_bo_create(fd, vm, 0x10000, MY_FLAG);
+	bo_7 = xe_bo_create(fd, vm, 0x10000, vram_if_possible(fd, 0), 0);
 	do_bind(fd, vm, bo_7, 0, 0xeffff0000, 0x10000, 7);
 
 	if (wt == RELTIME) {
@@ -134,7 +132,7 @@ invalid_flag(int fd)
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
+	bo = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
 
 	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
 
@@ -159,7 +157,7 @@ invalid_ops(int fd)
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
+	bo = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
 
 	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
 
@@ -184,7 +182,7 @@ invalid_engine(int fd)
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
+	bo = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
 
 	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
 
diff --git a/tests/kms_addfb_basic.c b/tests/kms_addfb_basic.c
index 4f293c2ee..6814e1b8c 100644
--- a/tests/kms_addfb_basic.c
+++ b/tests/kms_addfb_basic.c
@@ -199,7 +199,7 @@ static void invalid_tests(int fd)
 			handle = gem_create_in_memory_regions(fd, size, REGION_SMEM);
 		} else {
 			igt_require(xe_has_vram(fd));
-			handle = xe_bo_create(fd, 0, size, system_memory(fd));
+			handle = xe_bo_create(fd, 0, size, system_memory(fd), 0);
 		}
 
 		f.handles[0] = handle;
diff --git a/tests/kms_getfb.c b/tests/kms_getfb.c
index 1f9e813d8..6f8592d3a 100644
--- a/tests/kms_getfb.c
+++ b/tests/kms_getfb.c
@@ -149,7 +149,7 @@ static void get_ccs_fb(int fd, struct drm_mode_fb_cmd2 *ret)
 	if (is_i915_device(fd))
 		add.handles[0] = gem_buffer_create_fb_obj(fd, size);
 	else
-		add.handles[0] = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0));
+		add.handles[0] = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0), 0);
 	igt_require(add.handles[0] != 0);
 
 	if (!HAS_FLATCCS(devid))
-- 
2.34.1


* [igt-dev] [PATCH v1 07/13] xe: s/hw_engine/engine
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (5 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 06/13] drm-uapi/xe: Separate bo_create placement from flags Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-21 18:15   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 08/13] drm-uapi/xe: Align with drm_xe_query_engine_info Francois Dugast
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

The "hw_engine" naming became redundant once the exec_queue name was introduced, so drop the "hw_" prefix throughout.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 benchmarks/gem_wsim.c              |  8 ++---
 lib/xe/xe_query.c                  | 36 ++++++++++----------
 lib/xe/xe_query.h                  | 22 ++++++------
 tests/intel/xe_create.c            |  4 +--
 tests/intel/xe_dma_buf_sync.c      |  2 +-
 tests/intel/xe_drm_fdinfo.c        |  2 +-
 tests/intel/xe_evict.c             |  2 +-
 tests/intel/xe_exec_balancer.c     | 28 ++++++++--------
 tests/intel/xe_exec_basic.c        | 12 +++----
 tests/intel/xe_exec_compute_mode.c |  8 ++---
 tests/intel/xe_exec_fault_mode.c   |  8 ++---
 tests/intel/xe_exec_reset.c        | 44 ++++++++++++------------
 tests/intel/xe_exec_store.c        | 18 +++++-----
 tests/intel/xe_exec_threads.c      | 24 ++++++-------
 tests/intel/xe_guc_pc.c            |  4 +--
 tests/intel/xe_huc_copy.c          |  2 +-
 tests/intel/xe_intel_bb.c          |  2 +-
 tests/intel/xe_noexec_ping_pong.c  |  2 +-
 tests/intel/xe_perf_pmu.c          |  6 ++--
 tests/intel/xe_pm.c                | 14 ++++----
 tests/intel/xe_pm_residency.c      |  2 +-
 tests/intel/xe_query.c             |  6 ++--
 tests/intel/xe_spin_batch.c        | 10 +++---
 tests/intel/xe_vm.c                | 54 +++++++++++++++---------------
 24 files changed, 160 insertions(+), 160 deletions(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index d134b2dea..d451b8733 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -542,7 +542,7 @@ static struct intel_engine_data *query_engines(void)
 	if (is_xe) {
 		struct drm_xe_engine_class_instance *hwe;
 
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			engines.engines[engines.nengines].class = hwe->engine_class;
 			engines.engines[engines.nengines].instance = hwe->engine_instance;
 			engines.nengines++;
@@ -669,7 +669,7 @@ xe_get_engine(enum intel_engine_id engine)
 		igt_assert(0);
 	};
 
-	xe_for_each_hw_engine(fd, hwe1) {
+	xe_for_each_engine(fd, hwe1) {
 		if (hwe.engine_class == hwe1->engine_class &&
 		    hwe.engine_instance  == hwe1->engine_instance) {
 			hwe = *hwe1;
@@ -688,8 +688,8 @@ xe_get_default_engine(void)
 	struct drm_xe_engine_class_instance default_hwe, *hwe;
 
 	/* select RCS0 | CCS0 or first available engine */
-	default_hwe = *xe_hw_engine(fd, 0);
-	xe_for_each_hw_engine(fd, hwe) {
+	default_hwe = *xe_engine(fd, 0);
+	xe_for_each_engine(fd, hwe) {
 		if ((hwe->engine_class == DRM_XE_ENGINE_CLASS_RENDER ||
 		     hwe->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE) &&
 		    hwe->engine_instance == 0) {
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index fa17b46b6..ef7aaa6a1 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -75,7 +75,7 @@ static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
 static struct drm_xe_engine_class_instance *
 xe_query_engines_new(int fd, unsigned int *num_engines)
 {
-	struct drm_xe_engine_class_instance *hw_engines;
+	struct drm_xe_engine_class_instance *engines;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
 		.query = DRM_XE_DEVICE_QUERY_ENGINES,
@@ -86,15 +86,15 @@ xe_query_engines_new(int fd, unsigned int *num_engines)
 	igt_assert(num_engines);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	hw_engines = malloc(query.size);
-	igt_assert(hw_engines);
+	engines = malloc(query.size);
+	igt_assert(engines);
 
-	query.data = to_user_pointer(hw_engines);
+	query.data = to_user_pointer(engines);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	*num_engines = query.size / sizeof(*hw_engines);
+	*num_engines = query.size / sizeof(*engines);
 
-	return hw_engines;
+	return engines;
 }
 
 static struct drm_xe_query_mem_regions *xe_query_mem_regions_new(int fd)
@@ -221,7 +221,7 @@ static void xe_device_free(struct xe_device *xe_dev)
 {
 	free(xe_dev->config);
 	free(xe_dev->gt_list);
-	free(xe_dev->hw_engines);
+	free(xe_dev->engines);
 	free(xe_dev->mem_regions);
 	free(xe_dev->vram_size);
 	free(xe_dev);
@@ -253,7 +253,7 @@ struct xe_device *xe_device_get(int fd)
 	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
 	xe_dev->gt_list = xe_query_gt_list_new(fd);
 	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
-	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
+	xe_dev->engines = xe_query_engines_new(fd, &xe_dev->number_engines);
 	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
 	xe_dev->vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->vram_size));
 	xe_dev->visible_vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->visible_vram_size));
@@ -422,29 +422,29 @@ uint64_t vram_if_possible(int fd, int gt)
 }
 
 /**
- * xe_hw_engines:
+ * xe_engines:
  * @fd: xe device fd
  *
  * Returns engines array of xe device @fd.
  */
-xe_dev_FN(xe_hw_engines, hw_engines, struct drm_xe_engine_class_instance *);
+xe_dev_FN(xe_engines, engines, struct drm_xe_engine_class_instance *);
 
 /**
- * xe_hw_engine:
+ * xe_engine:
  * @fd: xe device fd
  * @idx: engine index
  *
  * Returns engine instance of xe device @fd and @idx.
  */
-struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx)
+struct drm_xe_engine_class_instance *xe_engine(int fd, int idx)
 {
 	struct xe_device *xe_dev;
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
-	igt_assert(idx >= 0 && idx < xe_dev->number_hw_engines);
+	igt_assert(idx >= 0 && idx < xe_dev->number_engines);
 
-	return &xe_dev->hw_engines[idx];
+	return &xe_dev->engines[idx];
 }
 
 /**
@@ -529,12 +529,12 @@ uint32_t xe_min_page_size(int fd, uint64_t region)
 xe_dev_FN(xe_config, config, struct drm_xe_query_config *);
 
 /**
- * xe_number_hw_engine:
+ * xe_number_engines:
  * @fd: xe device fd
  *
  * Returns number of hw engines of xe device @fd.
  */
-xe_dev_FN(xe_number_hw_engines, number_hw_engines, unsigned int);
+xe_dev_FN(xe_number_engines, number_engines, unsigned int);
 
 /**
  * xe_has_vram:
@@ -657,8 +657,8 @@ bool xe_has_engine_class(int fd, uint16_t engine_class)
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
 
-	for (int i = 0; i < xe_dev->number_hw_engines; i++)
-		if (xe_dev->hw_engines[i].engine_class == engine_class)
+	for (int i = 0; i < xe_dev->number_engines; i++)
+		if (xe_dev->engines[i].engine_class == engine_class)
 			return true;
 
 	return false;
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index be92ec5ed..bf9f2b955 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -32,11 +32,11 @@ struct xe_device {
 	/** @gt_list: bitmask of all memory regions */
 	uint64_t memory_regions;
 
-	/** @hw_engines: array of hardware engines */
-	struct drm_xe_engine_class_instance *hw_engines;
+	/** @engines: array of hardware engines */
+	struct drm_xe_engine_class_instance *engines;
 
-	/** @number_hw_engines: length of hardware engines array */
-	unsigned int number_hw_engines;
+	/** @number_engines: length of hardware engines array */
+	unsigned int number_engines;
 
 	/** @mem_regions: regions memory information and usage */
 	struct drm_xe_query_mem_regions *mem_regions;
@@ -60,10 +60,10 @@ struct xe_device {
 	uint16_t dev_id;
 };
 
-#define xe_for_each_hw_engine(__fd, __hwe) \
-	for (int __i = 0; __i < xe_number_hw_engines(__fd) && \
-	     (__hwe = xe_hw_engine(__fd, __i)); ++__i)
-#define xe_for_each_hw_engine_class(__class) \
+#define xe_for_each_engine(__fd, __hwe) \
+	for (int __i = 0; __i < xe_number_engines(__fd) && \
+	     (__hwe = xe_engine(__fd, __i)); ++__i)
+#define xe_for_each_engine_class(__class) \
 	for (__class = 0; __class < DRM_XE_ENGINE_CLASS_COMPUTE + 1; \
 	     ++__class)
 #define xe_for_each_gt(__fd, __gt) \
@@ -81,14 +81,14 @@ uint64_t all_memory_regions(int fd);
 uint64_t system_memory(int fd);
 uint64_t vram_memory(int fd, int gt);
 uint64_t vram_if_possible(int fd, int gt);
-struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
-struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
+struct drm_xe_engine_class_instance *xe_engines(int fd);
+struct drm_xe_engine_class_instance *xe_engine(int fd, int idx);
 struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
 const char *xe_region_name(uint64_t region);
 uint16_t xe_region_class(int fd, uint64_t region);
 uint32_t xe_min_page_size(int fd, uint64_t region);
 struct drm_xe_query_config *xe_config(int fd);
-unsigned int xe_number_hw_engines(int fd);
+unsigned int xe_number_engines(int fd);
 bool xe_has_vram(int fd);
 uint64_t xe_vram_size(int fd, int gt);
 uint64_t xe_visible_vram_size(int fd, int gt);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index 4326b15e8..9d71b7463 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -139,7 +139,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 	int nproc = sysconf(_SC_NPROCESSORS_ONLN), seconds;
 
 	fd = drm_reopen_driver(fd);
-	num_engines = xe_number_hw_engines(fd);
+	num_engines = xe_number_engines(fd);
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
@@ -156,7 +156,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 
 		for (i = 0; i < exec_queues_per_process; i++) {
 			idx = rand() % num_engines;
-			hwe = xe_hw_engine(fd, idx);
+			hwe = xe_engine(fd, idx);
 			err = __xe_exec_queue_create(fd, vm, hwe, 0, &exec_queue);
 			igt_debug("[%2d] Create exec_queue: err=%d, exec_queue=%u [idx = %d]\n",
 				  n, err, exec_queue, i);
diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
index aeb4c4995..dfa957243 100644
--- a/tests/intel/xe_dma_buf_sync.c
+++ b/tests/intel/xe_dma_buf_sync.c
@@ -229,7 +229,7 @@ igt_main
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_XE);
 
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			if (hwe0 == NULL) {
 				hwe0 = hwe;
 			} else {
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 6bca5a6f1..d50cc6df1 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -313,7 +313,7 @@ igt_main
 
 	igt_describe("Create and compare active memory consumption by client");
 	igt_subtest("drm-active")
-		test_active(xe, xe_hw_engine(xe, 0));
+		test_active(xe, xe_engine(xe, 0));
 
 	igt_fixture {
 		drm_close_driver(xe);
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 436a2be02..2e2960b9b 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -759,7 +759,7 @@ igt_main
 		vram_size = xe_visible_vram_size(fd, 0);
 		igt_assert(vram_size);
 
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			if (hwe->engine_class != DRM_XE_ENGINE_CLASS_COPY)
 				break;
 	}
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index fa3d7a338..ea06c23cd 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -57,7 +57,7 @@ static void test_all_active(int fd, int gt, int class)
 	struct drm_xe_engine_class_instance eci[MAX_INSTANCE];
 	int i, num_placements = 0;
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 
@@ -199,7 +199,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 
@@ -426,7 +426,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 
@@ -632,25 +632,25 @@ igt_main
 
 	igt_subtest("virtual-all-active")
 		xe_for_each_gt(fd, gt)
-			xe_for_each_hw_engine_class(class)
+			xe_for_each_engine_class(class)
 				test_all_active(fd, gt, class);
 
 	for (const struct section *s = sections; s->name; s++) {
 		igt_subtest_f("once-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_exec(fd, gt, class, 1, 1,
 						  s->flags);
 
 		igt_subtest_f("twice-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_exec(fd, gt, class, 1, 2,
 						  s->flags);
 
 		igt_subtest_f("many-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_exec(fd, gt, class, 1,
 						  s->flags & (REBIND | INVALIDATE) ?
 						  64 : 1024,
@@ -658,7 +658,7 @@ igt_main
 
 		igt_subtest_f("many-execqueues-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_exec(fd, gt, class, 16,
 						  s->flags & (REBIND | INVALIDATE) ?
 						  64 : 1024,
@@ -666,23 +666,23 @@ igt_main
 
 		igt_subtest_f("no-exec-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_exec(fd, gt, class, 1, 0,
 						  s->flags);
 
 		igt_subtest_f("once-cm-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_cm(fd, gt, class, 1, 1, s->flags);
 
 		igt_subtest_f("twice-cm-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_cm(fd, gt, class, 1, 2, s->flags);
 
 		igt_subtest_f("many-cm-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_cm(fd, gt, class, 1,
 						s->flags & (REBIND | INVALIDATE) ?
 						64 : 1024,
@@ -690,7 +690,7 @@ igt_main
 
 		igt_subtest_f("many-execqueues-cm-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_cm(fd, gt, class, 16,
 						s->flags & (REBIND | INVALIDATE) ?
 						64 : 1024,
@@ -698,7 +698,7 @@ igt_main
 
 		igt_subtest_f("no-exec-cm-%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_cm(fd, gt, class, 1, 0, s->flags);
 	}
 
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index 23acdd434..46b9dc2e0 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -336,36 +336,36 @@ igt_main
 
 	for (const struct section *s = sections; s->name; s++) {
 		igt_subtest_f("once-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 1, 1, s->flags);
 
 		igt_subtest_f("twice-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 2, 1, s->flags);
 
 		igt_subtest_f("many-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 1024, 1,
 					  s->flags);
 
 		igt_subtest_f("many-execqueues-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 16,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 1024, 1,
 					  s->flags);
 
 		igt_subtest_f("many-execqueues-many-vm-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 16,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 1024, 16,
 					  s->flags);
 
 		igt_subtest_f("no-exec-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 0, 1, s->flags);
 	}
 
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 98a98256e..a9f69deef 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -321,15 +321,15 @@ igt_main
 
 	for (const struct section *s = sections; s->name; s++) {
 		igt_subtest_f("once-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 1, s->flags);
 
 		igt_subtest_f("twice-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 2, s->flags);
 
 		igt_subtest_f("many-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 128,
@@ -339,7 +339,7 @@ igt_main
 			continue;
 
 		igt_subtest_f("many-execqueues-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 16,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 128,
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 3eb448ef4..4c85fce76 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -386,22 +386,22 @@ igt_main
 
 	for (const struct section *s = sections; s->name; s++) {
 		igt_subtest_f("once-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 1, s->flags);
 
 		igt_subtest_f("twice-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1, 2, s->flags);
 
 		igt_subtest_f("many-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 1,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 128,
 					  s->flags);
 
 		igt_subtest_f("many-execqueues-%s", s->name)
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				test_exec(fd, hwe, 16,
 					  s->flags & (REBIND | INVALIDATE) ?
 					  64 : 128,
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index d8b8e0355..988e63438 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -168,7 +168,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 
@@ -790,106 +790,106 @@ igt_main
 		fd = drm_open_driver(DRIVER_XE);
 
 	igt_subtest("spin")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_spin(fd, hwe);
 
 	igt_subtest("cancel")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(fd, hwe, 1, 1, CANCEL);
 
 	igt_subtest("execqueue-reset")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(fd, hwe, 2, 2, EXEC_QUEUE_RESET);
 
 	igt_subtest("cat-error")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(fd, hwe, 2, 2, CAT_ERROR);
 
 	igt_subtest("gt-reset")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(fd, hwe, 2, 2, GT_RESET);
 
 	igt_subtest("close-fd-no-exec")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(-1, hwe, 16, 0, CLOSE_FD);
 
 	igt_subtest("close-fd")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(-1, hwe, 16, 256, CLOSE_FD);
 
 	igt_subtest("close-execqueues-close-fd")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_legacy_mode(-1, hwe, 16, 256, CLOSE_FD |
 					 CLOSE_EXEC_QUEUES);
 
 	igt_subtest("cm-execqueue-reset")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_compute_mode(fd, hwe, 2, 2, EXEC_QUEUE_RESET);
 
 	igt_subtest("cm-cat-error")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_compute_mode(fd, hwe, 2, 2, CAT_ERROR);
 
 	igt_subtest("cm-gt-reset")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_compute_mode(fd, hwe, 2, 2, GT_RESET);
 
 	igt_subtest("cm-close-fd-no-exec")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_compute_mode(-1, hwe, 16, 0, CLOSE_FD);
 
 	igt_subtest("cm-close-fd")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_compute_mode(-1, hwe, 16, 256, CLOSE_FD);
 
 	igt_subtest("cm-close-execqueues-close-fd")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_compute_mode(-1, hwe, 16, 256, CLOSE_FD |
 					  CLOSE_EXEC_QUEUES);
 
 	for (const struct section *s = sections; s->name; s++) {
 		igt_subtest_f("%s-cancel", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(fd, gt, class, 1, 1,
 						      CANCEL | s->flags);
 
 		igt_subtest_f("%s-execqueue-reset", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(fd, gt, class, MAX_INSTANCE + 1,
 						      MAX_INSTANCE + 1,
 						      EXEC_QUEUE_RESET | s->flags);
 
 		igt_subtest_f("%s-cat-error", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(fd, gt, class, MAX_INSTANCE + 1,
 						      MAX_INSTANCE + 1,
 						      CAT_ERROR | s->flags);
 
 		igt_subtest_f("%s-gt-reset", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(fd, gt, class, MAX_INSTANCE + 1,
 						      MAX_INSTANCE + 1,
 						      GT_RESET | s->flags);
 
 		igt_subtest_f("%s-close-fd-no-exec", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(-1, gt, class, 16, 0,
 						      CLOSE_FD | s->flags);
 
 		igt_subtest_f("%s-close-fd", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(-1, gt, class, 16, 256,
 						      CLOSE_FD | s->flags);
 
 		igt_subtest_f("%s-close-execqueues-close-fd", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					test_balancer(-1, gt, class, 16, 256, CLOSE_FD |
 						      CLOSE_EXEC_QUEUES | s->flags);
 	}
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 9ee5edeb4..0b7b3d3e9 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -63,7 +63,7 @@ static void store(int fd)
 		.syncs = to_user_pointer(&sync),
 	};
 	struct data *data;
-	struct drm_xe_engine_class_instance *hw_engine;
+	struct drm_xe_engine_class_instance *engine;
 	uint32_t vm;
 	uint32_t exec_queue;
 	uint32_t syncobj;
@@ -80,16 +80,16 @@ static void store(int fd)
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
 
-	hw_engine = xe_hw_engine(fd, 1);
+	engine = xe_engine(fd, 1);
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, hw_engine->gt_id),
+			  vram_if_possible(fd, engine->gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
-	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
+	xe_vm_bind_async(fd, vm, engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
 	data = xe_bo_map(fd, bo, bo_size);
 	store_dword_batch(data, addr, value);
 
-	exec_queue = xe_exec_queue_create(fd, vm, hw_engine, 0);
+	exec_queue = xe_exec_queue_create(fd, vm, engine, 0);
 	exec.exec_queue_id = exec_queue;
 	exec.address = data->addr;
 	sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
@@ -242,7 +242,7 @@ static void store_all(int fd, int gt, int class)
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	data = xe_bo_map(fd, bo, bo_size);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 		eci[num_placements++] = *hwe;
@@ -309,16 +309,16 @@ igt_main
 
 	igt_subtest("basic-all") {
 		xe_for_each_gt(fd, gt)
-			xe_for_each_hw_engine_class(class)
+			xe_for_each_engine_class(class)
 				store_all(fd, gt, class);
 	}
 
 	igt_subtest("cachelines")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			store_cachelines(fd, hwe, 0);
 
 	igt_subtest("page-sized")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			store_cachelines(fd, hwe, PAGES);
 
 	igt_fixture {
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index f37fc612a..8a01b150d 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -81,7 +81,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 		owns_vm = true;
 	}
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 
@@ -969,22 +969,22 @@ static void threads(int fd, int flags)
 	uint64_t userptr = 0x00007000eadbe000;
 	pthread_mutex_t mutex;
 	pthread_cond_t cond;
-	int n_hw_engines = 0, class;
+	int n_engines = 0, class;
 	uint64_t i = 0;
 	uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
 	bool go = false;
 	int n_threads = 0;
 	int gt;
 
-	xe_for_each_hw_engine(fd, hwe)
-		++n_hw_engines;
+	xe_for_each_engine(fd, hwe)
+		++n_engines;
 
 	if (flags & BALANCER) {
 		xe_for_each_gt(fd, gt)
-			xe_for_each_hw_engine_class(class) {
+			xe_for_each_engine_class(class) {
 				int num_placements = 0;
 
-				xe_for_each_hw_engine(fd, hwe) {
+				xe_for_each_engine(fd, hwe) {
 					if (hwe->engine_class != class ||
 					    hwe->gt_id != gt)
 						continue;
@@ -992,11 +992,11 @@ static void threads(int fd, int flags)
 				}
 
 				if (num_placements > 1)
-					n_hw_engines += 2;
+					n_engines += 2;
 			}
 	}
 
-	threads_data = calloc(n_hw_engines, sizeof(*threads_data));
+	threads_data = calloc(n_engines, sizeof(*threads_data));
 	igt_assert(threads_data);
 
 	pthread_mutex_init(&mutex, 0);
@@ -1012,7 +1012,7 @@ static void threads(int fd, int flags)
 					       0);
 	}
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		threads_data[i].mutex = &mutex;
 		threads_data[i].cond = &cond;
 #define ADDRESS_SHIFT	39
@@ -1045,10 +1045,10 @@ static void threads(int fd, int flags)
 
 	if (flags & BALANCER) {
 		xe_for_each_gt(fd, gt)
-			xe_for_each_hw_engine_class(class) {
+			xe_for_each_engine_class(class) {
 				int num_placements = 0;
 
-				xe_for_each_hw_engine(fd, hwe) {
+				xe_for_each_engine(fd, hwe) {
 					if (hwe->engine_class != class ||
 					    hwe->gt_id != gt)
 						continue;
@@ -1123,7 +1123,7 @@ static void threads(int fd, int flags)
 	pthread_cond_broadcast(&cond);
 	pthread_mutex_unlock(&mutex);
 
-	for (i = 0; i < n_hw_engines; ++i)
+	for (i = 0; i < n_engines; ++i)
 		pthread_join(threads_data[i].thread, NULL);
 
 	if (vm_legacy_mode)
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 8d7b677b4..dd768ecdc 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -415,7 +415,7 @@ igt_main
 
 	igt_subtest("freq_fixed_exec") {
 		xe_for_each_gt(fd, gt) {
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				igt_fork(child, ncpus) {
 					igt_debug("Execution Started\n");
 					exec_basic(fd, hwe, MAX_N_EXEC_QUEUES, 16);
@@ -437,7 +437,7 @@ igt_main
 
 	igt_subtest("freq_range_exec") {
 		xe_for_each_gt(fd, gt) {
-			xe_for_each_hw_engine(fd, hwe)
+			xe_for_each_engine(fd, hwe)
 				igt_fork(child, ncpus) {
 					igt_debug("Execution Started\n");
 					exec_basic(fd, hwe, MAX_N_EXEC_QUEUES, 16);
diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
index eda9e5216..dbc5afc17 100644
--- a/tests/intel/xe_huc_copy.c
+++ b/tests/intel/xe_huc_copy.c
@@ -158,7 +158,7 @@ test_huc_copy(int fd)
 
 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class == DRM_XE_ENGINE_CLASS_VIDEO_DECODE &&
 		    !(tested_gts & BIT(hwe->gt_id))) {
 			tested_gts |= BIT(hwe->gt_id);
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index 00bd17d4c..e7a566f62 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -192,7 +192,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
 
 	if (new_context) {
 		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-		ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
+		ctx = xe_exec_queue_create(xe, vm, xe_engine(xe, 0), 0);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
 		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 5e3349247..e27cc4582 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -98,7 +98,7 @@ igt_simple_main
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	test_ping_pong(fd, xe_hw_engine(fd, 0));
+	test_ping_pong(fd, xe_engine(fd, 0));
 
 	drm_close_driver(fd);
 }
diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
index 8ef557a46..63a8eb9b2 100644
--- a/tests/intel/xe_perf_pmu.c
+++ b/tests/intel/xe_perf_pmu.c
@@ -209,7 +209,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 
 	config = engine_group_get_config(gt, class);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 
@@ -315,13 +315,13 @@ igt_main
 	for (const struct section *s = sections; s->name; s++) {
 		igt_subtest_f("%s", s->name)
 			xe_for_each_gt(fd, gt)
-				xe_for_each_hw_engine_class(class)
+				xe_for_each_engine_class(class)
 					if (class == s->class)
 						test_engine_group_busyness(fd, gt, class, s->name);
 	}
 
 	igt_subtest("any-engine-group-busy")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_any_engine_busyness(fd, hwe);
 
 	igt_fixture {
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index 2e5c61b59..d78ca31a8 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -471,7 +471,7 @@ igt_main
 		igt_device_get_pci_slot_name(device.fd_xe, device.pci_slot_name);
 
 		/* Always perform initial once-basic exec checking for health */
-		xe_for_each_hw_engine(device.fd_xe, hwe)
+		xe_for_each_engine(device.fd_xe, hwe)
 			test_exec(device, hwe, 1, 1, NO_SUSPEND, NO_RPM);
 
 		igt_pm_get_d3cold_allowed(device.pci_slot_name, &d3cold_allowed);
@@ -486,7 +486,7 @@ igt_main
 		}
 
 		igt_subtest_f("%s-basic-exec", s->name) {
-			xe_for_each_hw_engine(device.fd_xe, hwe)
+			xe_for_each_engine(device.fd_xe, hwe)
 				test_exec(device, hwe, 1, 2, s->state,
 					  NO_RPM);
 		}
@@ -494,13 +494,13 @@ igt_main
 		igt_subtest_f("%s-exec-after", s->name) {
 			igt_system_suspend_autoresume(s->state,
 						      SUSPEND_TEST_NONE);
-			xe_for_each_hw_engine(device.fd_xe, hwe)
+			xe_for_each_engine(device.fd_xe, hwe)
 				test_exec(device, hwe, 1, 2, NO_SUSPEND,
 					  NO_RPM);
 		}
 
 		igt_subtest_f("%s-multiple-execs", s->name) {
-			xe_for_each_hw_engine(device.fd_xe, hwe)
+			xe_for_each_engine(device.fd_xe, hwe)
 				test_exec(device, hwe, 16, 32, s->state,
 					  NO_RPM);
 		}
@@ -508,7 +508,7 @@ igt_main
 		for (const struct d_state *d = d_states; d->name; d++) {
 			igt_subtest_f("%s-%s-basic-exec", s->name, d->name) {
 				igt_assert(setup_d3(device, d->state));
-				xe_for_each_hw_engine(device.fd_xe, hwe)
+				xe_for_each_engine(device.fd_xe, hwe)
 					test_exec(device, hwe, 1, 2, s->state,
 						  NO_RPM);
 			}
@@ -523,14 +523,14 @@ igt_main
 
 		igt_subtest_f("%s-basic-exec", d->name) {
 			igt_assert(setup_d3(device, d->state));
-			xe_for_each_hw_engine(device.fd_xe, hwe)
+			xe_for_each_engine(device.fd_xe, hwe)
 				test_exec(device, hwe, 1, 1,
 					  NO_SUSPEND, d->state);
 		}
 
 		igt_subtest_f("%s-multiple-execs", d->name) {
 			igt_assert(setup_d3(device, d->state));
-			xe_for_each_hw_engine(device.fd_xe, hwe)
+			xe_for_each_engine(device.fd_xe, hwe)
 				test_exec(device, hwe, 16, 32,
 					  NO_SUSPEND, d->state);
 		}
diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
index 6c9a95429..4f590c83c 100644
--- a/tests/intel/xe_pm_residency.c
+++ b/tests/intel/xe_pm_residency.c
@@ -346,7 +346,7 @@ igt_main
 	igt_describe("Validate idle residency on exec");
 	igt_subtest("idle-residency-on-exec") {
 		xe_for_each_gt(fd, gt) {
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				if (gt == hwe->gt_id && !hwe->engine_instance)
 					idle_residency_on_exec(fd, hwe);
 			}
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 4a23dcb60..48042337a 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -181,7 +181,7 @@ test_query_engines(int fd)
 	struct drm_xe_engine_class_instance *hwe;
 	int i = 0;
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		igt_assert(hwe);
 		igt_info("engine %d: %s, engine instance: %d, tile: TILE-%d\n", i++,
 			 xe_engine_class_string(hwe->engine_class), hwe->engine_instance,
@@ -602,7 +602,7 @@ static void test_query_engine_cycles(int fd)
 
 	igt_require(query_engine_cycles_supported(fd));
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		igt_assert(hwe);
 		__engine_cycles(fd, hwe);
 	}
@@ -626,7 +626,7 @@ static void test_engine_cycles_invalid(int fd)
 	igt_require(query_engine_cycles_supported(fd));
 
 	/* get one engine */
-	xe_for_each_hw_engine(fd, hwe)
+	xe_for_each_engine(fd, hwe)
 		break;
 
 	/* sanity check engine selection is valid */
diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
index 6abe700da..2e2a0ed0e 100644
--- a/tests/intel/xe_spin_batch.c
+++ b/tests/intel/xe_spin_batch.c
@@ -72,8 +72,8 @@ static void spin_basic_all(int fd)
 
 	vm = xe_vm_create(fd, 0, 0);
 	ahnd = intel_allocator_open(fd, vm, INTEL_ALLOCATOR_RELOC);
-	spin = malloc(sizeof(*spin) * xe_number_hw_engines(fd));
-	xe_for_each_hw_engine(fd, hwe) {
+	spin = malloc(sizeof(*spin) * xe_number_engines(fd));
+	xe_for_each_engine(fd, hwe) {
 		igt_debug("Run on engine: %s:%d\n",
 			  xe_engine_class_string(hwe->engine_class), hwe->engine_instance);
 		spin[i] = igt_spin_new(fd, .ahnd = ahnd, .vm = vm, .hwe = hwe);
@@ -104,7 +104,7 @@ static void spin_all(int fd, int gt, int class)
 
 	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
 
-	xe_for_each_hw_engine(fd, hwe) {
+	xe_for_each_engine(fd, hwe) {
 		if (hwe->engine_class != class || hwe->gt_id != gt)
 			continue;
 		eci[num_placements++] = *hwe;
@@ -217,7 +217,7 @@ igt_main
 		spin_basic(fd);
 
 	igt_subtest("spin-batch")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			spin(fd, hwe);
 
 	igt_subtest("spin-basic-all")
@@ -225,7 +225,7 @@ igt_main
 
 	igt_subtest("spin-all") {
 		xe_for_each_gt(fd, gt)
-			xe_for_each_hw_engine_class(class)
+			xe_for_each_engine_class(class)
 				spin_all(fd, gt, class);
 	}
 
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index ec804febd..ea93d7b2e 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -1858,7 +1858,7 @@ igt_main
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_XE);
 
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			if (hwe->engine_class != DRM_XE_ENGINE_CLASS_COPY) {
 				hwe_non_copy = hwe;
 				break;
@@ -1890,45 +1890,45 @@ igt_main
 		userptr_invalid(fd);
 
 	igt_subtest("shared-pte-page")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			shared_pte_page(fd, hwe, 4,
 					xe_get_default_alignment(fd));
 
 	igt_subtest("shared-pde-page")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			shared_pte_page(fd, hwe, 4, 0x1000ul * 512);
 
 	igt_subtest("shared-pde2-page")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			shared_pte_page(fd, hwe, 4, 0x1000ul * 512 * 512);
 
 	igt_subtest("shared-pde3-page")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			shared_pte_page(fd, hwe, 4, 0x1000ul * 512 * 512 * 512);
 
 	igt_subtest("bind-execqueues-independent")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_bind_execqueues_independent(fd, hwe, 0);
 
 	igt_subtest("bind-execqueues-conflict")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_bind_execqueues_independent(fd, hwe, CONFLICT);
 
 	igt_subtest("bind-array-twice")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_bind_array(fd, hwe, 2, 0);
 
 	igt_subtest("bind-array-many")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_bind_array(fd, hwe, 16, 0);
 
 	igt_subtest("bind-array-exec_queue-twice")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_bind_array(fd, hwe, 2,
 					BIND_ARRAY_BIND_EXEC_QUEUE_FLAG);
 
 	igt_subtest("bind-array-exec_queue-many")
-		xe_for_each_hw_engine(fd, hwe)
+		xe_for_each_engine(fd, hwe)
 			test_bind_array(fd, hwe, 16,
 					BIND_ARRAY_BIND_EXEC_QUEUE_FLAG);
 
@@ -1936,41 +1936,41 @@ igt_main
 	     bind_size = bind_size << 1) {
 		igt_subtest_f("large-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size, 0);
 				break;
 			}
 		igt_subtest_f("large-split-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_SPLIT);
 				break;
 			}
 		igt_subtest_f("large-misaligned-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_MISALIGNED);
 				break;
 			}
 		igt_subtest_f("large-split-misaligned-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_SPLIT |
 						 LARGE_BIND_FLAG_MISALIGNED);
 				break;
 			}
 		igt_subtest_f("large-userptr-binds-%lld", (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_USERPTR);
 				break;
 			}
 		igt_subtest_f("large-userptr-split-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_SPLIT |
 						 LARGE_BIND_FLAG_USERPTR);
@@ -1978,7 +1978,7 @@ igt_main
 			}
 		igt_subtest_f("large-userptr-misaligned-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_MISALIGNED |
 						 LARGE_BIND_FLAG_USERPTR);
@@ -1986,7 +1986,7 @@ igt_main
 			}
 		igt_subtest_f("large-userptr-split-misaligned-binds-%lld",
 			      (long long)bind_size)
-			xe_for_each_hw_engine(fd, hwe) {
+			xe_for_each_engine(fd, hwe) {
 				test_large_binds(fd, hwe, 4, 16, bind_size,
 						 LARGE_BIND_FLAG_SPLIT |
 						 LARGE_BIND_FLAG_MISALIGNED |
@@ -1997,13 +1997,13 @@ igt_main
 
 	bind_size = (0x1ull << 21) + (0x1ull << 20);
 	igt_subtest_f("mixed-binds-%lld", (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size, 0);
 			break;
 		}
 
 	igt_subtest_f("mixed-misaligned-binds-%lld", (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size,
 					 LARGE_BIND_FLAG_MISALIGNED);
 			break;
@@ -2011,14 +2011,14 @@ igt_main
 
 	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
 	igt_subtest_f("mixed-binds-%lld", (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size, 0);
 			break;
 		}
 
 	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
 	igt_subtest_f("mixed-misaligned-binds-%lld", (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size,
 					 LARGE_BIND_FLAG_MISALIGNED);
 			break;
@@ -2026,7 +2026,7 @@ igt_main
 
 	bind_size = (0x1ull << 21) + (0x1ull << 20);
 	igt_subtest_f("mixed-userptr-binds-%lld", (long long) bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size,
 					 LARGE_BIND_FLAG_USERPTR);
 			break;
@@ -2034,7 +2034,7 @@ igt_main
 
 	igt_subtest_f("mixed-userptr-misaligned-binds-%lld",
 		      (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size,
 					 LARGE_BIND_FLAG_MISALIGNED |
 					 LARGE_BIND_FLAG_USERPTR);
@@ -2043,7 +2043,7 @@ igt_main
 
 	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
 	igt_subtest_f("mixed-userptr-binds-%lld", (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size,
 					 LARGE_BIND_FLAG_USERPTR);
 			break;
@@ -2052,7 +2052,7 @@ igt_main
 	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
 	igt_subtest_f("mixed-userptr-misaligned-binds-%lld",
 		      (long long)bind_size)
-		xe_for_each_hw_engine(fd, hwe) {
+		xe_for_each_engine(fd, hwe) {
 			test_large_binds(fd, hwe, 4, 16, bind_size,
 					 LARGE_BIND_FLAG_MISALIGNED |
 					 LARGE_BIND_FLAG_USERPTR);
-- 
2.34.1


* [igt-dev] [PATCH v1 08/13] drm-uapi/xe: Align with drm_xe_query_engine_info
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (6 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 07/13] xe: s/hw_engine/engine Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size Francois Dugast
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe: Make DRM_XE_DEVICE_QUERY_ENGINES future proof")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 benchmarks/gem_wsim.c             |  2 +-
 include/drm-uapi/xe_drm.h         | 24 +++++++++++++++++++++++-
 lib/xe/xe_query.c                 | 16 ++++++++--------
 lib/xe/xe_query.h                 |  8 ++++----
 tests/intel/xe_create.c           |  7 ++++---
 tests/intel/xe_drm_fdinfo.c       |  5 +++--
 tests/intel/xe_exec_store.c       |  8 ++++----
 tests/intel/xe_intel_bb.c         |  3 ++-
 tests/intel/xe_noexec_ping_pong.c |  5 +++--
 9 files changed, 52 insertions(+), 26 deletions(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index d451b8733..5d5353c94 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -688,7 +688,7 @@ xe_get_default_engine(void)
 	struct drm_xe_engine_class_instance default_hwe, *hwe;
 
 	/* select RCS0 | CCS0 or first available engine */
-	default_hwe = *xe_engine(fd, 0);
+	default_hwe = xe_engine(fd, 0)->instance;
 	xe_for_each_engine(fd, hwe) {
 		if ((hwe->engine_class == DRM_XE_ENGINE_CLASS_RENDER ||
 		     hwe->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE) &&
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 200f018e1..7aff66830 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -124,7 +124,14 @@ struct xe_user_extension {
 #define DRM_IOCTL_XE_EXEC_QUEUE_GET_PROPERTY	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_GET_PROPERTY, struct drm_xe_exec_queue_get_property)
 #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
 
-/** struct drm_xe_engine_class_instance - instance of an engine class */
+/**
+ * struct drm_xe_engine_class_instance - instance of an engine class
+ *
+ * It is returned as part of @drm_xe_query_engine_info, and it is also
+ * used as the input for engine selection in both @drm_xe_exec_queue_create
+ * and @drm_xe_query_engine_cycles.
+ *
+ */
 struct drm_xe_engine_class_instance {
 #define DRM_XE_ENGINE_CLASS_RENDER		0
 #define DRM_XE_ENGINE_CLASS_COPY		1
@@ -145,6 +152,21 @@ struct drm_xe_engine_class_instance {
 	__u16 pad;
 };
 
+/**
+ * struct drm_xe_query_engine_info - describe hardware engine
+ *
+ * If a query is made with a struct @drm_xe_device_query where .query
+ * is equal to %DRM_XE_DEVICE_QUERY_ENGINES, then the reply uses an array of
+ * struct @drm_xe_query_engine_info in .data.
+ */
+struct drm_xe_query_engine_info {
+	/** @instance: The @drm_xe_engine_class_instance */
+	struct drm_xe_engine_class_instance instance;
+
+	/** @reserved: Reserved */
+	__u64 reserved[5];
+};
+
 /**
  * enum drm_xe_memory_class - Supported memory classes.
  */
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index ef7aaa6a1..f9dec1f7a 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -72,10 +72,10 @@ static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
 	return regions;
 }
 
-static struct drm_xe_engine_class_instance *
-xe_query_engines_new(int fd, unsigned int *num_engines)
+static struct drm_xe_query_engine_info *
+xe_query_engines(int fd, unsigned int *num_engines)
 {
-	struct drm_xe_engine_class_instance *engines;
+	struct drm_xe_query_engine_info *engines;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
 		.query = DRM_XE_DEVICE_QUERY_ENGINES,
@@ -253,7 +253,7 @@ struct xe_device *xe_device_get(int fd)
 	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
 	xe_dev->gt_list = xe_query_gt_list_new(fd);
 	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
-	xe_dev->engines = xe_query_engines_new(fd, &xe_dev->number_engines);
+	xe_dev->engines = xe_query_engines(fd, &xe_dev->number_engines);
 	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
 	xe_dev->vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->vram_size));
 	xe_dev->visible_vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->visible_vram_size));
@@ -427,16 +427,16 @@ uint64_t vram_if_possible(int fd, int gt)
  *
  * Returns engines array of xe device @fd.
  */
-xe_dev_FN(xe_engines, engines, struct drm_xe_engine_class_instance *);
+xe_dev_FN(xe_engines, engines, struct drm_xe_query_engine_info *);
 
 /**
  * xe_engine:
  * @fd: xe device fd
  * @idx: engine index
  *
- * Returns engine instance of xe device @fd and @idx.
+ * Returns engine info of xe device @fd and @idx.
  */
-struct drm_xe_engine_class_instance *xe_engine(int fd, int idx)
+struct drm_xe_query_engine_info *xe_engine(int fd, int idx)
 {
 	struct xe_device *xe_dev;
 
@@ -658,7 +658,7 @@ bool xe_has_engine_class(int fd, uint16_t engine_class)
 	igt_assert(xe_dev);
 
 	for (int i = 0; i < xe_dev->number_engines; i++)
-		if (xe_dev->engines[i].engine_class == engine_class)
+		if (xe_dev->engines[i].instance.engine_class == engine_class)
 			return true;
 
 	return false;
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index bf9f2b955..fede00036 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -33,7 +33,7 @@ struct xe_device {
 	uint64_t memory_regions;
 
 	/** @engines: array of hardware engines */
-	struct drm_xe_engine_class_instance *engines;
+	struct drm_xe_query_engine_info *engines;
 
 	/** @number_engines: length of hardware engines array */
 	unsigned int number_engines;
@@ -62,7 +62,7 @@ struct xe_device {
 
 #define xe_for_each_engine(__fd, __hwe) \
 	for (int __i = 0; __i < xe_number_engines(__fd) && \
-	     (__hwe = xe_engine(__fd, __i)); ++__i)
+	     (__hwe = &xe_engine(__fd, __i)->instance); ++__i)
 #define xe_for_each_engine_class(__class) \
 	for (__class = 0; __class < DRM_XE_ENGINE_CLASS_COMPUTE + 1; \
 	     ++__class)
@@ -81,8 +81,8 @@ uint64_t all_memory_regions(int fd);
 uint64_t system_memory(int fd);
 uint64_t vram_memory(int fd, int gt);
 uint64_t vram_if_possible(int fd, int gt);
-struct drm_xe_engine_class_instance *xe_engines(int fd);
-struct drm_xe_engine_class_instance *xe_engine(int fd, int idx);
+struct drm_xe_query_engine_info *xe_engines(int fd);
+struct drm_xe_query_engine_info *xe_engine(int fd, int idx);
 struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
 const char *xe_region_name(uint64_t region);
 uint16_t xe_region_class(int fd, uint64_t region);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index 9d71b7463..b04a3443f 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -148,7 +148,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 	igt_nsec_elapsed(&tv);
 
 	igt_fork(n, nproc) {
-		struct drm_xe_engine_class_instance *hwe;
+		struct drm_xe_query_engine_info *engine;
 		uint32_t exec_queue, exec_queues[exec_queues_per_process];
 		int idx, err, i;
 
@@ -156,8 +156,9 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 
 		for (i = 0; i < exec_queues_per_process; i++) {
 			idx = rand() % num_engines;
-			hwe = xe_engine(fd, idx);
-			err = __xe_exec_queue_create(fd, vm, hwe, 0, &exec_queue);
+			engine = xe_engine(fd, idx);
+			err = __xe_exec_queue_create(fd, vm, &engine->instance,
+						     0, &exec_queue);
 			igt_debug("[%2d] Create exec_queue: err=%d, exec_queue=%u [idx = %d]\n",
 				  n, err, exec_queue, i);
 			if (err)
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index d50cc6df1..cec3e0825 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -40,7 +40,7 @@ IGT_TEST_DESCRIPTION("Read and verify drm client memory consumption using fdinfo
 #define BO_SIZE (65536)
 
 /* Subtests */
-static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
+static void test_active(int fd, struct drm_xe_query_engine_info *engine)
 {
 	struct drm_xe_query_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(fd), region;
@@ -89,7 +89,8 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 		data = xe_bo_map(fd, bo, bo_size);
 
 		for (i = 0; i < N_EXEC_QUEUES; i++) {
-			exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
+			exec_queues[i] = xe_exec_queue_create(fd, vm,
+							      &engine->instance, 0);
 			bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
 			syncobjs[i] = syncobj_create(fd, 0);
 		}
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 0b7b3d3e9..48e843af5 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -63,7 +63,7 @@ static void store(int fd)
 		.syncs = to_user_pointer(&sync),
 	};
 	struct data *data;
-	struct drm_xe_engine_class_instance *engine;
+	struct drm_xe_query_engine_info *engine;
 	uint32_t vm;
 	uint32_t exec_queue;
 	uint32_t syncobj;
@@ -82,14 +82,14 @@ static void store(int fd)
 
 	engine = xe_engine(fd, 1);
 	bo = xe_bo_create(fd, vm, bo_size,
-			  vram_if_possible(fd, engine->gt_id),
+			  vram_if_possible(fd, engine->instance.gt_id),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
-	xe_vm_bind_async(fd, vm, engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
+	xe_vm_bind_async(fd, vm, engine->instance.gt_id, bo, 0, addr, bo_size, &sync, 1);
 	data = xe_bo_map(fd, bo, bo_size);
 	store_dword_batch(data, addr, value);
 
-	exec_queue = xe_exec_queue_create(fd, vm, engine, 0);
+	exec_queue = xe_exec_queue_create(fd, vm, &engine->instance, 0);
 	exec.exec_queue_id = exec_queue;
 	exec.address = data->addr;
 	sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index e7a566f62..b64812f9d 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -192,7 +192,8 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
 
 	if (new_context) {
 		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-		ctx = xe_exec_queue_create(xe, vm, xe_engine(xe, 0), 0);
+		ctx = xe_exec_queue_create(xe, vm, &xe_engine(xe, 0)->instance,
+					   0);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
 		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index e27cc4582..585af413d 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -43,7 +43,7 @@
  *	there is work queued on one of the VM's compute exec_queues.
  */
 
-static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
+static void test_ping_pong(int fd, struct drm_xe_query_engine_info *engine)
 {
 	size_t vram_size = xe_vram_size(fd, 0);
 	size_t align = xe_get_default_alignment(fd);
@@ -75,7 +75,8 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 			xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
 				   bo_size, NULL, 0);
 		}
-		exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci, 0);
+		exec_queues[i] = xe_exec_queue_create(fd, vm[i],
+						      &engine->instance, 0);
 	}
 
 	igt_info("Now sleeping for %ds.\n", SECONDS_TO_WAIT);
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (7 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 08/13] drm-uapi/xe: Align with drm_xe_query_engine_info Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-17 18:44   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 10/13] drm-uapi/xe: Align on a common way to return arrays (memory regions) Francois Dugast
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev

Align with kernel commit ("drm/xe/uapi: Reject bo creation of unaligned size")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h          | 17 +++++++++--------
 tests/intel/xe_mmap.c              | 22 ++++++++++++----------
 tests/intel/xe_prime_self_import.c | 26 +++++++++++++++++++++++++-
 tests/intel/xe_vm.c                | 13 ++++++-------
 4 files changed, 52 insertions(+), 26 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 7aff66830..aa66b62e2 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -206,11 +206,13 @@ struct drm_xe_query_mem_region {
 	 *
 	 * When the kernel allocates memory for this region, the
 	 * underlying pages will be at least @min_page_size in size.
-	 *
-	 * Important note: When userspace allocates a GTT address which
-	 * can point to memory allocated from this region, it must also
-	 * respect this minimum alignment. This is enforced by the
-	 * kernel.
+	 * Buffer objects with an allowable placement in this region must be
+	 * created with a size aligned to this value.
+	 * GPU virtual address mappings of (parts of) buffer objects that
+	 * may be placed in this region must also have their GPU virtual
+	 * address and range aligned to this value.
+	 * Affected IOCTLS will return %-EINVAL if alignment restrictions are
+	 * not met.
 	 */
 	__u32 min_page_size;
 	/**
@@ -515,9 +517,8 @@ struct drm_xe_gem_create {
 	__u64 extensions;
 
 	/**
-	 * @size: Requested size for the object
-	 *
-	 * The (page-aligned) allocated size for the object will be returned.
+	 * @size: Size of the object to be created, must match region
+	 * (system or vram) minimum alignment (&min_page_size).
 	 */
 	__u64 size;
 
diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
index 965644e22..d6c8d5114 100644
--- a/tests/intel/xe_mmap.c
+++ b/tests/intel/xe_mmap.c
@@ -47,17 +47,18 @@
 static void
 test_mmap(int fd, uint32_t placement, uint32_t flags)
 {
+	size_t bo_size = xe_get_default_alignment(fd);
 	uint32_t bo;
 	void *map;
 
 	igt_require_f(placement, "Device doesn't support such memory region\n");
 
-	bo = xe_bo_create(fd, 0, 4096, placement, flags);
+	bo = xe_bo_create(fd, 0, bo_size, placement, flags);
 
-	map = xe_bo_map(fd, bo, 4096);
+	map = xe_bo_map(fd, bo, bo_size);
 	strcpy(map, "Write some data to the BO!");
 
-	munmap(map, 4096);
+	munmap(map, bo_size);
 
 	gem_close(fd, bo);
 }
@@ -156,13 +157,14 @@ static void trap_sigbus(uint32_t *ptr)
  */
 static void test_small_bar(int fd)
 {
+	size_t page_size = xe_get_default_alignment(fd);
 	uint32_t visible_size = xe_visible_vram_size(fd, 0);
 	uint32_t bo;
 	uint64_t mmo;
 	uint32_t *map;
 
 	/* 2BIG invalid case */
-	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + 4096,
+	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + page_size,
 				      vram_memory(fd, 0),
 				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM,
 				      &bo),
@@ -172,12 +174,12 @@ static void test_small_bar(int fd)
 	bo = xe_bo_create(fd, 0, visible_size / 4, vram_memory(fd, 0),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	mmo = xe_bo_mmap_offset(fd, bo);
-	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
+	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
 
 	map[0] = 0xdeadbeaf;
 
-	munmap(map, 4096);
+	munmap(map, page_size);
 	gem_close(fd, bo);
 
 	/* Normal operation with system memory spilling */
@@ -186,18 +188,18 @@ static void test_small_bar(int fd)
 			  system_memory(fd),
 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	mmo = xe_bo_mmap_offset(fd, bo);
-	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
+	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
 
 	map[0] = 0xdeadbeaf;
 
-	munmap(map, 4096);
+	munmap(map, page_size);
 	gem_close(fd, bo);
 
 	/* Bogus operation with SIGBUS */
-	bo = xe_bo_create(fd, 0, visible_size + 4096, vram_memory(fd, 0), 0);
+	bo = xe_bo_create(fd, 0, visible_size + page_size, vram_memory(fd, 0), 0);
 	mmo = xe_bo_mmap_offset(fd, bo);
-	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
+	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
 	igt_assert(map != MAP_FAILED);
 
 	trap_sigbus(map);
diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
index 9a263d326..504e6a13d 100644
--- a/tests/intel/xe_prime_self_import.c
+++ b/tests/intel/xe_prime_self_import.c
@@ -61,13 +61,19 @@ static int g_time_out = 5;
 static pthread_barrier_t g_barrier;
 static size_t bo_size;
 
+static size_t get_min_bo_size(int fd1, int fd2)
+{
+	return 4 * max(xe_get_default_alignment(fd1),
+		       xe_get_default_alignment(fd2));
+}
+
 static void
 check_bo(int fd1, uint32_t handle1, int fd2, uint32_t handle2)
 {
+	size_t bo_size = get_min_bo_size(fd1, fd2);
 	char *ptr1, *ptr2;
 	int i;
 
-
 	ptr1 = xe_bo_map(fd1, handle1, bo_size);
 	ptr2 = xe_bo_map(fd2, handle2, bo_size);
 
@@ -97,6 +103,7 @@ check_bo(int fd1, uint32_t handle1, int fd2, uint32_t handle2)
 static void test_with_fd_dup(void)
 {
 	int fd1, fd2;
+	size_t bo_size;
 	uint32_t handle, handle_import;
 	int dma_buf_fd1, dma_buf_fd2;
 
@@ -105,6 +112,8 @@ static void test_with_fd_dup(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
+	bo_size = get_min_bo_size(fd1, fd2);
+
 	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
@@ -131,6 +140,7 @@ static void test_with_fd_dup(void)
 static void test_with_two_bos(void)
 {
 	int fd1, fd2;
+	size_t bo_size;
 	uint32_t handle1, handle2, handle_import;
 	int dma_buf_fd;
 
@@ -139,6 +149,8 @@ static void test_with_two_bos(void)
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
+	bo_size = get_min_bo_size(fd1, fd2);
+
 	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
@@ -171,12 +183,15 @@ static void test_with_two_bos(void)
 static void test_with_one_bo_two_files(void)
 {
 	int fd1, fd2;
+	size_t bo_size;
 	uint32_t handle_import, handle_open, handle_orig, flink_name;
 	int dma_buf_fd1, dma_buf_fd2;
 
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
+	bo_size = get_min_bo_size(fd1, fd2);
+
 	handle_orig = xe_bo_create(fd1, 0, bo_size,
 				   vram_if_possible(fd1, 0),
 				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
@@ -205,12 +220,15 @@ static void test_with_one_bo_two_files(void)
 static void test_with_one_bo(void)
 {
 	int fd1, fd2;
+	size_t bo_size;
 	uint32_t handle, handle_import1, handle_import2, handle_selfimport;
 	int dma_buf_fd;
 
 	fd1 = drm_open_driver(DRIVER_XE);
 	fd2 = drm_open_driver(DRIVER_XE);
 
+	bo_size = get_min_bo_size(fd1, fd2);
+
 	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
 
@@ -279,6 +297,7 @@ static void *thread_fn_reimport_vs_close(void *p)
 	pthread_t *threads;
 	int r, i, num_threads;
 	int fds[2];
+	size_t bo_size;
 	int obj_count;
 	void *status;
 	uint32_t handle;
@@ -298,6 +317,8 @@ static void *thread_fn_reimport_vs_close(void *p)
 
 	fds[0] = drm_open_driver(DRIVER_XE);
 
+	bo_size = xe_get_default_alignment(fds[0]);
+
 	handle = xe_bo_create(fds[0], 0, bo_size,
 			      vram_if_possible(fds[0], 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
@@ -336,6 +357,7 @@ static void *thread_fn_export_vs_close(void *p)
 	struct drm_prime_handle prime_h2f;
 	struct drm_gem_close close_bo;
 	int fd = (uintptr_t)p;
+	size_t bo_size = xe_get_default_alignment(fd);
 	uint32_t handle;
 
 	pthread_barrier_wait(&g_barrier);
@@ -463,6 +485,7 @@ static void test_llseek_size(void)
 static void test_llseek_bad(void)
 {
 	int fd;
+	size_t bo_size;
 	uint32_t handle;
 	int dma_buf_fd;
 
@@ -470,6 +493,7 @@ static void test_llseek_bad(void)
 
 	fd = drm_open_driver(DRIVER_XE);
 
+	bo_size = 4 * xe_get_default_alignment(fd);
 	handle = xe_bo_create(fd, 0, bo_size,
 			      vram_if_possible(fd, 0),
 			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index ea93d7b2e..2c563c64f 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -1310,11 +1310,10 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
 		t.fd = fd;
 		t.vm = vm;
-#define PAGE_SIZE	4096
-		t.addr = addr + PAGE_SIZE / 2;
+		t.addr = addr + page_size / 2;
 		t.eci = eci;
 		t.exit = &exit;
-		t.map = map + PAGE_SIZE / 2;
+		t.map = map + page_size / 2;
 		t.barrier = &barrier;
 		pthread_barrier_init(&barrier, NULL, 2);
 		pthread_create(&t.thread, 0, hammer_thread, &t);
@@ -1367,8 +1366,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert_eq(data->data, 0xc0ffee);
 	}
 	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
-		memset(map, 0, PAGE_SIZE / 2);
-		memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
+		memset(map, 0, page_size / 2);
+		memset(map + page_size, 0, bo_size - page_size);
 	} else {
 		memset(map, 0, bo_size);
 	}
@@ -1417,8 +1416,8 @@ try_again_after_invalidate:
 		}
 	}
 	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
-		memset(map, 0, PAGE_SIZE / 2);
-		memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
+		memset(map, 0, page_size / 2);
+		memset(map + page_size, 0, bo_size - page_size);
 	} else {
 		memset(map, 0, bo_size);
 	}
-- 
2.34.1


* [igt-dev] [PATCH v1 10/13] drm-uapi/xe: Align on a common way to return arrays (memory regions)
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (8 preceding siblings ...)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size Francois Dugast
@ 2023-11-16 14:53 ` Francois Dugast
  2023-11-17 18:46   ` Kamil Konieczny
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 11/13] drm-uapi/xe: Align on a common way to return arrays (gt) Francois Dugast
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 30+ messages in thread
From: Francois Dugast @ 2023-11-16 14:53 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Align on a common way to return
arrays (memory regions)")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h   | 22 ++++++++---------
 lib/xe/xe_query.c           | 48 ++++++++++++++++++-------------------
 lib/xe/xe_query.h           |  4 ++--
 lib/xe/xe_util.c            |  6 ++---
 tests/intel/xe_create.c     |  2 +-
 tests/intel/xe_drm_fdinfo.c |  8 +++----
 tests/intel/xe_pm.c         | 12 +++++-----
 tests/intel/xe_query.c      | 44 +++++++++++++++++-----------------
 tests/kms_plane.c           |  2 +-
 9 files changed, 74 insertions(+), 74 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index aa66b62e2..61de386f5 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -182,10 +182,10 @@ enum drm_xe_memory_class {
 };
 
 /**
- * struct drm_xe_query_mem_region - Describes some region as known to
+ * struct drm_xe_mem_region - Describes some region as known to
  * the driver.
  */
-struct drm_xe_query_mem_region {
+struct drm_xe_mem_region {
 	/**
 	 * @mem_class: The memory class describing this region.
 	 *
@@ -315,19 +315,19 @@ struct drm_xe_query_engine_cycles {
 };
 
 /**
- * struct drm_xe_query_mem_regions - describe memory regions
+ * struct drm_xe_query_mem_region - describe memory regions
  *
  * If a query is made with a struct drm_xe_device_query where .query
- * is equal to DRM_XE_DEVICE_QUERY_MEM_REGIONS, then the reply uses
- * struct drm_xe_query_mem_regions in .data.
+ * is equal to DRM_XE_DEVICE_QUERY_MEM_REGION, then the reply uses
+ * struct drm_xe_query_mem_region in .data.
  */
-struct drm_xe_query_mem_regions {
-	/** @num_regions: number of memory regions returned in @regions */
-	__u32 num_regions;
+struct drm_xe_query_mem_region {
+	/** @num_mem_regions: number of memory regions returned in @mem_regions */
+	__u32 num_mem_regions;
 	/** @pad: MBZ */
 	__u32 pad;
-	/** @regions: The returned regions for this device */
-	struct drm_xe_query_mem_region regions[];
+	/** @mem_regions: The returned memory regions for this device */
+	struct drm_xe_mem_region mem_regions[];
 };
 
 /**
@@ -493,7 +493,7 @@ struct drm_xe_device_query {
 	__u64 extensions;
 
 #define DRM_XE_DEVICE_QUERY_ENGINES		0
-#define DRM_XE_DEVICE_QUERY_MEM_REGIONS		1
+#define DRM_XE_DEVICE_QUERY_MEM_REGION		1
 #define DRM_XE_DEVICE_QUERY_CONFIG		2
 #define DRM_XE_DEVICE_QUERY_GT_LIST		3
 #define DRM_XE_DEVICE_QUERY_HWCONFIG		4
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index f9dec1f7a..4aeeee928 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -97,12 +97,12 @@ xe_query_engines(int fd, unsigned int *num_engines)
 	return engines;
 }
 
-static struct drm_xe_query_mem_regions *xe_query_mem_regions_new(int fd)
+static struct drm_xe_query_mem_region *xe_query_mem_regions_new(int fd)
 {
-	struct drm_xe_query_mem_regions *mem_regions;
+	struct drm_xe_query_mem_region *mem_regions;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
+		.query = DRM_XE_DEVICE_QUERY_MEM_REGION,
 		.size = 0,
 		.data = 0,
 	};
@@ -129,44 +129,44 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
 	return region;
 }
 
-static uint64_t gt_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
+static uint64_t gt_vram_size(const struct drm_xe_query_mem_region *mem_regions,
 			     const struct drm_xe_query_gt_list *gt_list, int gt)
 {
 	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
-	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
-		return mem_regions->regions[region_idx].total_size;
+	if (XE_IS_CLASS_VRAM(&mem_regions->mem_regions[region_idx]))
+		return mem_regions->mem_regions[region_idx].total_size;
 
 	return 0;
 }
 
-static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
+static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_region *mem_regions,
 				     const struct drm_xe_query_gt_list *gt_list, int gt)
 {
 	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
-	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
-		return mem_regions->regions[region_idx].cpu_visible_size;
+	if (XE_IS_CLASS_VRAM(&mem_regions->mem_regions[region_idx]))
+		return mem_regions->mem_regions[region_idx].cpu_visible_size;
 
 	return 0;
 }
 
-static bool __mem_has_vram(struct drm_xe_query_mem_regions *mem_regions)
+static bool __mem_has_vram(struct drm_xe_query_mem_region *mem_regions)
 {
-	for (int i = 0; i < mem_regions->num_regions; i++)
-		if (XE_IS_CLASS_VRAM(&mem_regions->regions[i]))
+	for (int i = 0; i < mem_regions->num_mem_regions; i++)
+		if (XE_IS_CLASS_VRAM(&mem_regions->mem_regions[i]))
 			return true;
 
 	return false;
 }
 
-static uint32_t __mem_default_alignment(struct drm_xe_query_mem_regions *mem_regions)
+static uint32_t __mem_default_alignment(struct drm_xe_query_mem_region *mem_regions)
 {
 	uint32_t alignment = XE_DEFAULT_ALIGNMENT;
 
-	for (int i = 0; i < mem_regions->num_regions; i++)
-		if (alignment < mem_regions->regions[i].min_page_size)
-			alignment = mem_regions->regions[i].min_page_size;
+	for (int i = 0; i < mem_regions->num_mem_regions; i++)
+		if (alignment < mem_regions->mem_regions[i].min_page_size)
+			alignment = mem_regions->mem_regions[i].min_page_size;
 
 	return alignment;
 }
@@ -454,16 +454,16 @@ struct drm_xe_query_engine_info *xe_engine(int fd, int idx)
  *
  * Returns memory region structure for @region mask.
  */
-struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region)
+struct drm_xe_mem_region *xe_mem_region(int fd, uint64_t region)
 {
 	struct xe_device *xe_dev;
 	int region_idx = ffs(region) - 1;
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
-	igt_assert(xe_dev->mem_regions->num_regions > region_idx);
+	igt_assert(xe_dev->mem_regions->num_mem_regions > region_idx);
 
-	return &xe_dev->mem_regions->regions[region_idx];
+	return &xe_dev->mem_regions->mem_regions[region_idx];
 }
 
 /**
@@ -501,7 +501,7 @@ const char *xe_region_name(uint64_t region)
  */
 uint16_t xe_region_class(int fd, uint64_t region)
 {
-	struct drm_xe_query_mem_region *memreg;
+	struct drm_xe_mem_region *memreg;
 
 	memreg = xe_mem_region(fd, region);
 
@@ -593,21 +593,21 @@ uint64_t xe_vram_available(int fd, int gt)
 {
 	struct xe_device *xe_dev;
 	int region_idx;
-	struct drm_xe_query_mem_region *mem_region;
-	struct drm_xe_query_mem_regions *mem_regions;
+	struct drm_xe_mem_region *mem_region;
+	struct drm_xe_query_mem_region *mem_regions;
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
 
 	region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
-	mem_region = &xe_dev->mem_regions->regions[region_idx];
+	mem_region = &xe_dev->mem_regions->mem_regions[region_idx];
 
 	if (XE_IS_CLASS_VRAM(mem_region)) {
 		uint64_t available_vram;
 
 		mem_regions = xe_query_mem_regions_new(fd);
 		pthread_mutex_lock(&cache.cache_mutex);
-		mem_region->used = mem_regions->regions[region_idx].used;
+		mem_region->used = mem_regions->mem_regions[region_idx].used;
 		available_vram = mem_region->total_size - mem_region->used;
 		pthread_mutex_unlock(&cache.cache_mutex);
 		free(mem_regions);
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index fede00036..1c76b0caf 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -39,7 +39,7 @@ struct xe_device {
 	unsigned int number_engines;
 
 	/** @mem_regions: regions memory information and usage */
-	struct drm_xe_query_mem_regions *mem_regions;
+	struct drm_xe_query_mem_region *mem_regions;
 
 	/** @vram_size: array of vram sizes for all gt_list */
 	uint64_t *vram_size;
@@ -83,7 +83,7 @@ uint64_t vram_memory(int fd, int gt);
 uint64_t vram_if_possible(int fd, int gt);
 struct drm_xe_query_engine_info *xe_engines(int fd);
 struct drm_xe_query_engine_info *xe_engine(int fd, int idx);
-struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
+struct drm_xe_mem_region *xe_mem_region(int fd, uint64_t region);
 const char *xe_region_name(uint64_t region);
 uint16_t xe_region_class(int fd, uint64_t region);
 uint32_t xe_min_page_size(int fd, uint64_t region);
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 742e6333e..1bb52b142 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -10,7 +10,7 @@
 #include "xe/xe_query.h"
 #include "xe/xe_util.h"
 
-static bool __region_belongs_to_regions_type(struct drm_xe_query_mem_region *region,
+static bool __region_belongs_to_regions_type(struct drm_xe_mem_region *region,
 					     uint32_t *mem_regions_type,
 					     int num_regions)
 {
@@ -23,7 +23,7 @@ static bool __region_belongs_to_regions_type(struct drm_xe_query_mem_region *reg
 struct igt_collection *
 __xe_get_memory_region_set(int xe, uint32_t *mem_regions_type, int num_regions)
 {
-	struct drm_xe_query_mem_region *memregion;
+	struct drm_xe_mem_region *memregion;
 	struct igt_collection *set = NULL;
 	uint64_t memreg = all_memory_regions(xe), region;
 	int count = 0, pos = 0;
@@ -78,7 +78,7 @@ char *xe_memregion_dynamic_subtest_name(int xe, struct igt_collection *set)
 	igt_assert(name);
 
 	for_each_collection_data(data, set) {
-		struct drm_xe_query_mem_region *memreg;
+		struct drm_xe_mem_region *memreg;
 		int r;
 
 		region = data->value;
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index b04a3443f..19582f94d 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -48,7 +48,7 @@ static int __create_bo(int fd, uint32_t vm, uint64_t size, uint32_t placement,
  */
 static void create_invalid_size(int fd)
 {
-	struct drm_xe_query_mem_region *memregion;
+	struct drm_xe_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(fd), region;
 	uint32_t vm;
 	uint32_t handle;
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index cec3e0825..fc39649ea 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -42,7 +42,7 @@ IGT_TEST_DESCRIPTION("Read and verify drm client memory consumption using fdinfo
 /* Subtests */
 static void test_active(int fd, struct drm_xe_query_engine_info *engine)
 {
-	struct drm_xe_query_mem_region *memregion;
+	struct drm_xe_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(fd), region;
 	struct drm_client_fdinfo info = { };
 	uint32_t vm;
@@ -169,7 +169,7 @@ static void test_active(int fd, struct drm_xe_query_engine_info *engine)
 
 static void test_shared(int xe)
 {
-	struct drm_xe_query_mem_region *memregion;
+	struct drm_xe_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(xe), region;
 	struct drm_client_fdinfo info = { };
 	struct drm_gem_flink flink;
@@ -214,7 +214,7 @@ static void test_shared(int xe)
 
 static void test_total_resident(int xe)
 {
-	struct drm_xe_query_mem_region *memregion;
+	struct drm_xe_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(xe), region;
 	struct drm_client_fdinfo info = { };
 	uint32_t vm;
@@ -262,7 +262,7 @@ static void test_total_resident(int xe)
 
 static void basic(int xe)
 {
-	struct drm_xe_query_mem_region *memregion;
+	struct drm_xe_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(xe), region;
 	struct drm_client_fdinfo info = { };
 	unsigned int ret;
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index d78ca31a8..6cd4175ae 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -373,10 +373,10 @@ NULL));
  */
 static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 {
-	struct drm_xe_query_mem_regions *mem_regions;
+	struct drm_xe_query_mem_region *mem_regions;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
+		.query = DRM_XE_DEVICE_QUERY_MEM_REGION,
 		.size = 0,
 		.data = 0,
 	};
@@ -400,10 +400,10 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 	query.data = to_user_pointer(mem_regions);
 	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	for (i = 0; i < mem_regions->num_regions; i++) {
-		if (mem_regions->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
-			vram_used_mb +=  (mem_regions->regions[i].used / (1024 * 1024));
-			vram_total_mb += (mem_regions->regions[i].total_size / (1024 * 1024));
+	for (i = 0; i < mem_regions->num_mem_regions; i++) {
+		if (mem_regions->mem_regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
+			vram_used_mb +=  (mem_regions->mem_regions[i].used / (1024 * 1024));
+			vram_total_mb += (mem_regions->mem_regions[i].total_size / (1024 * 1024));
 		}
 	}
 
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 48042337a..562ee2736 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -200,10 +200,10 @@ test_query_engines(int fd)
 static void
 test_query_mem_regions(int fd)
 {
-	struct drm_xe_query_mem_regions *mem_regions;
+	struct drm_xe_query_mem_region *mem_regions;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
+		.query = DRM_XE_DEVICE_QUERY_MEM_REGION,
 		.size = 0,
 		.data = 0,
 	};
@@ -218,34 +218,34 @@ test_query_mem_regions(int fd)
 	query.data = to_user_pointer(mem_regions);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	for (i = 0; i < mem_regions->num_regions; i++) {
+	for (i = 0; i < mem_regions->num_mem_regions; i++) {
 		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
-			mem_regions->regions[i].mem_class ==
+			mem_regions->mem_regions[i].mem_class ==
 			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
-			:mem_regions->regions[i].mem_class ==
+			:mem_regions->mem_regions[i].mem_class ==
 			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
-			mem_regions->regions[i].used,
-			mem_regions->regions[i].total_size
+			mem_regions->mem_regions[i].used,
+			mem_regions->mem_regions[i].total_size
 		);
 		igt_info("min_page_size=0x%x\n",
-		       mem_regions->regions[i].min_page_size);
+		       mem_regions->mem_regions[i].min_page_size);
 
 		igt_info("visible size=%lluMiB\n",
-			 mem_regions->regions[i].cpu_visible_size >> 20);
+			 mem_regions->mem_regions[i].cpu_visible_size >> 20);
 		igt_info("visible used=%lluMiB\n",
-			 mem_regions->regions[i].cpu_visible_used >> 20);
-
-		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_size,
-				   mem_regions->regions[i].total_size);
-		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
-				   mem_regions->regions[i].cpu_visible_size);
-		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
-				   mem_regions->regions[i].used);
-		igt_assert_lte_u64(mem_regions->regions[i].used,
-				   mem_regions->regions[i].total_size);
-		igt_assert_lte_u64(mem_regions->regions[i].used -
-				   mem_regions->regions[i].cpu_visible_used,
-				   mem_regions->regions[i].total_size);
+			 mem_regions->mem_regions[i].cpu_visible_used >> 20);
+
+		igt_assert_lte_u64(mem_regions->mem_regions[i].cpu_visible_size,
+				   mem_regions->mem_regions[i].total_size);
+		igt_assert_lte_u64(mem_regions->mem_regions[i].cpu_visible_used,
+				   mem_regions->mem_regions[i].cpu_visible_size);
+		igt_assert_lte_u64(mem_regions->mem_regions[i].cpu_visible_used,
+				   mem_regions->mem_regions[i].used);
+		igt_assert_lte_u64(mem_regions->mem_regions[i].used,
+				   mem_regions->mem_regions[i].total_size);
+		igt_assert_lte_u64(mem_regions->mem_regions[i].used -
+				   mem_regions->mem_regions[i].cpu_visible_used,
+				   mem_regions->mem_regions[i].total_size);
 	}
 	dump_hex_debug(mem_regions, query.size);
 	free(mem_regions);
diff --git a/tests/kms_plane.c b/tests/kms_plane.c
index 24df7e8ca..419d4e9be 100644
--- a/tests/kms_plane.c
+++ b/tests/kms_plane.c
@@ -458,7 +458,7 @@ test_plane_panning(data_t *data, enum pipe pipe)
 	}
 
 	if (is_xe_device(data->drm_fd)) {
-		struct drm_xe_query_mem_region *memregion;
+		struct drm_xe_mem_region *memregion;
 		uint64_t memreg = all_memory_regions(data->drm_fd), region;
 
 		xe_for_each_mem_region(data->drm_fd, memreg, region) {
-- 
2.34.1

* [igt-dev] [PATCH v1 11/13] drm-uapi/xe: Align on a common way to return arrays (gt)
From: Francois Dugast @ 2023-11-16 14:53 UTC
  To: igt-dev

Align with commit ("drm/xe/uapi: Align on a common way to return
arrays (gt)")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h                | 18 ++++----
 lib/xe/xe_query.c                        | 54 ++++++++++++------------
 lib/xe/xe_query.h                        |  8 ++--
 lib/xe/xe_spin.c                         |  6 +--
 tests/intel-ci/xe-fast-feedback.testlist |  2 +-
 tests/intel/xe_query.c                   | 30 ++++++-------
 6 files changed, 59 insertions(+), 59 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 61de386f5..ad4b3f9ae 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -356,14 +356,14 @@ struct drm_xe_query_config {
 };
 
 /**
- * struct drm_xe_query_gt - describe an individual GT.
+ * struct drm_xe_gt - describe an individual GT.
  *
- * To be used with drm_xe_query_gt_list, which will return a list with all the
+ * To be used with drm_xe_query_gt, which will return a list with all the
  * existing GT individual descriptions.
  * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
  * implementing graphics and/or media operations.
  */
-struct drm_xe_query_gt {
+struct drm_xe_gt {
 #define DRM_XE_QUERY_GT_TYPE_MAIN		0
 #define DRM_XE_QUERY_GT_TYPE_MEDIA		1
 	/** @type: GT type: Main or Media */
@@ -390,19 +390,19 @@ struct drm_xe_query_gt {
 };
 
 /**
- * struct drm_xe_query_gt_list - A list with GT description items.
+ * struct drm_xe_query_gt - A list with GT description items.
  *
  * If a query is made with a struct drm_xe_device_query where .query
- * is equal to DRM_XE_DEVICE_QUERY_GT_LIST, then the reply uses struct
- * drm_xe_query_gt_list in .data.
+ * is equal to DRM_XE_DEVICE_QUERY_GT, then the reply uses struct
+ * drm_xe_query_gt in .data.
  */
-struct drm_xe_query_gt_list {
+struct drm_xe_query_gt {
 	/** @num_gt: number of GT items returned in gt_list */
 	__u32 num_gt;
 	/** @pad: MBZ */
 	__u32 pad;
 	/** @gt_list: The GT list returned for this device */
-	struct drm_xe_query_gt gt_list[];
+	struct drm_xe_gt gt_list[];
 };
 
 /**
@@ -495,7 +495,7 @@ struct drm_xe_device_query {
 #define DRM_XE_DEVICE_QUERY_ENGINES		0
 #define DRM_XE_DEVICE_QUERY_MEM_REGION		1
 #define DRM_XE_DEVICE_QUERY_CONFIG		2
-#define DRM_XE_DEVICE_QUERY_GT_LIST		3
+#define DRM_XE_DEVICE_QUERY_GT			3
 #define DRM_XE_DEVICE_QUERY_HWCONFIG		4
 #define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY		5
 #define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES	6
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 4aeeee928..01b5cc715 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -39,35 +39,35 @@ static struct drm_xe_query_config *xe_query_config_new(int fd)
 	return config;
 }
 
-static struct drm_xe_query_gt_list *xe_query_gt_list_new(int fd)
+static struct drm_xe_query_gt *xe_query_gt_new(int fd)
 {
-	struct drm_xe_query_gt_list *gt_list;
+	struct drm_xe_query_gt *gt;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_GT_LIST,
+		.query = DRM_XE_DEVICE_QUERY_GT,
 		.size = 0,
 		.data = 0,
 	};
 
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	gt_list = malloc(query.size);
-	igt_assert(gt_list);
+	gt = malloc(query.size);
+	igt_assert(gt);
 
-	query.data = to_user_pointer(gt_list);
+	query.data = to_user_pointer(gt);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	return gt_list;
+	return gt;
 }
 
-static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
+static uint64_t __memory_regions(const struct drm_xe_query_gt *gt)
 {
 	uint64_t regions = 0;
 	int i;
 
-	for (i = 0; i < gt_list->num_gt; i++)
-		regions |= gt_list->gt_list[i].near_mem_regions |
-			   gt_list->gt_list[i].far_mem_regions;
+	for (i = 0; i < gt->num_gt; i++)
+		regions |= gt->gt_list[i].near_mem_regions |
+			   gt->gt_list[i].far_mem_regions;
 
 	return regions;
 }
@@ -118,7 +118,7 @@ static struct drm_xe_query_mem_region *xe_query_mem_regions_new(int fd)
 	return mem_regions;
 }
 
-static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
+static uint64_t native_region_for_gt(const struct drm_xe_query_gt *gt_list, int gt)
 {
 	uint64_t region;
 
@@ -130,7 +130,7 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
 }
 
 static uint64_t gt_vram_size(const struct drm_xe_query_mem_region *mem_regions,
-			     const struct drm_xe_query_gt_list *gt_list, int gt)
+			     const struct drm_xe_query_gt *gt_list, int gt)
 {
 	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
@@ -141,7 +141,7 @@ static uint64_t gt_vram_size(const struct drm_xe_query_mem_region *mem_regions,
 }
 
 static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_region *mem_regions,
-				     const struct drm_xe_query_gt_list *gt_list, int gt)
+				     const struct drm_xe_query_gt *gt_list, int gt)
 {
 	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
@@ -220,7 +220,7 @@ static struct xe_device *find_in_cache(int fd)
 static void xe_device_free(struct xe_device *xe_dev)
 {
 	free(xe_dev->config);
-	free(xe_dev->gt_list);
+	free(xe_dev->gt);
 	free(xe_dev->engines);
 	free(xe_dev->mem_regions);
 	free(xe_dev->vram_size);
@@ -251,18 +251,18 @@ struct xe_device *xe_device_get(int fd)
 	xe_dev->config = xe_query_config_new(fd);
 	xe_dev->va_bits = xe_dev->config->info[DRM_XE_QUERY_CONFIG_VA_BITS];
 	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
-	xe_dev->gt_list = xe_query_gt_list_new(fd);
-	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
+	xe_dev->gt = xe_query_gt_new(fd);
+	xe_dev->memory_regions = __memory_regions(xe_dev->gt);
 	xe_dev->engines = xe_query_engines(fd, &xe_dev->number_engines);
 	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
-	xe_dev->vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->vram_size));
-	xe_dev->visible_vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->visible_vram_size));
-	for (int gt = 0; gt < xe_dev->gt_list->num_gt; gt++) {
+	xe_dev->vram_size = calloc(xe_dev->gt->num_gt, sizeof(*xe_dev->vram_size));
+	xe_dev->visible_vram_size = calloc(xe_dev->gt->num_gt, sizeof(*xe_dev->visible_vram_size));
+	for (int gt = 0; gt < xe_dev->gt->num_gt; gt++) {
 		xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_regions,
-						     xe_dev->gt_list, gt);
+						     xe_dev->gt, gt);
 		xe_dev->visible_vram_size[gt] =
 			gt_visible_vram_size(xe_dev->mem_regions,
-					     xe_dev->gt_list, gt);
+					     xe_dev->gt, gt);
 	}
 	xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_regions);
 	xe_dev->has_vram = __mem_has_vram(xe_dev->mem_regions);
@@ -355,9 +355,9 @@ _TYPE _NAME(int fd)			\
  * xe_number_gt:
  * @fd: xe device fd
  *
- * Return number of gt_list for xe device fd.
+ * Return number of gt for xe device fd.
  */
-xe_dev_FN(xe_number_gt, gt_list->num_gt, unsigned int);
+xe_dev_FN(xe_number_gt, gt->num_gt, unsigned int);
 
 /**
  * all_memory_regions:
@@ -393,9 +393,9 @@ uint64_t vram_memory(int fd, int gt)
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
-	igt_assert(gt >= 0 && gt < xe_dev->gt_list->num_gt);
+	igt_assert(gt >= 0 && gt < xe_dev->gt->num_gt);
 
-	return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gt_list, gt) : 0;
+	return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gt, gt) : 0;
 }
 
 static uint64_t __xe_visible_vram_size(int fd, int gt)
@@ -599,7 +599,7 @@ uint64_t xe_vram_available(int fd, int gt)
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
 
-	region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
+	region_idx = ffs(native_region_for_gt(xe_dev->gt, gt)) - 1;
 	mem_region = &xe_dev->mem_regions->mem_regions[region_idx];
 
 	if (XE_IS_CLASS_VRAM(mem_region)) {
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 1c76b0caf..b65b442f4 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -26,8 +26,8 @@ struct xe_device {
 	/** @config: xe configuration */
 	struct drm_xe_query_config *config;
 
-	/** @gt_list: gt info */
-	struct drm_xe_query_gt_list *gt_list;
+	/** @gt: gt info */
+	struct drm_xe_query_gt *gt;
 
 	/** @gt_list: bitmask of all memory regions */
 	uint64_t memory_regions;
@@ -41,10 +41,10 @@ struct xe_device {
 	/** @mem_regions: regions memory information and usage */
 	struct drm_xe_query_mem_region *mem_regions;
 
-	/** @vram_size: array of vram sizes for all gt_list */
+	/** @vram_size: array of vram sizes for all gt */
 	uint64_t *vram_size;
 
-	/** @visible_vram_size: array of visible vram sizes for all gt_list */
+	/** @visible_vram_size: array of visible vram sizes for all gt */
 	uint64_t *visible_vram_size;
 
 	/** @default_alignment: safe alignment regardless region location */
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index 91bc6664d..20021f5ee 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -20,10 +20,10 @@ static uint32_t read_timestamp_frequency(int fd, int gt_id)
 {
 	struct xe_device *dev = xe_device_get(fd);
 
-	igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
-	igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
+	igt_assert(dev && dev->gt && dev->gt->num_gt);
+	igt_assert(gt_id >= 0 && gt_id <= dev->gt->num_gt);
 
-	return dev->gt_list->gt_list[gt_id].clock_freq;
+	return dev->gt->gt_list[gt_id].clock_freq;
 }
 
 static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index f11761ac8..530280720 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -116,7 +116,7 @@ igt@xe_prime_self_import@basic-with_fd_dup
 #igt@xe_prime_self_import@basic-llseek-size
 igt@xe_query@query-engines
 igt@xe_query@query-mem-usage
-igt@xe_query@query-gt-list
+igt@xe_query@query-gt
 igt@xe_query@query-config
 igt@xe_query@query-hwconfig
 igt@xe_query@query-topology
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 562ee2736..b79e9ea48 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -252,17 +252,17 @@ test_query_mem_regions(int fd)
 }
 
 /**
- * SUBTEST: query-gt-list
+ * SUBTEST: query-gt
  * Test category: functionality test
  * Description: Display information about available GT components for xe device.
  */
 static void
-test_query_gt_list(int fd)
+test_query_gt(int fd)
 {
-	struct drm_xe_query_gt_list *gt_list;
+	struct drm_xe_query_gt *gt;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_GT_LIST,
+		.query = DRM_XE_DEVICE_QUERY_GT,
 		.size = 0,
 		.data = 0,
 	};
@@ -271,20 +271,20 @@ test_query_gt_list(int fd)
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 	igt_assert_neq(query.size, 0);
 
-	gt_list = malloc(query.size);
-	igt_assert(gt_list);
+	gt = malloc(query.size);
+	igt_assert(gt);
 
-	query.data = to_user_pointer(gt_list);
+	query.data = to_user_pointer(gt);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	for (i = 0; i < gt_list->num_gt; i++) {
-		igt_info("type: %d\n", gt_list->gt_list[i].type);
-		igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
-		igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
+	for (i = 0; i < gt->num_gt; i++) {
+		igt_info("type: %d\n", gt->gt_list[i].type);
+		igt_info("gt_id: %d\n", gt->gt_list[i].gt_id);
+		igt_info("clock_freq: %u\n", gt->gt_list[i].clock_freq);
 		igt_info("near_mem_regions: 0x%016llx\n",
-		       gt_list->gt_list[i].near_mem_regions);
+		       gt->gt_list[i].near_mem_regions);
 		igt_info("far_mem_regions: 0x%016llx\n",
-		       gt_list->gt_list[i].far_mem_regions);
+		       gt->gt_list[i].far_mem_regions);
 	}
 }
 
@@ -671,8 +671,8 @@ igt_main
 	igt_subtest("query-mem-usage")
 		test_query_mem_regions(xe);
 
-	igt_subtest("query-gt-list")
-		test_query_gt_list(xe);
+	igt_subtest("query-gt")
+		test_query_gt(xe);
 
 	igt_subtest("query-config")
 		test_query_config(xe);
-- 
2.34.1

* [igt-dev] [PATCH v1 12/13] drm-uapi/xe: Align on a common way to return arrays (engines)
From: Francois Dugast @ 2023-11-16 14:53 UTC
  To: igt-dev

Align with commit ("drm/xe/uapi: Align on a common way to return
arrays (engines)")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h         | 78 +++++++++++++++++++------------
 lib/xe/xe_query.c                 | 24 ++++------
 lib/xe/xe_query.h                 | 11 ++---
 tests/intel/xe_create.c           |  2 +-
 tests/intel/xe_drm_fdinfo.c       |  2 +-
 tests/intel/xe_exec_store.c       |  2 +-
 tests/intel/xe_noexec_ping_pong.c |  2 +-
 7 files changed, 65 insertions(+), 56 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index ad4b3f9ae..eebf9a08b 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -127,9 +127,9 @@ struct xe_user_extension {
 /**
  * struct drm_xe_engine_class_instance - instance of an engine class
  *
- * It is returned as part of the @drm_xe_query_engine_info, but it also is
- * used as the input of engine selection for both @drm_xe_exec_queue_create
- * and @drm_xe_query_engine_cycles
+ * It is returned as part of the @drm_xe_engine, but it also is used as
+ * the input of engine selection for both @drm_xe_exec_queue_create and
+ * @drm_xe_query_engine_cycles
  *
  */
 struct drm_xe_engine_class_instance {
@@ -153,13 +153,9 @@ struct drm_xe_engine_class_instance {
 };
 
 /**
- * struct drm_xe_query_engine_info - describe hardware engine
- *
- * If a query is made with a struct @drm_xe_device_query where .query
- * is equal to %DRM_XE_DEVICE_QUERY_ENGINES, then the reply uses an array of
- * struct @drm_xe_query_engine_info in .data.
+ * struct drm_xe_engine - describe hardware engine
  */
-struct drm_xe_query_engine_info {
+struct drm_xe_engine {
 	/** @instance: The @drm_xe_engine_class_instance */
 	struct drm_xe_engine_class_instance instance;
 
@@ -167,6 +163,22 @@ struct drm_xe_query_engine_info {
 	__u64 reserved[5];
 };
 
+/**
+ * struct drm_xe_query_engine - describe engines
+ *
+ * If a query is made with a struct @drm_xe_device_query where .query
+ * is equal to %DRM_XE_DEVICE_QUERY_ENGINES, then the reply uses an array of
+ * struct @drm_xe_query_engine in .data.
+ */
+struct drm_xe_query_engine {
+	/** @num_engines: number of engines returned in @engines */
+	__u32 num_engines;
+	/** @pad: MBZ */
+	__u32 pad;
+	/** @engines: The returned engines for this device */
+	struct drm_xe_engine engines[];
+};
+
 /**
  * enum drm_xe_memory_class - Supported memory classes.
  */
@@ -465,28 +477,32 @@ struct drm_xe_query_topology_mask {
  *
  * .. code-block:: C
  *
- *	struct drm_xe_engine_class_instance *hwe;
- *	struct drm_xe_device_query query = {
- *		.extensions = 0,
- *		.query = DRM_XE_DEVICE_QUERY_ENGINES,
- *		.size = 0,
- *		.data = 0,
- *	};
- *	ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
- *	hwe = malloc(query.size);
- *	query.data = (uintptr_t)hwe;
- *	ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
- *	int num_engines = query.size / sizeof(*hwe);
- *	for (int i = 0; i < num_engines; i++) {
- *		printf("Engine %d: %s\n", i,
- *			hwe[i].engine_class == DRM_XE_ENGINE_CLASS_RENDER ? "RENDER":
- *			hwe[i].engine_class == DRM_XE_ENGINE_CLASS_COPY ? "COPY":
- *			hwe[i].engine_class == DRM_XE_ENGINE_CLASS_VIDEO_DECODE ? "VIDEO_DECODE":
- *			hwe[i].engine_class == DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE ? "VIDEO_ENHANCE":
- *			hwe[i].engine_class == DRM_XE_ENGINE_CLASS_COMPUTE ? "COMPUTE":
- *			"UNKNOWN");
- *	}
- *	free(hwe);
+ *     struct drm_xe_query_engine *engines;
+ *     struct drm_xe_device_query query = {
+ *         .extensions = 0,
+ *         .query = DRM_XE_DEVICE_QUERY_ENGINES,
+ *         .size = 0,
+ *         .data = 0,
+ *     };
+ *     ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+ *     engines = malloc(query.size);
+ *     query.data = (uintptr_t)engines;
+ *     ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+ *     for (int i = 0; i < engines->num_engines; i++) {
+ *         printf("Engine %d: %s\n", i,
+ *             engines->engines[i].instance.engine_class ==
+ *                 DRM_XE_ENGINE_CLASS_RENDER ? "RENDER":
+ *             engines->engines[i].instance.engine_class ==
+ *                 DRM_XE_ENGINE_CLASS_COPY ? "COPY":
+ *             engines->engines[i].instance.engine_class ==
+ *                 DRM_XE_ENGINE_CLASS_VIDEO_DECODE ? "VIDEO_DECODE":
+ *             engines->engines[i].instance.engine_class ==
+ *                 DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE ? "VIDEO_ENHANCE":
+ *             engines->engines[i].instance.engine_class ==
+ *                 DRM_XE_ENGINE_CLASS_COMPUTE ? "COMPUTE":
+ *             "UNKNOWN");
+ *     }
+ *     free(engines);
  */
 struct drm_xe_device_query {
 	/** @extensions: Pointer to the first extension struct, if any */
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 01b5cc715..adbbdaac4 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -72,10 +72,9 @@ static uint64_t __memory_regions(const struct drm_xe_query_gt *gt)
 	return regions;
 }
 
-static struct drm_xe_query_engine_info *
-xe_query_engines(int fd, unsigned int *num_engines)
+static struct drm_xe_query_engine *xe_query_engines(int fd)
 {
-	struct drm_xe_query_engine_info *engines;
+	struct drm_xe_query_engine *engines;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
 		.query = DRM_XE_DEVICE_QUERY_ENGINES,
@@ -83,7 +82,6 @@ xe_query_engines(int fd, unsigned int *num_engines)
 		.data = 0,
 	};
 
-	igt_assert(num_engines);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
 	engines = malloc(query.size);
@@ -92,8 +90,6 @@ xe_query_engines(int fd, unsigned int *num_engines)
 	query.data = to_user_pointer(engines);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	*num_engines = query.size / sizeof(*engines);
-
 	return engines;
 }
 
@@ -253,7 +249,7 @@ struct xe_device *xe_device_get(int fd)
 	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
 	xe_dev->gt = xe_query_gt_new(fd);
 	xe_dev->memory_regions = __memory_regions(xe_dev->gt);
-	xe_dev->engines = xe_query_engines(fd, &xe_dev->number_engines);
+	xe_dev->engines = xe_query_engines(fd);
 	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
 	xe_dev->vram_size = calloc(xe_dev->gt->num_gt, sizeof(*xe_dev->vram_size));
 	xe_dev->visible_vram_size = calloc(xe_dev->gt->num_gt, sizeof(*xe_dev->visible_vram_size));
@@ -427,7 +423,7 @@ uint64_t vram_if_possible(int fd, int gt)
  *
  * Returns engines array of xe device @fd.
  */
-xe_dev_FN(xe_engines, engines, struct drm_xe_query_engine_info *);
+xe_dev_FN(xe_engines, engines->engines, struct drm_xe_engine *);
 
 /**
  * xe_engine:
@@ -436,15 +432,15 @@ xe_dev_FN(xe_engines, engines, struct drm_xe_query_engine_info *);
  *
  * Returns engine info of xe device @fd and @idx.
  */
-struct drm_xe_query_engine_info *xe_engine(int fd, int idx)
+struct drm_xe_engine *xe_engine(int fd, int idx)
 {
 	struct xe_device *xe_dev;
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
-	igt_assert(idx >= 0 && idx < xe_dev->number_engines);
+	igt_assert(idx >= 0 && idx < xe_dev->engines->num_engines);
 
-	return &xe_dev->engines[idx];
+	return &xe_dev->engines->engines[idx];
 }
 
 /**
@@ -534,7 +530,7 @@ xe_dev_FN(xe_config, config, struct drm_xe_query_config *);
  *
  * Returns number of hw engines of xe device @fd.
  */
-xe_dev_FN(xe_number_engines, number_engines, unsigned int);
+xe_dev_FN(xe_number_engines, engines->num_engines, unsigned int);
 
 /**
  * xe_has_vram:
@@ -657,8 +653,8 @@ bool xe_has_engine_class(int fd, uint16_t engine_class)
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
 
-	for (int i = 0; i < xe_dev->number_engines; i++)
-		if (xe_dev->engines[i].instance.engine_class == engine_class)
+	for (int i = 0; i < xe_dev->engines->num_engines; i++)
+		if (xe_dev->engines->engines[i].instance.engine_class == engine_class)
 			return true;
 
 	return false;
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index b65b442f4..45a481083 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -32,11 +32,8 @@ struct xe_device {
 	/** @gt_list: bitmask of all memory regions */
 	uint64_t memory_regions;
 
-	/** @engines: array of hardware engines */
-	struct drm_xe_query_engine_info *engines;
-
-	/** @number_engines: length of hardware engines array */
-	unsigned int number_engines;
+	/** @engines: hardware engines */
+	struct drm_xe_query_engine *engines;
 
 	/** @mem_regions: regions memory information and usage */
 	struct drm_xe_query_mem_region *mem_regions;
@@ -81,8 +78,8 @@ uint64_t all_memory_regions(int fd);
 uint64_t system_memory(int fd);
 uint64_t vram_memory(int fd, int gt);
 uint64_t vram_if_possible(int fd, int gt);
-struct drm_xe_query_engine_info *xe_engines(int fd);
-struct drm_xe_query_engine_info *xe_engine(int fd, int idx);
+struct drm_xe_engine *xe_engines(int fd);
+struct drm_xe_engine *xe_engine(int fd, int idx);
 struct drm_xe_mem_region *xe_mem_region(int fd, uint64_t region);
 const char *xe_region_name(uint64_t region);
 uint16_t xe_region_class(int fd, uint64_t region);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index 19582f94d..307868d0b 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -148,7 +148,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 	igt_nsec_elapsed(&tv);
 
 	igt_fork(n, nproc) {
-		struct drm_xe_query_engine_info *engine;
+		struct drm_xe_engine *engine;
 		uint32_t exec_queue, exec_queues[exec_queues_per_process];
 		int idx, err, i;
 
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index fc39649ea..ec457b1c1 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -40,7 +40,7 @@ IGT_TEST_DESCRIPTION("Read and verify drm client memory consumption using fdinfo
 #define BO_SIZE (65536)
 
 /* Subtests */
-static void test_active(int fd, struct drm_xe_query_engine_info *engine)
+static void test_active(int fd, struct drm_xe_engine *engine)
 {
 	struct drm_xe_mem_region *memregion;
 	uint64_t memreg = all_memory_regions(fd), region;
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 48e843af5..2927214e3 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -63,7 +63,7 @@ static void store(int fd)
 		.syncs = to_user_pointer(&sync),
 	};
 	struct data *data;
-	struct drm_xe_query_engine_info *engine;
+	struct drm_xe_engine *engine;
 	uint32_t vm;
 	uint32_t exec_queue;
 	uint32_t syncobj;
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 585af413d..9659272b5 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -43,7 +43,7 @@
   *	there is worked queued on one of the VM's compute exec_queues.
  */
 
-static void test_ping_pong(int fd, struct drm_xe_query_engine_info *engine)
+static void test_ping_pong(int fd, struct drm_xe_engine *engine)
 {
 	size_t vram_size = xe_vram_size(fd, 0);
 	size_t align = xe_get_default_alignment(fd);
-- 
2.34.1

* [igt-dev] [PATCH v1 13/13] drm-uapi/xe: Add Tile ID information to the GT info query
From: Francois Dugast @ 2023-11-16 14:53 UTC
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe/uapi: Add Tile ID information to the GT info query")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index eebf9a08b..6aa00ad6e 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -380,6 +380,8 @@ struct drm_xe_gt {
 #define DRM_XE_QUERY_GT_TYPE_MEDIA		1
 	/** @type: GT type: Main or Media */
 	__u16 type;
+	/** @tile_id: Tile ID where this GT lives (Information only) */
+	__u16 tile_id;
 	/** @gt_id: Unique ID of this GT within the PCI Device */
 	__u16 gt_id;
 	/** @clock_freq: A clock frequency for timestamp */
-- 
2.34.1

* [igt-dev] ✗ Fi.CI.BUILD: failure for uAPI Alignment - Cleanup and future proof
From: Patchwork @ 2023-11-16 15:20 UTC
  To: Francois Dugast; +Cc: igt-dev

== Series Details ==

Series: uAPI Alignment - Cleanup and future proof
URL   : https://patchwork.freedesktop.org/series/126537/
State : failure

== Summary ==

Applying: drm-uapi/xe: Extend drm_xe_vm_bind_op
Applying: xe_ioctl: Converge bo_create to the most used version
Patch failed at 0002 xe_ioctl: Converge bo_create to the most used version
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


* [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - Cleanup and future proof (rev2)
From: Patchwork @ 2023-11-17 18:12 UTC
  To: Francois Dugast; +Cc: igt-dev

== Series Details ==

Series: uAPI Alignment - Cleanup and future proof (rev2)
URL   : https://patchwork.freedesktop.org/series/126537/
State : success

== Summary ==

CI Bug Log - changes from IGT_7594 -> IGTPW_10212
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/index.html

Participating hosts (38 -> 38)
------------------------------

  Additional (3): fi-kbl-soraka bat-dg2-8 bat-mtlp-8 
  Missing    (3): fi-hsw-4770 bat-adls-5 fi-snb-2520m 

Known issues
------------

  Here are the changes found in IGTPW_10212 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@debugfs_test@basic-hwmon:
    - bat-mtlp-8:         NOTRUN -> [SKIP][1] ([i915#9318])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@debugfs_test@basic-hwmon.html

  * igt@gem_exec_suspend@basic-s0@smem:
    - bat-dg2-8:          NOTRUN -> [INCOMPLETE][2] ([i915#8797] / [i915#9275])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@gem_exec_suspend@basic-s0@smem.html

  * igt@gem_huc_copy@huc-copy:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#2190])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/fi-kbl-soraka/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@basic:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][4] ([fdo#109271] / [i915#4613]) +3 other tests skip
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/fi-kbl-soraka/igt@gem_lmem_swapping@basic.html

  * igt@gem_lmem_swapping@verify-random:
    - bat-mtlp-8:         NOTRUN -> [SKIP][5] ([i915#4613]) +3 other tests skip
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@gem_lmem_swapping@verify-random.html

  * igt@gem_mmap@basic:
    - bat-mtlp-8:         NOTRUN -> [SKIP][6] ([i915#4083])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@gem_mmap@basic.html
    - bat-dg2-8:          NOTRUN -> [SKIP][7] ([i915#4083])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@gem_mmap@basic.html

  * igt@gem_mmap_gtt@basic:
    - bat-mtlp-8:         NOTRUN -> [SKIP][8] ([i915#4077]) +2 other tests skip
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@gem_mmap_gtt@basic.html
    - bat-dg2-8:          NOTRUN -> [SKIP][9] ([i915#4077]) +2 other tests skip
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@gem_mmap_gtt@basic.html

  * igt@gem_render_tiled_blits@basic:
    - bat-mtlp-8:         NOTRUN -> [SKIP][10] ([i915#4079]) +1 other test skip
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@gem_render_tiled_blits@basic.html

  * igt@gem_tiled_pread_basic:
    - bat-dg2-8:          NOTRUN -> [SKIP][11] ([i915#4079]) +1 other test skip
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@gem_tiled_pread_basic.html

  * igt@i915_pm_rps@basic-api:
    - bat-mtlp-8:         NOTRUN -> [SKIP][12] ([i915#6621])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@i915_pm_rps@basic-api.html
    - bat-dg2-8:          NOTRUN -> [SKIP][13] ([i915#6621])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@i915_pm_rps@basic-api.html

  * igt@i915_selftest@live@gt_pm:
    - fi-kbl-soraka:      NOTRUN -> [DMESG-FAIL][14] ([i915#1886])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/fi-kbl-soraka/igt@i915_selftest@live@gt_pm.html

  * igt@i915_suspend@basic-s3-without-i915:
    - bat-mtlp-8:         NOTRUN -> [SKIP][15] ([i915#6645])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@i915_suspend@basic-s3-without-i915.html
    - bat-dg2-8:          NOTRUN -> [SKIP][16] ([i915#6645])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@i915_suspend@basic-s3-without-i915.html

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - bat-mtlp-8:         NOTRUN -> [SKIP][17] ([i915#5190])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
    - bat-dg2-8:          NOTRUN -> [SKIP][18] ([i915#5190])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_addfb_basic@basic-y-tiled-legacy:
    - bat-mtlp-8:         NOTRUN -> [SKIP][19] ([i915#4212]) +8 other tests skip
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_addfb_basic@basic-y-tiled-legacy.html
    - bat-dg2-8:          NOTRUN -> [SKIP][20] ([i915#4215] / [i915#5190])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_addfb_basic@basic-y-tiled-legacy.html

  * igt@kms_addfb_basic@framebuffer-vs-set-tiling:
    - bat-dg2-8:          NOTRUN -> [SKIP][21] ([i915#4212]) +6 other tests skip
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_addfb_basic@framebuffer-vs-set-tiling.html

  * igt@kms_addfb_basic@tile-pitch-mismatch:
    - bat-dg2-8:          NOTRUN -> [SKIP][22] ([i915#4212] / [i915#5608])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_addfb_basic@tile-pitch-mismatch.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
    - bat-mtlp-8:         NOTRUN -> [SKIP][23] ([i915#4213]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html
    - bat-dg2-8:          NOTRUN -> [SKIP][24] ([i915#4103] / [i915#4213] / [i915#5608]) +1 other test skip
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html

  * igt@kms_dsc@dsc-basic:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][25] ([fdo#109271]) +9 other tests skip
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/fi-kbl-soraka/igt@kms_dsc@dsc-basic.html
    - bat-mtlp-8:         NOTRUN -> [SKIP][26] ([i915#3555] / [i915#3840] / [i915#9159])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_dsc@dsc-basic.html

  * igt@kms_force_connector_basic@force-load-detect:
    - bat-mtlp-8:         NOTRUN -> [SKIP][27] ([fdo#109285])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_force_connector_basic@force-load-detect.html
    - bat-dg2-8:          NOTRUN -> [SKIP][28] ([fdo#109285])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_force_connector_basic@prune-stale-modes:
    - bat-mtlp-8:         NOTRUN -> [SKIP][29] ([i915#5274])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_force_connector_basic@prune-stale-modes.html
    - bat-dg2-8:          NOTRUN -> [SKIP][30] ([i915#5274])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_force_connector_basic@prune-stale-modes.html

  * igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-d-edp-1:
    - bat-rplp-1:         [PASS][31] -> [ABORT][32] ([i915#8668])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/bat-rplp-1/igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-d-edp-1.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-rplp-1/igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-d-edp-1.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - bat-mtlp-8:         NOTRUN -> [SKIP][33] ([i915#3555] / [i915#8809])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@kms_setmode@basic-clone-single-crtc.html
    - bat-dg2-8:          NOTRUN -> [SKIP][34] ([i915#3555] / [i915#4098])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@prime_vgem@basic-fence-flip:
    - bat-dg2-8:          NOTRUN -> [SKIP][35] ([i915#3708])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@prime_vgem@basic-fence-flip.html

  * igt@prime_vgem@basic-fence-mmap:
    - bat-dg2-8:          NOTRUN -> [SKIP][36] ([i915#3708] / [i915#4077]) +1 other test skip
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@prime_vgem@basic-fence-mmap.html
    - bat-mtlp-8:         NOTRUN -> [SKIP][37] ([i915#3708] / [i915#4077]) +1 other test skip
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@prime_vgem@basic-fence-mmap.html

  * igt@prime_vgem@basic-fence-read:
    - bat-mtlp-8:         NOTRUN -> [SKIP][38] ([i915#3708]) +2 other tests skip
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-mtlp-8/igt@prime_vgem@basic-fence-read.html

  * igt@prime_vgem@basic-write:
    - bat-dg2-8:          NOTRUN -> [SKIP][39] ([i915#3291] / [i915#3708]) +2 other tests skip
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/bat-dg2-8/igt@prime_vgem@basic-write.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [i915#1886]: https://gitlab.freedesktop.org/drm/intel/issues/1886
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4098]: https://gitlab.freedesktop.org/drm/intel/issues/4098
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4215]: https://gitlab.freedesktop.org/drm/intel/issues/4215
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5608]: https://gitlab.freedesktop.org/drm/intel/issues/5608
  [i915#6621]: https://gitlab.freedesktop.org/drm/intel/issues/6621
  [i915#6645]: https://gitlab.freedesktop.org/drm/intel/issues/6645
  [i915#8668]: https://gitlab.freedesktop.org/drm/intel/issues/8668
  [i915#8797]: https://gitlab.freedesktop.org/drm/intel/issues/8797
  [i915#8809]: https://gitlab.freedesktop.org/drm/intel/issues/8809
  [i915#9159]: https://gitlab.freedesktop.org/drm/intel/issues/9159
  [i915#9275]: https://gitlab.freedesktop.org/drm/intel/issues/9275
  [i915#9318]: https://gitlab.freedesktop.org/drm/intel/issues/9318
  [i915#9648]: https://gitlab.freedesktop.org/drm/intel/issues/9648
  [i915#9673]: https://gitlab.freedesktop.org/drm/intel/issues/9673


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7594 -> IGTPW_10212

  CI-20190529: 20190529
  CI_DRM_13887: b63ef5eb1dc5416d9791b25b968fbbe2421ac2b8 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_10212: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/index.html
  IGT_7594: 8478eefdaa3eef02b4370339ef0d1970d44a67a2 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git


Testlist changes
----------------

+igt@kms_feature_discovery@display
+igt@xe_query@query-gt
-igt@kms_feature_discovery@display-1x
-igt@xe_query@query-gt-list

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/index.html



* Re: [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size Francois Dugast
@ 2023-11-17 18:44   ` Kamil Konieczny
  2023-11-28 20:31     ` Francois Dugast
  0 siblings, 1 reply; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-17 18:44 UTC (permalink / raw)
  To: igt-dev

Hi Francois,
On 2023-11-16 at 14:53:44 +0000, Francois Dugast wrote:
> Align with kernel commit ("drm/xe/uapi: Reject bo creation of unaligned size")
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/drm-uapi/xe_drm.h          | 17 +++++++++--------
>  tests/intel/xe_mmap.c              | 22 ++++++++++++----------
>  tests/intel/xe_prime_self_import.c | 26 +++++++++++++++++++++++++-

There are now compilation warnings:

[1315/1777] Compiling C object tests/xe_prime_self_import.p/intel_xe_prime_self_import.c.o
../tests/intel/xe_prime_self_import.c: In function 'check_bo':
../tests/intel/xe_prime_self_import.c:73:16: warning: declaration of 'bo_size' shadows a global declaration [-Wshadow]
   73 |         size_t bo_size = get_min_bo_size(fd1, fd2);
      |                ^~~~~~~

so please delete the global variable bo_size and also remove it from the fixture.
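For illustration only (this is not the IGT code itself): a reduced sketch of the -Wshadow problem and the requested fix. Here default_alignment() is a made-up stand-in for xe_get_default_alignment(fd); once the file-scope bo_size is gone, each function derives its own local size and nothing is shadowed:

```c
#include <assert.h>
#include <stddef.h>

static size_t default_alignment(void)
{
	return 4096;	/* stand-in for xe_get_default_alignment(fd) */
}

/* After the fix: no global bo_size, the helper computes the size. */
static size_t get_min_bo_size(void)
{
	return 4 * default_alignment();
}

static size_t check_bo_size(void)
{
	size_t bo_size = get_min_bo_size();	/* local only, no shadowing */

	return bo_size;
}
```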

Regards,
Kamil

>  tests/intel/xe_vm.c                | 13 ++++++-------
>  4 files changed, 52 insertions(+), 26 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 7aff66830..aa66b62e2 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -206,11 +206,13 @@ struct drm_xe_query_mem_region {
>  	 *
>  	 * When the kernel allocates memory for this region, the
>  	 * underlying pages will be at least @min_page_size in size.
> -	 *
> -	 * Important note: When userspace allocates a GTT address which
> -	 * can point to memory allocated from this region, it must also
> -	 * respect this minimum alignment. This is enforced by the
> -	 * kernel.
> +	 * Buffer objects with an allowable placement in this region must be
> +	 * created with a size aligned to this value.
> +	 * GPU virtual address mappings of (parts of) buffer objects that
> +	 * may be placed in this region must also have their GPU virtual
> +	 * address and range aligned to this value.
> +	 * Affected IOCTLS will return %-EINVAL if alignment restrictions are
> +	 * not met.
>  	 */
>  	__u32 min_page_size;
>  	/**
> @@ -515,9 +517,8 @@ struct drm_xe_gem_create {
>  	__u64 extensions;
>  
>  	/**
> -	 * @size: Requested size for the object
> -	 *
> -	 * The (page-aligned) allocated size for the object will be returned.
> +	 * @size: Size of the object to be created, must match region
> +	 * (system or vram) minimum alignment (&min_page_size).
>  	 */
>  	__u64 size;
>  
> diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
> index 965644e22..d6c8d5114 100644
> --- a/tests/intel/xe_mmap.c
> +++ b/tests/intel/xe_mmap.c
> @@ -47,17 +47,18 @@
>  static void
>  test_mmap(int fd, uint32_t placement, uint32_t flags)
>  {
> +	size_t bo_size = xe_get_default_alignment(fd);
>  	uint32_t bo;
>  	void *map;
>  
>  	igt_require_f(placement, "Device doesn't support such memory region\n");
>  
> -	bo = xe_bo_create(fd, 0, 4096, placement, flags);
> +	bo = xe_bo_create(fd, 0, bo_size, placement, flags);
>  
> -	map = xe_bo_map(fd, bo, 4096);
> +	map = xe_bo_map(fd, bo, bo_size);
>  	strcpy(map, "Write some data to the BO!");
>  
> -	munmap(map, 4096);
> +	munmap(map, bo_size);
>  
>  	gem_close(fd, bo);
>  }
> @@ -156,13 +157,14 @@ static void trap_sigbus(uint32_t *ptr)
>   */
>  static void test_small_bar(int fd)
>  {
> +	size_t page_size = xe_get_default_alignment(fd);
>  	uint32_t visible_size = xe_visible_vram_size(fd, 0);
>  	uint32_t bo;
>  	uint64_t mmo;
>  	uint32_t *map;
>  
>  	/* 2BIG invalid case */
> -	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + 4096,
> +	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + page_size,
>  				      vram_memory(fd, 0),
>  				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM,
>  				      &bo),
> @@ -172,12 +174,12 @@ static void test_small_bar(int fd)
>  	bo = xe_bo_create(fd, 0, visible_size / 4, vram_memory(fd, 0),
>  			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	mmo = xe_bo_mmap_offset(fd, bo);
> -	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
> +	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
>  	igt_assert(map != MAP_FAILED);
>  
>  	map[0] = 0xdeadbeaf;
>  
> -	munmap(map, 4096);
> +	munmap(map, page_size);
>  	gem_close(fd, bo);
>  
>  	/* Normal operation with system memory spilling */
> @@ -186,18 +188,18 @@ static void test_small_bar(int fd)
>  			  system_memory(fd),
>  			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	mmo = xe_bo_mmap_offset(fd, bo);
> -	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
> +	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
>  	igt_assert(map != MAP_FAILED);
>  
>  	map[0] = 0xdeadbeaf;
>  
> -	munmap(map, 4096);
> +	munmap(map, page_size);
>  	gem_close(fd, bo);
>  
>  	/* Bogus operation with SIGBUS */
> -	bo = xe_bo_create(fd, 0, visible_size + 4096, vram_memory(fd, 0), 0);
> +	bo = xe_bo_create(fd, 0, visible_size + page_size, vram_memory(fd, 0), 0);
>  	mmo = xe_bo_mmap_offset(fd, bo);
> -	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
> +	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
>  	igt_assert(map != MAP_FAILED);
>  
>  	trap_sigbus(map);
> diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
> index 9a263d326..504e6a13d 100644
> --- a/tests/intel/xe_prime_self_import.c
> +++ b/tests/intel/xe_prime_self_import.c
> @@ -61,13 +61,19 @@ static int g_time_out = 5;
>  static pthread_barrier_t g_barrier;
>  static size_t bo_size;
>  
> +static size_t get_min_bo_size(int fd1, int fd2)
> +{
> +	return 4 * max(xe_get_default_alignment(fd1),
> +		       xe_get_default_alignment(fd2));
> +}
> +
>  static void
>  check_bo(int fd1, uint32_t handle1, int fd2, uint32_t handle2)
>  {
> +	size_t bo_size = get_min_bo_size(fd1, fd2);
>  	char *ptr1, *ptr2;
>  	int i;
>  
> -
>  	ptr1 = xe_bo_map(fd1, handle1, bo_size);
>  	ptr2 = xe_bo_map(fd2, handle2, bo_size);
>  
> @@ -97,6 +103,7 @@ check_bo(int fd1, uint32_t handle1, int fd2, uint32_t handle2)
>  static void test_with_fd_dup(void)
>  {
>  	int fd1, fd2;
> +	size_t bo_size;
>  	uint32_t handle, handle_import;
>  	int dma_buf_fd1, dma_buf_fd2;
>  
> @@ -105,6 +112,8 @@ static void test_with_fd_dup(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> +	bo_size = get_min_bo_size(fd1, fd2);
> +
>  	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
>  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
> @@ -131,6 +140,7 @@ static void test_with_fd_dup(void)
>  static void test_with_two_bos(void)
>  {
>  	int fd1, fd2;
> +	size_t bo_size;
>  	uint32_t handle1, handle2, handle_import;
>  	int dma_buf_fd;
>  
> @@ -139,6 +149,8 @@ static void test_with_two_bos(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> +	bo_size = get_min_bo_size(fd1, fd2);
> +
>  	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
>  			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
> @@ -171,12 +183,15 @@ static void test_with_two_bos(void)
>  static void test_with_one_bo_two_files(void)
>  {
>  	int fd1, fd2;
> +	size_t bo_size;
>  	uint32_t handle_import, handle_open, handle_orig, flink_name;
>  	int dma_buf_fd1, dma_buf_fd2;
>  
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> +	bo_size = get_min_bo_size(fd1, fd2);
> +
>  	handle_orig = xe_bo_create(fd1, 0, bo_size,
>  				   vram_if_possible(fd1, 0),
>  				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> @@ -205,12 +220,15 @@ static void test_with_one_bo_two_files(void)
>  static void test_with_one_bo(void)
>  {
>  	int fd1, fd2;
> +	size_t bo_size;
>  	uint32_t handle, handle_import1, handle_import2, handle_selfimport;
>  	int dma_buf_fd;
>  
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> +	bo_size = get_min_bo_size(fd1, fd2);
> +
>  	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
>  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
> @@ -279,6 +297,7 @@ static void *thread_fn_reimport_vs_close(void *p)
>  	pthread_t *threads;
>  	int r, i, num_threads;
>  	int fds[2];
> +	size_t bo_size;
>  	int obj_count;
>  	void *status;
>  	uint32_t handle;
> @@ -298,6 +317,8 @@ static void *thread_fn_reimport_vs_close(void *p)
>  
>  	fds[0] = drm_open_driver(DRIVER_XE);
>  
> +	bo_size = xe_get_default_alignment(fds[0]);
> +
>  	handle = xe_bo_create(fds[0], 0, bo_size,
>  			      vram_if_possible(fds[0], 0),
>  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> @@ -336,6 +357,7 @@ static void *thread_fn_export_vs_close(void *p)
>  	struct drm_prime_handle prime_h2f;
>  	struct drm_gem_close close_bo;
>  	int fd = (uintptr_t)p;
> +	size_t bo_size = xe_get_default_alignment(fd);
>  	uint32_t handle;
>  
>  	pthread_barrier_wait(&g_barrier);
> @@ -463,6 +485,7 @@ static void test_llseek_size(void)
>  static void test_llseek_bad(void)
>  {
>  	int fd;
> +	size_t bo_size;
>  	uint32_t handle;
>  	int dma_buf_fd;
>  
> @@ -470,6 +493,7 @@ static void test_llseek_bad(void)
>  
>  	fd = drm_open_driver(DRIVER_XE);
>  
> +	bo_size = 4 * xe_get_default_alignment(fd);
>  	handle = xe_bo_create(fd, 0, bo_size,
>  			      vram_if_possible(fd, 0),
>  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index ea93d7b2e..2c563c64f 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -1310,11 +1310,10 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
>  		t.fd = fd;
>  		t.vm = vm;
> -#define PAGE_SIZE	4096
> -		t.addr = addr + PAGE_SIZE / 2;
> +		t.addr = addr + page_size / 2;
>  		t.eci = eci;
>  		t.exit = &exit;
> -		t.map = map + PAGE_SIZE / 2;
> +		t.map = map + page_size / 2;
>  		t.barrier = &barrier;
>  		pthread_barrier_init(&barrier, NULL, 2);
>  		pthread_create(&t.thread, 0, hammer_thread, &t);
> @@ -1367,8 +1366,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  		igt_assert_eq(data->data, 0xc0ffee);
>  	}
>  	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
> -		memset(map, 0, PAGE_SIZE / 2);
> -		memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
> +		memset(map, 0, page_size / 2);
> +		memset(map + page_size, 0, bo_size - page_size);
>  	} else {
>  		memset(map, 0, bo_size);
>  	}
> @@ -1417,8 +1416,8 @@ try_again_after_invalidate:
>  		}
>  	}
>  	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
> -		memset(map, 0, PAGE_SIZE / 2);
> -		memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
> +		memset(map, 0, page_size / 2);
> +		memset(map + page_size, 0, bo_size - page_size);
>  	} else {
>  		memset(map, 0, bo_size);
>  	}
> -- 
> 2.34.1
> 
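As a side note on the direction of the patch quoted above: hard-coded 4096 sizes are replaced by values derived from the device's minimum page size, so the tests keep working where the minimum alignment is larger (for example 64K VRAM pages). A minimal stand-alone sketch of that idea, where query_min_page_size() is an invented stand-in for xe_get_default_alignment() and 64K is an assumed alignment:

```c
#include <assert.h>
#include <stddef.h>

static size_t query_min_page_size(void)
{
	return 65536;	/* pretend this device needs 64K alignment */
}

/* Round a requested payload size up to the device's minimum alignment,
 * matching the "size must be aligned to min_page_size" uAPI rule. */
static size_t aligned_bo_size(size_t payload)
{
	size_t align = query_min_page_size();

	return (payload + align - 1) & ~(align - 1);
}
```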


* Re: [igt-dev] [PATCH v1 10/13] drm-uapi/xe: Align on a common way to return arrays (memory regions)
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 10/13] drm-uapi/xe: Align on a common way to return arrays (memory regions) Francois Dugast
@ 2023-11-17 18:46   ` Kamil Konieczny
  0 siblings, 0 replies; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-17 18:46 UTC (permalink / raw)
  To: igt-dev

Hi Francois,
On 2023-11-16 at 14:53:45 +0000, Francois Dugast wrote:
> Align with commit ("drm/xe/uapi: Align on a common way to return
> arrays (memory regions)")
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

checkpatch.pl complains about a missing space around ':':

ERROR: spaces required around that ':' (ctx:ExV)
#376: FILE: tests/intel/xe_query.c:225:
+                       :mem_regions->mem_regions[i].mem_class ==
                        ^
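For clarity, a reduced stand-alone example of the conditional-operator spacing checkpatch accepts (the function and its arguments are invented for illustration, not taken from xe_query.c):

```c
#include <stdbool.h>

/* checkpatch wants a space on both sides of the ternary ':', i.e.
 * "cond ? a : b", never "cond ? a :b". */
static bool class_matches(int mem_class, bool want_vram,
			  int vram_class, int sysmem_class)
{
	return want_vram ? mem_class == vram_class
			 : mem_class == sysmem_class;	/* space after ':' */
}
```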

Regards,
Kamil

> ---
>  include/drm-uapi/xe_drm.h   | 22 ++++++++---------
>  lib/xe/xe_query.c           | 48 ++++++++++++++++++-------------------
>  lib/xe/xe_query.h           |  4 ++--
>  lib/xe/xe_util.c            |  6 ++---
>  tests/intel/xe_create.c     |  2 +-
>  tests/intel/xe_drm_fdinfo.c |  8 +++----
>  tests/intel/xe_pm.c         | 12 +++++-----
>  tests/intel/xe_query.c      | 44 +++++++++++++++++-----------------
>  tests/kms_plane.c           |  2 +-
>  9 files changed, 74 insertions(+), 74 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index aa66b62e2..61de386f5 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -182,10 +182,10 @@ enum drm_xe_memory_class {
>  };
>  
>  /**
> - * struct drm_xe_query_mem_region - Describes some region as known to
> + * struct drm_xe_mem_region - Describes some region as known to
>   * the driver.
>   */
> -struct drm_xe_query_mem_region {
> +struct drm_xe_mem_region {
>  	/**
>  	 * @mem_class: The memory class describing this region.
>  	 *
> @@ -315,19 +315,19 @@ struct drm_xe_query_engine_cycles {
>  };
>  
>  /**
> - * struct drm_xe_query_mem_regions - describe memory regions
> + * struct drm_xe_query_mem_region - describe memory regions
>   *
>   * If a query is made with a struct drm_xe_device_query where .query
> - * is equal to DRM_XE_DEVICE_QUERY_MEM_REGIONS, then the reply uses
> - * struct drm_xe_query_mem_regions in .data.
> + * is equal to DRM_XE_DEVICE_QUERY_MEM_REGION, then the reply uses
> + * struct drm_xe_query_mem_region in .data.
>   */
> -struct drm_xe_query_mem_regions {
> -	/** @num_regions: number of memory regions returned in @regions */
> -	__u32 num_regions;
> +struct drm_xe_query_mem_region {
> +	/** @num_mem_regions: number of memory regions returned in @mem_regions */
> +	__u32 num_mem_regions;
>  	/** @pad: MBZ */
>  	__u32 pad;
> -	/** @regions: The returned regions for this device */
> -	struct drm_xe_query_mem_region regions[];
> +	/** @mem_regions: The returned memory regions for this device */
> +	struct drm_xe_mem_region mem_regions[];
>  };
>  
>  /**
> @@ -493,7 +493,7 @@ struct drm_xe_device_query {
>  	__u64 extensions;
>  
>  #define DRM_XE_DEVICE_QUERY_ENGINES		0
> -#define DRM_XE_DEVICE_QUERY_MEM_REGIONS		1
> +#define DRM_XE_DEVICE_QUERY_MEM_REGION		1
>  #define DRM_XE_DEVICE_QUERY_CONFIG		2
>  #define DRM_XE_DEVICE_QUERY_GT_LIST		3
>  #define DRM_XE_DEVICE_QUERY_HWCONFIG		4
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index f9dec1f7a..4aeeee928 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -97,12 +97,12 @@ xe_query_engines(int fd, unsigned int *num_engines)
>  	return engines;
>  }
>  
> -static struct drm_xe_query_mem_regions *xe_query_mem_regions_new(int fd)
> +static struct drm_xe_query_mem_region *xe_query_mem_regions_new(int fd)
>  {
> -	struct drm_xe_query_mem_regions *mem_regions;
> +	struct drm_xe_query_mem_region *mem_regions;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
> -		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
> +		.query = DRM_XE_DEVICE_QUERY_MEM_REGION,
>  		.size = 0,
>  		.data = 0,
>  	};
> @@ -129,44 +129,44 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
>  	return region;
>  }
>  
> -static uint64_t gt_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
> +static uint64_t gt_vram_size(const struct drm_xe_query_mem_region *mem_regions,
>  			     const struct drm_xe_query_gt_list *gt_list, int gt)
>  {
>  	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
>  
> -	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
> -		return mem_regions->regions[region_idx].total_size;
> +	if (XE_IS_CLASS_VRAM(&mem_regions->mem_regions[region_idx]))
> +		return mem_regions->mem_regions[region_idx].total_size;
>  
>  	return 0;
>  }
>  
> -static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
> +static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_region *mem_regions,
>  				     const struct drm_xe_query_gt_list *gt_list, int gt)
>  {
>  	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
>  
> -	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
> -		return mem_regions->regions[region_idx].cpu_visible_size;
> +	if (XE_IS_CLASS_VRAM(&mem_regions->mem_regions[region_idx]))
> +		return mem_regions->mem_regions[region_idx].cpu_visible_size;
>  
>  	return 0;
>  }
>  
> -static bool __mem_has_vram(struct drm_xe_query_mem_regions *mem_regions)
> +static bool __mem_has_vram(struct drm_xe_query_mem_region *mem_regions)
>  {
> -	for (int i = 0; i < mem_regions->num_regions; i++)
> -		if (XE_IS_CLASS_VRAM(&mem_regions->regions[i]))
> +	for (int i = 0; i < mem_regions->num_mem_regions; i++)
> +		if (XE_IS_CLASS_VRAM(&mem_regions->mem_regions[i]))
>  			return true;
>  
>  	return false;
>  }
>  
> -static uint32_t __mem_default_alignment(struct drm_xe_query_mem_regions *mem_regions)
> +static uint32_t __mem_default_alignment(struct drm_xe_query_mem_region *mem_regions)
>  {
>  	uint32_t alignment = XE_DEFAULT_ALIGNMENT;
>  
> -	for (int i = 0; i < mem_regions->num_regions; i++)
> -		if (alignment < mem_regions->regions[i].min_page_size)
> -			alignment = mem_regions->regions[i].min_page_size;
> +	for (int i = 0; i < mem_regions->num_mem_regions; i++)
> +		if (alignment < mem_regions->mem_regions[i].min_page_size)
> +			alignment = mem_regions->mem_regions[i].min_page_size;
>  
>  	return alignment;
>  }
> @@ -454,16 +454,16 @@ struct drm_xe_query_engine_info *xe_engine(int fd, int idx)
>   *
>   * Returns memory region structure for @region mask.
>   */
> -struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region)
> +struct drm_xe_mem_region *xe_mem_region(int fd, uint64_t region)
>  {
>  	struct xe_device *xe_dev;
>  	int region_idx = ffs(region) - 1;
>  
>  	xe_dev = find_in_cache(fd);
>  	igt_assert(xe_dev);
> -	igt_assert(xe_dev->mem_regions->num_regions > region_idx);
> +	igt_assert(xe_dev->mem_regions->num_mem_regions > region_idx);
>  
> -	return &xe_dev->mem_regions->regions[region_idx];
> +	return &xe_dev->mem_regions->mem_regions[region_idx];
>  }
>  
>  /**
> @@ -501,7 +501,7 @@ const char *xe_region_name(uint64_t region)
>   */
>  uint16_t xe_region_class(int fd, uint64_t region)
>  {
> -	struct drm_xe_query_mem_region *memreg;
> +	struct drm_xe_mem_region *memreg;
>  
>  	memreg = xe_mem_region(fd, region);
>  
> @@ -593,21 +593,21 @@ uint64_t xe_vram_available(int fd, int gt)
>  {
>  	struct xe_device *xe_dev;
>  	int region_idx;
> -	struct drm_xe_query_mem_region *mem_region;
> -	struct drm_xe_query_mem_regions *mem_regions;
> +	struct drm_xe_mem_region *mem_region;
> +	struct drm_xe_query_mem_region *mem_regions;
>  
>  	xe_dev = find_in_cache(fd);
>  	igt_assert(xe_dev);
>  
>  	region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
> -	mem_region = &xe_dev->mem_regions->regions[region_idx];
> +	mem_region = &xe_dev->mem_regions->mem_regions[region_idx];
>  
>  	if (XE_IS_CLASS_VRAM(mem_region)) {
>  		uint64_t available_vram;
>  
>  		mem_regions = xe_query_mem_regions_new(fd);
>  		pthread_mutex_lock(&cache.cache_mutex);
> -		mem_region->used = mem_regions->regions[region_idx].used;
> +		mem_region->used = mem_regions->mem_regions[region_idx].used;
>  		available_vram = mem_region->total_size - mem_region->used;
>  		pthread_mutex_unlock(&cache.cache_mutex);
>  		free(mem_regions);
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index fede00036..1c76b0caf 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -39,7 +39,7 @@ struct xe_device {
>  	unsigned int number_engines;
>  
>  	/** @mem_regions: regions memory information and usage */
> -	struct drm_xe_query_mem_regions *mem_regions;
> +	struct drm_xe_query_mem_region *mem_regions;
>  
>  	/** @vram_size: array of vram sizes for all gt_list */
>  	uint64_t *vram_size;
> @@ -83,7 +83,7 @@ uint64_t vram_memory(int fd, int gt);
>  uint64_t vram_if_possible(int fd, int gt);
>  struct drm_xe_query_engine_info *xe_engines(int fd);
>  struct drm_xe_query_engine_info *xe_engine(int fd, int idx);
> -struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
> +struct drm_xe_mem_region *xe_mem_region(int fd, uint64_t region);
>  const char *xe_region_name(uint64_t region);
>  uint16_t xe_region_class(int fd, uint64_t region);
>  uint32_t xe_min_page_size(int fd, uint64_t region);
> diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
> index 742e6333e..1bb52b142 100644
> --- a/lib/xe/xe_util.c
> +++ b/lib/xe/xe_util.c
> @@ -10,7 +10,7 @@
>  #include "xe/xe_query.h"
>  #include "xe/xe_util.h"
>  
> -static bool __region_belongs_to_regions_type(struct drm_xe_query_mem_region *region,
> +static bool __region_belongs_to_regions_type(struct drm_xe_mem_region *region,
>  					     uint32_t *mem_regions_type,
>  					     int num_regions)
>  {
> @@ -23,7 +23,7 @@ static bool __region_belongs_to_regions_type(struct drm_xe_query_mem_region *reg
>  struct igt_collection *
>  __xe_get_memory_region_set(int xe, uint32_t *mem_regions_type, int num_regions)
>  {
> -	struct drm_xe_query_mem_region *memregion;
> +	struct drm_xe_mem_region *memregion;
>  	struct igt_collection *set = NULL;
>  	uint64_t memreg = all_memory_regions(xe), region;
>  	int count = 0, pos = 0;
> @@ -78,7 +78,7 @@ char *xe_memregion_dynamic_subtest_name(int xe, struct igt_collection *set)
>  	igt_assert(name);
>  
>  	for_each_collection_data(data, set) {
> -		struct drm_xe_query_mem_region *memreg;
> +		struct drm_xe_mem_region *memreg;
>  		int r;
>  
>  		region = data->value;
> diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
> index b04a3443f..19582f94d 100644
> --- a/tests/intel/xe_create.c
> +++ b/tests/intel/xe_create.c
> @@ -48,7 +48,7 @@ static int __create_bo(int fd, uint32_t vm, uint64_t size, uint32_t placement,
>   */
>  static void create_invalid_size(int fd)
>  {
> -	struct drm_xe_query_mem_region *memregion;
> +	struct drm_xe_mem_region *memregion;
>  	uint64_t memreg = all_memory_regions(fd), region;
>  	uint32_t vm;
>  	uint32_t handle;
> diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> index cec3e0825..fc39649ea 100644
> --- a/tests/intel/xe_drm_fdinfo.c
> +++ b/tests/intel/xe_drm_fdinfo.c
> @@ -42,7 +42,7 @@ IGT_TEST_DESCRIPTION("Read and verify drm client memory consumption using fdinfo
>  /* Subtests */
>  static void test_active(int fd, struct drm_xe_query_engine_info *engine)
>  {
> -	struct drm_xe_query_mem_region *memregion;
> +	struct drm_xe_mem_region *memregion;
>  	uint64_t memreg = all_memory_regions(fd), region;
>  	struct drm_client_fdinfo info = { };
>  	uint32_t vm;
> @@ -169,7 +169,7 @@ static void test_active(int fd, struct drm_xe_query_engine_info *engine)
>  
>  static void test_shared(int xe)
>  {
> -	struct drm_xe_query_mem_region *memregion;
> +	struct drm_xe_mem_region *memregion;
>  	uint64_t memreg = all_memory_regions(xe), region;
>  	struct drm_client_fdinfo info = { };
>  	struct drm_gem_flink flink;
> @@ -214,7 +214,7 @@ static void test_shared(int xe)
>  
>  static void test_total_resident(int xe)
>  {
> -	struct drm_xe_query_mem_region *memregion;
> +	struct drm_xe_mem_region *memregion;
>  	uint64_t memreg = all_memory_regions(xe), region;
>  	struct drm_client_fdinfo info = { };
>  	uint32_t vm;
> @@ -262,7 +262,7 @@ static void test_total_resident(int xe)
>  
>  static void basic(int xe)
>  {
> -	struct drm_xe_query_mem_region *memregion;
> +	struct drm_xe_mem_region *memregion;
>  	uint64_t memreg = all_memory_regions(xe), region;
>  	struct drm_client_fdinfo info = { };
>  	unsigned int ret;
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index d78ca31a8..6cd4175ae 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -373,10 +373,10 @@ NULL));
>   */
>  static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
>  {
> -	struct drm_xe_query_mem_regions *mem_regions;
> +	struct drm_xe_query_mem_region *mem_regions;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
> -		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
> +		.query = DRM_XE_DEVICE_QUERY_MEM_REGION,
>  		.size = 0,
>  		.data = 0,
>  	};
> @@ -400,10 +400,10 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
>  	query.data = to_user_pointer(mem_regions);
>  	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	for (i = 0; i < mem_regions->num_regions; i++) {
> -		if (mem_regions->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
> -			vram_used_mb +=  (mem_regions->regions[i].used / (1024 * 1024));
> -			vram_total_mb += (mem_regions->regions[i].total_size / (1024 * 1024));
> +	for (i = 0; i < mem_regions->num_mem_regions; i++) {
> +		if (mem_regions->mem_regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
> +			vram_used_mb +=  (mem_regions->mem_regions[i].used / (1024 * 1024));
> +			vram_total_mb += (mem_regions->mem_regions[i].total_size / (1024 * 1024));
>  		}
>  	}
>  
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 48042337a..562ee2736 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -200,10 +200,10 @@ test_query_engines(int fd)
>  static void
>  test_query_mem_regions(int fd)
>  {
> -	struct drm_xe_query_mem_regions *mem_regions;
> +	struct drm_xe_query_mem_region *mem_regions;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
> -		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
> +		.query = DRM_XE_DEVICE_QUERY_MEM_REGION,
>  		.size = 0,
>  		.data = 0,
>  	};
> @@ -218,34 +218,34 @@ test_query_mem_regions(int fd)
>  	query.data = to_user_pointer(mem_regions);
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	for (i = 0; i < mem_regions->num_regions; i++) {
> +	for (i = 0; i < mem_regions->num_mem_regions; i++) {
>  		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
> -			mem_regions->regions[i].mem_class ==
> +			mem_regions->mem_regions[i].mem_class ==
>  			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
> -			:mem_regions->regions[i].mem_class ==
> +			:mem_regions->mem_regions[i].mem_class ==
>  			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
> -			mem_regions->regions[i].used,
> -			mem_regions->regions[i].total_size
> +			mem_regions->mem_regions[i].used,
> +			mem_regions->mem_regions[i].total_size
>  		);
>  		igt_info("min_page_size=0x%x\n",
> -		       mem_regions->regions[i].min_page_size);
> +		       mem_regions->mem_regions[i].min_page_size);
>  
>  		igt_info("visible size=%lluMiB\n",
> -			 mem_regions->regions[i].cpu_visible_size >> 20);
> +			 mem_regions->mem_regions[i].cpu_visible_size >> 20);
>  		igt_info("visible used=%lluMiB\n",
> -			 mem_regions->regions[i].cpu_visible_used >> 20);
> -
> -		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_size,
> -				   mem_regions->regions[i].total_size);
> -		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
> -				   mem_regions->regions[i].cpu_visible_size);
> -		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
> -				   mem_regions->regions[i].used);
> -		igt_assert_lte_u64(mem_regions->regions[i].used,
> -				   mem_regions->regions[i].total_size);
> -		igt_assert_lte_u64(mem_regions->regions[i].used -
> -				   mem_regions->regions[i].cpu_visible_used,
> -				   mem_regions->regions[i].total_size);
> +			 mem_regions->mem_regions[i].cpu_visible_used >> 20);
> +
> +		igt_assert_lte_u64(mem_regions->mem_regions[i].cpu_visible_size,
> +				   mem_regions->mem_regions[i].total_size);
> +		igt_assert_lte_u64(mem_regions->mem_regions[i].cpu_visible_used,
> +				   mem_regions->mem_regions[i].cpu_visible_size);
> +		igt_assert_lte_u64(mem_regions->mem_regions[i].cpu_visible_used,
> +				   mem_regions->mem_regions[i].used);
> +		igt_assert_lte_u64(mem_regions->mem_regions[i].used,
> +				   mem_regions->mem_regions[i].total_size);
> +		igt_assert_lte_u64(mem_regions->mem_regions[i].used -
> +				   mem_regions->mem_regions[i].cpu_visible_used,
> +				   mem_regions->mem_regions[i].total_size);
>  	}
>  	dump_hex_debug(mem_regions, query.size);
>  	free(mem_regions);
> diff --git a/tests/kms_plane.c b/tests/kms_plane.c
> index 24df7e8ca..419d4e9be 100644
> --- a/tests/kms_plane.c
> +++ b/tests/kms_plane.c
> @@ -458,7 +458,7 @@ test_plane_panning(data_t *data, enum pipe pipe)
>  	}
>  
>  	if (is_xe_device(data->drm_fd)) {
> -		struct drm_xe_query_mem_region *memregion;
> +		struct drm_xe_mem_region *memregion;
>  		uint64_t memreg = all_memory_regions(data->drm_fd), region;
>  
>  		xe_for_each_mem_region(data->drm_fd, memreg, region) {
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [igt-dev] ✗ CI.xeBAT: failure for uAPI Alignment - Cleanup and future proof (rev2)
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (14 preceding siblings ...)
  2023-11-17 18:12 ` [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - Cleanup and future proof (rev2) Patchwork
@ 2023-11-17 19:53 ` Patchwork
  2023-11-18 14:55 ` [igt-dev] ✗ Fi.CI.IGT: " Patchwork
  16 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2023-11-17 19:53 UTC (permalink / raw)
  To: Francois Dugast; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 15091 bytes --]

== Series Details ==

Series: uAPI Alignment - Cleanup and future proof (rev2)
URL   : https://patchwork.freedesktop.org/series/126537/
State : failure

== Summary ==

CI Bug Log - changes from XEIGT_7594_BAT -> XEIGTPW_10212_BAT
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with XEIGTPW_10212_BAT absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in XEIGTPW_10212_BAT, please notify your bug team (lgci.bug.filing@intel.com) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in XEIGTPW_10212_BAT:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_addfb_basic@too-wide:
    - bat-dg2-oem2:       [PASS][1] -> [WARN][2] +7 other tests warn
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@kms_addfb_basic@too-wide.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@kms_addfb_basic@too-wide.html

  * igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch:
    - bat-pvc-2:          [PASS][3] -> [FAIL][4] +86 other tests fail
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-pvc-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-pvc-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch.html

  * igt@xe_exec_threads@threads-mixed-basic:
    - bat-dg2-oem2:       [PASS][5] -> [CRASH][6] +5 other tests crash
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@xe_exec_threads@threads-mixed-basic.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@xe_exec_threads@threads-mixed-basic.html

  * igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate-race:
    - bat-atsm-2:         [PASS][7] -> [CRASH][8] +5 other tests crash
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-atsm-2/igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate-race.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-atsm-2/igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate-race.html

  * igt@xe_intel_bb@create-in-region:
    - bat-dg2-oem2:       [PASS][9] -> [FAIL][10] +105 other tests fail
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
    - bat-adlp-7:         [PASS][11] -> [FAIL][12] +164 other tests fail
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_intel_bb@create-in-region.html
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_intel_bb@create-in-region.html

  * igt@xe_prime_self_import@basic-with_one_bo:
    - bat-atsm-2:         [PASS][13] -> [FAIL][14] +48 other tests fail
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-atsm-2/igt@xe_prime_self_import@basic-with_one_bo.html
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-atsm-2/igt@xe_prime_self_import@basic-with_one_bo.html

  * {igt@xe_query@query-gt} (NEW):
    - bat-adlp-7:         NOTRUN -> [FAIL][15]
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_query@query-gt.html

  * igt@xe_vm@munmap-style-unbind-front:
    - bat-pvc-2:          [PASS][16] -> [SKIP][17] +7 other tests skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-pvc-2/igt@xe_vm@munmap-style-unbind-front.html
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-pvc-2/igt@xe_vm@munmap-style-unbind-front.html

  * igt@xe_vm@munmap-style-unbind-one-partial:
    - bat-atsm-2:         [PASS][18] -> [SKIP][19] +8 other tests skip
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-atsm-2/igt@xe_vm@munmap-style-unbind-one-partial.html
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-atsm-2/igt@xe_vm@munmap-style-unbind-one-partial.html
    - bat-dg2-oem2:       [PASS][20] -> [SKIP][21] +8 other tests skip
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@xe_vm@munmap-style-unbind-one-partial.html
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@xe_vm@munmap-style-unbind-one-partial.html

  
#### Warnings ####

  * igt@kms_addfb_basic@basic-y-tiled-legacy:
    - bat-dg2-oem2:       [SKIP][22] ([Intel XE#624]) -> [FAIL][23]
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
    - bat-adlp-7:         [FAIL][24] ([Intel XE#609]) -> [FAIL][25] +2 other tests fail
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html

  * igt@kms_addfb_basic@tile-pitch-mismatch:
    - bat-dg2-oem2:       [FAIL][26] ([Intel XE#609]) -> [FAIL][27]
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html

  * igt@kms_dsc@dsc-basic:
    - bat-adlp-7:         [SKIP][28] ([Intel XE#423]) -> [FAIL][29]
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@kms_dsc@dsc-basic.html
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@kms_dsc@dsc-basic.html

  * igt@kms_flip@basic-flip-vs-wf_vblank:
    - bat-adlp-7:         [FAIL][30] ([Intel XE#480]) -> [FAIL][31]
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@kms_flip@basic-flip-vs-wf_vblank.html
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@kms_flip@basic-flip-vs-wf_vblank.html

  * igt@kms_flip@basic-flip-vs-wf_vblank@c-dp3:
    - bat-dg2-oem2:       [FAIL][32] ([Intel XE#480]) -> [FAIL][33] +1 other test fail
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@kms_flip@basic-flip-vs-wf_vblank@c-dp3.html
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@kms_flip@basic-flip-vs-wf_vblank@c-dp3.html

  * igt@kms_frontbuffer_tracking@basic:
    - bat-dg2-oem2:       [FAIL][34] ([Intel XE#608]) -> [FAIL][35]
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
    - bat-adlp-7:         [DMESG-FAIL][36] ([Intel XE#282] / [i915#2017]) -> [FAIL][37]
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12:
    - bat-dg2-oem2:       [FAIL][38] ([Intel XE#400] / [Intel XE#616]) -> [FAIL][39] +2 other tests fail
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html

  * igt@xe_evict@evict-beng-small-external:
    - bat-adlp-7:         [SKIP][40] ([Intel XE#261] / [Intel XE#688]) -> [FAIL][41] +15 other tests fail
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html

  * igt@xe_evict@evict-small-cm:
    - bat-pvc-2:          [DMESG-FAIL][42] ([Intel XE#482]) -> [FAIL][43] +3 other tests fail
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-pvc-2/igt@xe_evict@evict-small-cm.html
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-pvc-2/igt@xe_evict@evict-small-cm.html

  * igt@xe_exec_fault_mode@twice-userptr:
    - bat-adlp-7:         [SKIP][44] ([Intel XE#288]) -> [FAIL][45] +17 other tests fail
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html

  * igt@xe_mmap@vram:
    - bat-adlp-7:         [SKIP][46] ([Intel XE#263]) -> [FAIL][47]
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_mmap@vram.html
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_mmap@vram.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@xe_create@create-execqueues-leak}:
    - bat-pvc-2:          [PASS][48] -> [WARN][49] +1 other test warn
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-pvc-2/igt@xe_create@create-execqueues-leak.html
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-pvc-2/igt@xe_create@create-execqueues-leak.html

  * {igt@xe_create@create-execqueues-noleak}:
    - bat-adlp-7:         [FAIL][50] ([Intel XE#524]) -> [FAIL][51]
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_create@create-execqueues-noleak.html
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_create@create-execqueues-noleak.html
    - bat-atsm-2:         [FAIL][52] ([Intel XE#524]) -> [FAIL][53]
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-atsm-2/igt@xe_create@create-execqueues-noleak.html
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-atsm-2/igt@xe_create@create-execqueues-noleak.html

  * {igt@xe_evict_ccs@evict-overcommit-parallel-nofree-samefd}:
    - bat-atsm-2:         [PASS][54] -> [FAIL][55] +2 other tests fail
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-atsm-2/igt@xe_evict_ccs@evict-overcommit-parallel-nofree-samefd.html
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-atsm-2/igt@xe_evict_ccs@evict-overcommit-parallel-nofree-samefd.html
    - bat-pvc-2:          [INCOMPLETE][56] ([Intel XE#392]) -> [FAIL][57]
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-pvc-2/igt@xe_evict_ccs@evict-overcommit-parallel-nofree-samefd.html
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-pvc-2/igt@xe_evict_ccs@evict-overcommit-parallel-nofree-samefd.html

  * {igt@xe_evict_ccs@evict-overcommit-simple}:
    - bat-dg2-oem2:       [PASS][58] -> [FAIL][59] +3 other tests fail
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-dg2-oem2/igt@xe_evict_ccs@evict-overcommit-simple.html
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-dg2-oem2/igt@xe_evict_ccs@evict-overcommit-simple.html
    - bat-adlp-7:         [SKIP][60] ([Intel XE#688]) -> [FAIL][61] +1 other test fail
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_evict_ccs@evict-overcommit-simple.html
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_evict_ccs@evict-overcommit-simple.html

  * {igt@xe_exec_fault_mode@twice-bindexecqueue-userptr}:
    - bat-adlp-7:         [SKIP][62] ([Intel XE#288]) -> [FAIL][63] +14 other tests fail
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr.html
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr.html

  * {igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate}:
    - bat-pvc-2:          [PASS][64] -> [FAIL][65] +27 other tests fail
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-pvc-2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate.html
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-pvc-2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate.html

  * {igt@xe_vm@bind-execqueues-independent}:
    - bat-adlp-7:         [PASS][66] -> [FAIL][67] +15 other tests fail
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7594/bat-adlp-7/igt@xe_vm@bind-execqueues-independent.html
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/bat-adlp-7/igt@xe_vm@bind-execqueues-independent.html

  
New tests
---------

  New tests have been introduced between XEIGT_7594_BAT and XEIGTPW_10212_BAT:

### New IGT tests (1) ###

  * igt@xe_query@query-gt:
    - Statuses : 1 fail(s) 3 pass(s)
    - Exec time: [0.0] s

  

  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
  [Intel XE#263]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/263
  [Intel XE#282]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/282
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/392
  [Intel XE#400]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/400
  [Intel XE#423]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/423
  [Intel XE#480]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/480
  [Intel XE#482]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/482
  [Intel XE#524]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/524
  [Intel XE#608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/608
  [Intel XE#609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/609
  [Intel XE#616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/616
  [Intel XE#624]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/624
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [i915#2017]: https://gitlab.freedesktop.org/drm/intel/issues/2017


Build changes
-------------

  * IGT: IGT_7594 -> IGTPW_10212
  * Linux: xe-502-3b8183b7efad3d97ab6cf401f3fc0d24b30b6d3d -> xe-503-eba8bfb1dc87535362e28de282addc8752204df4

  IGTPW_10212: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/index.html
  IGT_7594: 8478eefdaa3eef02b4370339ef0d1970d44a67a2 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-502-3b8183b7efad3d97ab6cf401f3fc0d24b30b6d3d: 3b8183b7efad3d97ab6cf401f3fc0d24b30b6d3d
  xe-503-eba8bfb1dc87535362e28de282addc8752204df4: eba8bfb1dc87535362e28de282addc8752204df4

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10212/index.html

[-- Attachment #2: Type: text/html, Size: 16902 bytes --]

* [igt-dev] ✗ Fi.CI.IGT: failure for uAPI Alignment - Cleanup and future proof (rev2)
  2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
                   ` (15 preceding siblings ...)
  2023-11-17 19:53 ` [igt-dev] ✗ CI.xeBAT: failure " Patchwork
@ 2023-11-18 14:55 ` Patchwork
  16 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2023-11-18 14:55 UTC (permalink / raw)
  To: Francois Dugast; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 100268 bytes --]

== Series Details ==

Series: uAPI Alignment - Cleanup and future proof (rev2)
URL   : https://patchwork.freedesktop.org/series/126537/
State : failure

== Summary ==

CI Bug Log - changes from IGT_7594_full -> IGTPW_10212_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_10212_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_10212_full, please notify your bug team (lgci.bug.filing@intel.com) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/index.html

Participating hosts (11 -> 10)
------------------------------

  Missing    (1): shard-mtlp0 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_10212_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_dsc@dsc-fractional-bpp:
    - shard-dg2:          NOTRUN -> [SKIP][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_dsc@dsc-fractional-bpp.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-vga1-hdmi-a1:
    - shard-snb:          NOTRUN -> [DMESG-WARN][2]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-snb1/igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-vga1-hdmi-a1.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_pathological}:
    - shard-snb:          [PASS][3] -> [TIMEOUT][4] +2 other tests timeout
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-snb6/igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_pathological.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-snb6/igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_pathological.html

  * {igt@i915_pm_rc6_residency@rc6-accuracy@gt0}:
    - shard-dg2:          [PASS][5] -> [FAIL][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg2-11/igt@i915_pm_rc6_residency@rc6-accuracy@gt0.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@i915_pm_rc6_residency@rc6-accuracy@gt0.html

  * {igt@i915_pm_rc6_residency@rc6-idle@gt0-vecs0}:
    - shard-dg1:          [PASS][7] -> [FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-18/igt@i915_pm_rc6_residency@rc6-idle@gt0-vecs0.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-17/igt@i915_pm_rc6_residency@rc6-idle@gt0-vecs0.html

  * {igt@kms_psr@pr_cursor_blt}:
    - shard-mtlp:         NOTRUN -> [SKIP][9] +2 other tests skip
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_psr@pr_cursor_blt.html

  * {igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options}:
    - shard-mtlp:         [PASS][10] -> [TIMEOUT][11] +1 other test timeout
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-4/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
    - shard-rkl:          NOTRUN -> [TIMEOUT][12]
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
    - shard-dg1:          [PASS][13] -> [TIMEOUT][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-14/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
    - shard-tglu:         [PASS][15] -> [TIMEOUT][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-tglu-10/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-4/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html

  * {igt@kms_selftest@drm_damage_helper@drm_test_damage_iter_single_damage_outside_src}:
    - shard-dg2:          NOTRUN -> [TIMEOUT][17]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_selftest@drm_damage_helper@drm_test_damage_iter_single_damage_outside_src.html

  * {igt@kms_selftest@drm_format@drm_test_format_min_pitch_one_plane_32bpp}:
    - shard-rkl:          [PASS][18] -> [TIMEOUT][19] +2 other tests timeout
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_selftest@drm_format@drm_test_format_min_pitch_one_plane_32bpp.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_selftest@drm_format@drm_test_format_min_pitch_one_plane_32bpp.html

  
Known issues
------------

  Here are the changes found in IGTPW_10212_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@blit-reloc-keep-cache:
    - shard-mtlp:         NOTRUN -> [SKIP][20] ([i915#8411])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@api_intel_bb@blit-reloc-keep-cache.html

  * igt@api_intel_bb@object-reloc-keep-cache:
    - shard-dg2:          NOTRUN -> [SKIP][21] ([i915#8411])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@api_intel_bb@object-reloc-keep-cache.html

  * igt@device_reset@cold-reset-bound:
    - shard-mtlp:         NOTRUN -> [SKIP][22] ([i915#7701])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@device_reset@cold-reset-bound.html

  * igt@drm_fdinfo@most-busy-idle-check-all@rcs0:
    - shard-rkl:          [PASS][23] -> [FAIL][24] ([i915#7742])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@drm_fdinfo@most-busy-idle-check-all@rcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@drm_fdinfo@most-busy-idle-check-all@rcs0.html

  * igt@drm_fdinfo@most-busy-idle-check-all@vecs1:
    - shard-dg2:          NOTRUN -> [SKIP][25] ([i915#8414]) +22 other tests skip
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@drm_fdinfo@most-busy-idle-check-all@vecs1.html

  * igt@drm_fdinfo@virtual-busy:
    - shard-mtlp:         NOTRUN -> [SKIP][26] ([i915#8414]) +1 other test skip
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@drm_fdinfo@virtual-busy.html

  * igt@drm_fdinfo@virtual-busy-all:
    - shard-dg1:          NOTRUN -> [SKIP][27] ([i915#8414])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-18/igt@drm_fdinfo@virtual-busy-all.html

  * igt@drm_read@empty-block:
    - shard-rkl:          [PASS][28] -> [SKIP][29] ([i915#1845] / [i915#4098]) +17 other tests skip
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@drm_read@empty-block.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@drm_read@empty-block.html

  * igt@fbdev@unaligned-write:
    - shard-rkl:          [PASS][30] -> [SKIP][31] ([i915#2582]) +1 other test skip
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@fbdev@unaligned-write.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@fbdev@unaligned-write.html

  * igt@gem_bad_reloc@negative-reloc-lut:
    - shard-rkl:          [PASS][32] -> [SKIP][33] ([i915#3281]) +7 other tests skip
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@gem_bad_reloc@negative-reloc-lut.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@gem_bad_reloc@negative-reloc-lut.html

  * igt@gem_basic@multigpu-create-close:
    - shard-mtlp:         NOTRUN -> [SKIP][34] ([i915#7697])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@gem_basic@multigpu-create-close.html

  * igt@gem_ccs@suspend-resume:
    - shard-mtlp:         NOTRUN -> [SKIP][35] ([i915#9323])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@gem_ccs@suspend-resume.html

  * igt@gem_close_race@multigpu-basic-process:
    - shard-rkl:          NOTRUN -> [SKIP][36] ([i915#7697])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@gem_close_race@multigpu-basic-process.html

  * igt@gem_close_race@multigpu-basic-threads:
    - shard-dg2:          NOTRUN -> [SKIP][37] ([i915#7697]) +1 other test skip
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@gem_close_race@multigpu-basic-threads.html

  * igt@gem_ctx_exec@basic-nohangcheck:
    - shard-tglu:         [PASS][38] -> [FAIL][39] ([i915#6268])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-tglu-7/igt@gem_ctx_exec@basic-nohangcheck.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-7/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_ctx_persistence@hang:
    - shard-mtlp:         NOTRUN -> [SKIP][40] ([i915#8555])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@gem_ctx_persistence@hang.html

  * igt@gem_ctx_persistence@saturated-hostile-nopreempt@ccs0:
    - shard-dg2:          NOTRUN -> [SKIP][41] ([i915#5882]) +9 other tests skip
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@gem_ctx_persistence@saturated-hostile-nopreempt@ccs0.html

  * igt@gem_ctx_sseu@engines:
    - shard-dg2:          NOTRUN -> [SKIP][42] ([i915#280])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_ctx_sseu@engines.html
    - shard-dg1:          NOTRUN -> [SKIP][43] ([i915#280])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@gem_ctx_sseu@engines.html

  * igt@gem_ctx_sseu@invalid-args:
    - shard-mtlp:         NOTRUN -> [SKIP][44] ([i915#280])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-5/igt@gem_ctx_sseu@invalid-args.html

  * igt@gem_exec_balancer@parallel-ordering:
    - shard-rkl:          NOTRUN -> [SKIP][45] ([i915#4525])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_exec_balancer@parallel-ordering.html

  * igt@gem_exec_capture@many-4k-incremental:
    - shard-dg2:          NOTRUN -> [FAIL][46] ([i915#9606])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@gem_exec_capture@many-4k-incremental.html

  * igt@gem_exec_endless@dispatch@bcs0:
    - shard-rkl:          [PASS][47] -> [SKIP][48] ([i915#9591])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@gem_exec_endless@dispatch@bcs0.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_exec_endless@dispatch@bcs0.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-rkl:          NOTRUN -> [FAIL][49] ([i915#2842])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none-solo:
    - shard-mtlp:         NOTRUN -> [SKIP][50] ([i915#4473]) +1 other test skip
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-7/igt@gem_exec_fair@basic-none-solo.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-glk:          NOTRUN -> [FAIL][51] ([i915#2842])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk8/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [PASS][52] -> [FAIL][53] ([i915#2842]) +1 other test fail
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-glk3/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk9/igt@gem_exec_fair@basic-pace-share@rcs0.html
    - shard-rkl:          [PASS][54] -> [FAIL][55] ([i915#2842])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-sync:
    - shard-dg1:          NOTRUN -> [SKIP][56] ([i915#3539])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@gem_exec_fair@basic-sync.html

  * igt@gem_exec_fence@submit67:
    - shard-dg2:          NOTRUN -> [SKIP][57] ([i915#4812]) +1 other test skip
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_exec_fence@submit67.html

  * igt@gem_exec_flush@basic-uc-set-default:
    - shard-dg2:          NOTRUN -> [SKIP][58] ([i915#3539]) +1 other test skip
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_exec_flush@basic-uc-set-default.html

  * igt@gem_exec_flush@basic-wb-rw-before-default:
    - shard-dg2:          NOTRUN -> [SKIP][59] ([i915#3539] / [i915#4852]) +2 other tests skip
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_exec_flush@basic-wb-rw-before-default.html

  * igt@gem_exec_reloc@basic-cpu-gtt-noreloc:
    - shard-dg2:          NOTRUN -> [SKIP][60] ([i915#3281]) +14 other tests skip
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@gem_exec_reloc@basic-cpu-gtt-noreloc.html
    - shard-rkl:          NOTRUN -> [SKIP][61] ([i915#3281]) +6 other tests skip
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@gem_exec_reloc@basic-cpu-gtt-noreloc.html

  * igt@gem_exec_reloc@basic-write-read-active:
    - shard-dg1:          NOTRUN -> [SKIP][62] ([i915#3281]) +3 other tests skip
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@gem_exec_reloc@basic-write-read-active.html

  * igt@gem_exec_reloc@basic-write-read-noreloc:
    - shard-mtlp:         NOTRUN -> [SKIP][63] ([i915#3281]) +11 other tests skip
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@gem_exec_reloc@basic-write-read-noreloc.html

  * igt@gem_exec_schedule@preempt-queue-contexts-chain:
    - shard-dg2:          NOTRUN -> [SKIP][64] ([i915#4537] / [i915#4812]) +1 other test skip
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@gem_exec_schedule@preempt-queue-contexts-chain.html

  * igt@gem_exec_schedule@reorder-wide:
    - shard-dg1:          NOTRUN -> [SKIP][65] ([i915#4812]) +1 other test skip
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@gem_exec_schedule@reorder-wide.html

  * igt@gem_exec_suspend@basic-s4-devices@lmem0:
    - shard-dg1:          [PASS][66] -> [ABORT][67] ([i915#7975] / [i915#8213])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-17/igt@gem_exec_suspend@basic-s4-devices@lmem0.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-14/igt@gem_exec_suspend@basic-s4-devices@lmem0.html

  * igt@gem_fenced_exec_thrash@no-spare-fences:
    - shard-mtlp:         NOTRUN -> [SKIP][68] ([i915#4860]) +1 other test skip
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@gem_fenced_exec_thrash@no-spare-fences.html

  * igt@gem_fenced_exec_thrash@no-spare-fences-interruptible:
    - shard-dg2:          NOTRUN -> [SKIP][69] ([i915#4860]) +1 other test skip
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@gem_fenced_exec_thrash@no-spare-fences-interruptible.html

  * igt@gem_lmem_swapping@heavy-verify-multi:
    - shard-rkl:          NOTRUN -> [SKIP][70] ([i915#4613])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@gem_lmem_swapping@heavy-verify-multi.html

  * igt@gem_lmem_swapping@parallel-random:
    - shard-tglu:         NOTRUN -> [SKIP][71] ([i915#4613])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-10/igt@gem_lmem_swapping@parallel-random.html

  * igt@gem_lmem_swapping@random-engines:
    - shard-glk:          NOTRUN -> [SKIP][72] ([fdo#109271] / [i915#4613]) +1 other test skip
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk3/igt@gem_lmem_swapping@random-engines.html

  * igt@gem_lmem_swapping@verify-random-ccs:
    - shard-mtlp:         NOTRUN -> [SKIP][73] ([i915#4613]) +4 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@gem_lmem_swapping@verify-random-ccs.html

  * igt@gem_madvise@dontneed-before-pwrite:
    - shard-dg1:          NOTRUN -> [SKIP][74] ([i915#3282]) +1 other test skip
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-14/igt@gem_madvise@dontneed-before-pwrite.html

  * igt@gem_media_fill@media-fill:
    - shard-dg2:          NOTRUN -> [SKIP][75] ([i915#8289])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@gem_media_fill@media-fill.html

  * igt@gem_mmap_gtt@big-bo-tiledy:
    - shard-mtlp:         NOTRUN -> [SKIP][76] ([i915#4077]) +7 other tests skip
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@gem_mmap_gtt@big-bo-tiledy.html

  * igt@gem_mmap_gtt@cpuset-big-copy-odd:
    - shard-dg2:          NOTRUN -> [SKIP][77] ([i915#4077]) +12 other tests skip
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
    - shard-dg1:          NOTRUN -> [SKIP][78] ([i915#4077]) +2 other tests skip
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@gem_mmap_gtt@cpuset-big-copy-odd.html

  * igt@gem_mmap_wc@bad-object:
    - shard-dg2:          NOTRUN -> [SKIP][79] ([i915#4083]) +6 other tests skip
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@gem_mmap_wc@bad-object.html

  * igt@gem_mmap_wc@read:
    - shard-mtlp:         NOTRUN -> [SKIP][80] ([i915#4083]) +3 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@gem_mmap_wc@read.html

  * igt@gem_mmap_wc@set-cache-level:
    - shard-rkl:          [PASS][81] -> [SKIP][82] ([i915#1850])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@gem_mmap_wc@set-cache-level.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_mmap_wc@set-cache-level.html

  * igt@gem_partial_pwrite_pread@reads:
    - shard-dg2:          NOTRUN -> [SKIP][83] ([i915#3282]) +8 other tests skip
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@gem_partial_pwrite_pread@reads.html

  * igt@gem_partial_pwrite_pread@reads-uncached:
    - shard-rkl:          NOTRUN -> [SKIP][84] ([i915#3282]) +2 other tests skip
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@gem_partial_pwrite_pread@reads-uncached.html

  * igt@gem_partial_pwrite_pread@writes-after-reads:
    - shard-mtlp:         NOTRUN -> [SKIP][85] ([i915#3282]) +3 other tests skip
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@gem_partial_pwrite_pread@writes-after-reads.html

  * igt@gem_partial_pwrite_pread@writes-after-reads-uncached:
    - shard-rkl:          [PASS][86] -> [SKIP][87] ([i915#3282]) +8 other tests skip
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@gem_partial_pwrite_pread@writes-after-reads-uncached.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@gem_partial_pwrite_pread@writes-after-reads-uncached.html

  * igt@gem_pxp@create-regular-context-2:
    - shard-rkl:          NOTRUN -> [SKIP][88] ([i915#4270]) +1 other test skip
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@gem_pxp@create-regular-context-2.html

  * igt@gem_pxp@display-protected-crc:
    - shard-mtlp:         NOTRUN -> [SKIP][89] ([i915#4270]) +2 other tests skip
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@gem_pxp@display-protected-crc.html

  * igt@gem_pxp@dmabuf-shared-protected-dst-is-context-refcounted:
    - shard-dg2:          NOTRUN -> [SKIP][90] ([i915#4270]) +1 other test skip
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@gem_pxp@dmabuf-shared-protected-dst-is-context-refcounted.html

  * igt@gem_pxp@reject-modify-context-protection-on:
    - shard-dg1:          NOTRUN -> [SKIP][91] ([i915#4270]) +1 other test skip
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@gem_pxp@reject-modify-context-protection-on.html

  * igt@gem_render_copy@y-tiled-to-vebox-yf-tiled:
    - shard-rkl:          NOTRUN -> [SKIP][92] ([i915#768]) +1 other test skip
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_render_copy@y-tiled-to-vebox-yf-tiled.html

  * igt@gem_render_copy@yf-tiled-ccs-to-y-tiled-ccs:
    - shard-mtlp:         NOTRUN -> [SKIP][93] ([i915#8428]) +4 other tests skip
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@gem_render_copy@yf-tiled-ccs-to-y-tiled-ccs.html

  * igt@gem_set_tiling_vs_blt@tiled-to-tiled:
    - shard-rkl:          [PASS][94] -> [SKIP][95] ([i915#8411])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@gem_set_tiling_vs_blt@tiled-to-tiled.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@gem_set_tiling_vs_blt@tiled-to-tiled.html

  * igt@gem_set_tiling_vs_blt@tiled-to-untiled:
    - shard-dg2:          NOTRUN -> [SKIP][96] ([i915#4079]) +2 other tests skip
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_set_tiling_vs_blt@tiled-to-untiled.html

  * igt@gem_softpin@evict-snoop-interruptible:
    - shard-mtlp:         NOTRUN -> [SKIP][97] ([i915#4885])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@gem_softpin@evict-snoop-interruptible.html

  * igt@gem_spin_batch@spin-all-new:
    - shard-dg2:          NOTRUN -> [FAIL][98] ([i915#5889])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@gem_spin_batch@spin-all-new.html

  * igt@gem_tiled_pread_basic:
    - shard-dg1:          NOTRUN -> [SKIP][99] ([i915#4079])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@gem_tiled_pread_basic.html

  * igt@gem_tiled_pread_pwrite:
    - shard-mtlp:         NOTRUN -> [SKIP][100] ([i915#4079])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@gem_tiled_pread_pwrite.html

  * igt@gem_unfence_active_buffers:
    - shard-mtlp:         NOTRUN -> [SKIP][101] ([i915#4879])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@gem_unfence_active_buffers.html

  * igt@gem_userptr_blits@create-destroy-unsync:
    - shard-dg2:          NOTRUN -> [SKIP][102] ([i915#3297]) +2 other tests skip
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_userptr_blits@create-destroy-unsync.html
    - shard-rkl:          NOTRUN -> [SKIP][103] ([i915#3297]) +1 other test skip
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@gem_userptr_blits@create-destroy-unsync.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy:
    - shard-dg2:          NOTRUN -> [SKIP][104] ([i915#3297] / [i915#4880]) +1 other test skip
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gem_userptr_blits@map-fixed-invalidate-busy.html

  * igt@gem_userptr_blits@unsync-overlap:
    - shard-mtlp:         NOTRUN -> [SKIP][105] ([i915#3297]) +3 other tests skip
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@gem_userptr_blits@unsync-overlap.html

  * igt@gem_userptr_blits@unsync-unmap-cycles:
    - shard-dg1:          NOTRUN -> [SKIP][106] ([i915#3297]) +1 other test skip
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@gem_userptr_blits@unsync-unmap-cycles.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-glk:          NOTRUN -> [FAIL][107] ([i915#3318])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk9/igt@gem_userptr_blits@vma-merge.html
    - shard-dg2:          NOTRUN -> [FAIL][108] ([i915#3318])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@gem_userptr_blits@vma-merge.html

  * igt@gen3_render_linear_blits:
    - shard-rkl:          NOTRUN -> [SKIP][109] ([fdo#109289]) +1 other test skip
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@gen3_render_linear_blits.html

  * igt@gen9_exec_parse@basic-rejected:
    - shard-rkl:          NOTRUN -> [SKIP][110] ([i915#2527])
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@gen9_exec_parse@basic-rejected.html

  * igt@gen9_exec_parse@basic-rejected-ctx-param:
    - shard-mtlp:         NOTRUN -> [SKIP][111] ([i915#2856]) +2 other tests skip
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@gen9_exec_parse@basic-rejected-ctx-param.html

  * igt@gen9_exec_parse@batch-invalid-length:
    - shard-dg2:          NOTRUN -> [SKIP][112] ([i915#2856]) +1 other test skip
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@gen9_exec_parse@batch-invalid-length.html

  * igt@gen9_exec_parse@shadow-peek:
    - shard-rkl:          [PASS][113] -> [SKIP][114] ([i915#2527]) +2 other tests skip
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@gen9_exec_parse@shadow-peek.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@gen9_exec_parse@shadow-peek.html

  * igt@i915_pm_rc6_residency@media-rc6-accuracy:
    - shard-dg1:          NOTRUN -> [SKIP][115] ([fdo#109289]) +1 other test skip
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@i915_pm_rc6_residency@media-rc6-accuracy.html

  * igt@i915_pm_rps@min-max-config-loaded:
    - shard-mtlp:         NOTRUN -> [SKIP][116] ([i915#6621])
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@i915_pm_rps@min-max-config-loaded.html

  * igt@i915_pm_rps@thresholds-idle-park@gt0:
    - shard-mtlp:         NOTRUN -> [SKIP][117] ([i915#8925])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-7/igt@i915_pm_rps@thresholds-idle-park@gt0.html

  * igt@i915_pm_rps@thresholds-idle-park@gt1:
    - shard-mtlp:         NOTRUN -> [SKIP][118] ([i915#3555] / [i915#8925])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-7/igt@i915_pm_rps@thresholds-idle-park@gt1.html

  * igt@i915_pm_rps@thresholds-idle@gt0:
    - shard-dg1:          NOTRUN -> [SKIP][119] ([i915#8925])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@i915_pm_rps@thresholds-idle@gt0.html

  * igt@i915_query@query-topology-unsupported:
    - shard-mtlp:         NOTRUN -> [SKIP][120] ([fdo#109302])
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-5/igt@i915_query@query-topology-unsupported.html

  * igt@i915_suspend@basic-s3-without-i915:
    - shard-rkl:          [PASS][121] -> [FAIL][122] ([fdo#103375])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@i915_suspend@basic-s3-without-i915.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@i915_suspend@basic-s3-without-i915.html

  * igt@kms_addfb_basic@addfb25-framebuffer-vs-set-tiling:
    - shard-dg1:          NOTRUN -> [SKIP][123] ([i915#4212])
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@kms_addfb_basic@addfb25-framebuffer-vs-set-tiling.html

  * igt@kms_addfb_basic@addfb25-x-tiled-legacy:
    - shard-mtlp:         NOTRUN -> [SKIP][124] ([i915#4212])
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@kms_addfb_basic@addfb25-x-tiled-legacy.html

  * igt@kms_addfb_basic@addfb25-x-tiled-mismatch-legacy:
    - shard-dg2:          NOTRUN -> [SKIP][125] ([i915#4212])
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_addfb_basic@addfb25-x-tiled-mismatch-legacy.html

  * igt@kms_addfb_basic@invalid-smem-bo-on-discrete:
    - shard-mtlp:         NOTRUN -> [SKIP][126] ([i915#3826])
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_addfb_basic@invalid-smem-bo-on-discrete.html

  * igt@kms_addfb_basic@tile-pitch-mismatch:
    - shard-dg2:          NOTRUN -> [SKIP][127] ([i915#4212] / [i915#5608])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_addfb_basic@tile-pitch-mismatch.html

  * igt@kms_async_flips@invalid-async-flip:
    - shard-dg2:          NOTRUN -> [SKIP][128] ([i915#6228])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_async_flips@invalid-async-flip.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels:
    - shard-dg2:          NOTRUN -> [SKIP][129] ([i915#1769] / [i915#3555])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html

  * igt@kms_atomic_transition@plane-use-after-nonblocking-unbind:
    - shard-rkl:          NOTRUN -> [SKIP][130] ([i915#1845] / [i915#4098]) +15 other tests skip
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_atomic_transition@plane-use-after-nonblocking-unbind.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-0:
    - shard-dg1:          NOTRUN -> [SKIP][131] ([i915#4538] / [i915#5286]) +2 other tests skip
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@kms_big_fb@4-tiled-64bpp-rotate-0.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-270:
    - shard-rkl:          NOTRUN -> [SKIP][132] ([i915#5286])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip:
    - shard-mtlp:         [PASS][133] -> [FAIL][134] ([i915#5138]) +1 other test fail
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-8/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-7/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip.html

  * igt@kms_big_fb@linear-16bpp-rotate-90:
    - shard-dg2:          NOTRUN -> [SKIP][135] ([fdo#111614]) +6 other tests skip
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_big_fb@linear-16bpp-rotate-90.html

  * igt@kms_big_fb@linear-32bpp-rotate-90:
    - shard-mtlp:         NOTRUN -> [SKIP][136] ([fdo#111614]) +2 other tests skip
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@kms_big_fb@linear-32bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-16bpp-rotate-270:
    - shard-dg1:          NOTRUN -> [SKIP][137] ([i915#3638]) +1 other test skip
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-18/igt@kms_big_fb@x-tiled-16bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-8bpp-rotate-270:
    - shard-rkl:          NOTRUN -> [SKIP][138] ([fdo#111614] / [i915#3638])
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-addfb:
    - shard-mtlp:         NOTRUN -> [SKIP][139] ([i915#6187])
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_big_fb@y-tiled-addfb.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
    - shard-tglu:         [PASS][140] -> [FAIL][141] ([i915#3743]) +1 other test fail
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-tglu-7/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-9/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - shard-dg2:          NOTRUN -> [SKIP][142] ([i915#5190]) +14 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-0:
    - shard-mtlp:         NOTRUN -> [SKIP][143] ([fdo#111615]) +8 other tests skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_big_fb@yf-tiled-32bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-180:
    - shard-dg2:          NOTRUN -> [SKIP][144] ([i915#4538] / [i915#5190]) +4 other tests skip
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html
    - shard-rkl:          NOTRUN -> [SKIP][145] ([fdo#110723]) +1 other test skip
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-180:
    - shard-dg1:          NOTRUN -> [SKIP][146] ([i915#4538]) +1 other test skip
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-tglu:         NOTRUN -> [SKIP][147] ([fdo#111615])
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-10/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_big_joiner@invalid-modeset:
    - shard-dg1:          NOTRUN -> [SKIP][148] ([i915#2705])
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@kms_big_joiner@invalid-modeset.html

  * igt@kms_cdclk@mode-transition@pipe-d-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][149] ([i915#4087] / [i915#7213]) +3 other tests skip
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_cdclk@mode-transition@pipe-d-hdmi-a-3.html

  * igt@kms_cdclk@plane-scaling@pipe-c-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][150] ([i915#4087]) +3 other tests skip
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_cdclk@plane-scaling@pipe-c-hdmi-a-3.html

  * igt@kms_chamelium_color@ctm-max:
    - shard-dg1:          NOTRUN -> [SKIP][151] ([fdo#111827])
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@kms_chamelium_color@ctm-max.html
    - shard-mtlp:         NOTRUN -> [SKIP][152] ([fdo#111827])
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@kms_chamelium_color@ctm-max.html

  * igt@kms_chamelium_color@ctm-negative:
    - shard-glk:          NOTRUN -> [SKIP][153] ([fdo#109271]) +73 other tests skip
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk5/igt@kms_chamelium_color@ctm-negative.html

  * igt@kms_chamelium_color@ctm-red-to-blue:
    - shard-dg2:          NOTRUN -> [SKIP][154] ([fdo#111827]) +1 other test skip
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_chamelium_color@ctm-red-to-blue.html

  * igt@kms_chamelium_color@degamma:
    - shard-rkl:          NOTRUN -> [SKIP][155] ([fdo#111827]) +1 other test skip
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_chamelium_color@degamma.html

  * igt@kms_chamelium_edid@hdmi-edid-stress-resolution-non-4k:
    - shard-dg2:          NOTRUN -> [SKIP][156] ([i915#7828]) +10 other tests skip
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_chamelium_edid@hdmi-edid-stress-resolution-non-4k.html

  * igt@kms_chamelium_edid@hdmi-mode-timings:
    - shard-dg1:          NOTRUN -> [SKIP][157] ([i915#7828])
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@kms_chamelium_edid@hdmi-mode-timings.html

  * igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode:
    - shard-rkl:          NOTRUN -> [SKIP][158] ([i915#7828]) +2 other tests skip
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode.html

  * igt@kms_chamelium_hpd@hdmi-hpd-enable-disable-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][159] ([i915#7828]) +7 other tests skip
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@kms_chamelium_hpd@hdmi-hpd-enable-disable-mode.html

  * igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe:
    - shard-tglu:         NOTRUN -> [SKIP][160] ([i915#7828]) +1 other test skip
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-2/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html

  * igt@kms_color@ctm-0-75@pipe-b:
    - shard-rkl:          [PASS][161] -> [SKIP][162] ([i915#4098]) +1 other test skip
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_color@ctm-0-75@pipe-b.html
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_color@ctm-0-75@pipe-b.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-mtlp:         NOTRUN -> [SKIP][163] ([i915#6944])
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@atomic-dpms@pipe-a-dp-4:
    - shard-dg2:          NOTRUN -> [TIMEOUT][164] ([i915#7173])
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_content_protection@atomic-dpms@pipe-a-dp-4.html

  * igt@kms_content_protection@atomic@pipe-a-dp-1:
    - shard-apl:          NOTRUN -> [TIMEOUT][165] ([i915#7173])
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-apl2/igt@kms_content_protection@atomic@pipe-a-dp-1.html

  * igt@kms_content_protection@dp-mst-lic-type-1:
    - shard-mtlp:         NOTRUN -> [SKIP][166] ([i915#3299])
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@kms_content_protection@dp-mst-lic-type-1.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-dg2:          NOTRUN -> [SKIP][167] ([i915#3299])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_cursor_crc@cursor-onscreen-32x10:
    - shard-dg1:          NOTRUN -> [SKIP][168] ([i915#3555])
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@kms_cursor_crc@cursor-onscreen-32x10.html

  * igt@kms_cursor_crc@cursor-onscreen-512x170:
    - shard-rkl:          NOTRUN -> [SKIP][169] ([fdo#109279] / [i915#3359])
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_cursor_crc@cursor-onscreen-512x170.html

  * igt@kms_cursor_crc@cursor-random-32x10:
    - shard-mtlp:         NOTRUN -> [SKIP][170] ([i915#3555] / [i915#8814])
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-5/igt@kms_cursor_crc@cursor-random-32x10.html

  * igt@kms_cursor_crc@cursor-random-512x170:
    - shard-dg2:          NOTRUN -> [SKIP][171] ([i915#3359]) +2 other tests skip
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_cursor_crc@cursor-random-512x170.html
    - shard-dg1:          NOTRUN -> [SKIP][172] ([i915#3359])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-14/igt@kms_cursor_crc@cursor-random-512x170.html

  * igt@kms_cursor_crc@cursor-sliding-32x10:
    - shard-dg2:          NOTRUN -> [SKIP][173] ([i915#3555]) +6 other tests skip
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_cursor_crc@cursor-sliding-32x10.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic:
    - shard-mtlp:         NOTRUN -> [SKIP][174] ([fdo#111767] / [i915#3546])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@2x-nonblocking-modeset-vs-cursor-atomic:
    - shard-mtlp:         NOTRUN -> [SKIP][175] ([i915#3546]) +2 other tests skip
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@kms_cursor_legacy@2x-nonblocking-modeset-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
    - shard-tglu:         NOTRUN -> [SKIP][176] ([fdo#109274])
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-2/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
    - shard-rkl:          NOTRUN -> [SKIP][177] ([fdo#111825]) +4 other tests skip
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-toggle:
    - shard-dg2:          NOTRUN -> [SKIP][178] ([fdo#109274] / [i915#5354]) +3 other tests skip
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_cursor_legacy@cursorb-vs-flipa-toggle.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-glk:          [PASS][179] -> [FAIL][180] ([i915#2346])
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-glk9/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk2/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions:
    - shard-dg2:          NOTRUN -> [SKIP][181] ([i915#4103] / [i915#4213])
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html
    - shard-tglu:         NOTRUN -> [SKIP][182] ([i915#4103])
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-2/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-rkl:          NOTRUN -> [SKIP][183] ([i915#4103])
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_dirtyfb@dirtyfb-ioctl@psr-vga-1:
    - shard-snb:          NOTRUN -> [SKIP][184] ([fdo#109271]) +10 other tests skip
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-snb6/igt@kms_dirtyfb@dirtyfb-ioctl@psr-vga-1.html

  * igt@kms_dsc@dsc-with-output-formats:
    - shard-mtlp:         NOTRUN -> [SKIP][185] ([i915#3555] / [i915#3840])
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_dsc@dsc-with-output-formats.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-glk:          [PASS][186] -> [FAIL][187] ([i915#4767])
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-glk9/igt@kms_fbcon_fbt@fbc-suspend.html
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk2/igt@kms_fbcon_fbt@fbc-suspend.html
    - shard-rkl:          [PASS][188] -> [SKIP][189] ([i915#1849] / [i915#4098])
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-4/igt@kms_fbcon_fbt@fbc-suspend.html
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_fence_pin_leak:
    - shard-dg2:          NOTRUN -> [SKIP][190] ([i915#4881])
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_fence_pin_leak.html

  * igt@kms_flip@2x-blocking-wf_vblank:
    - shard-tglu:         NOTRUN -> [SKIP][191] ([fdo#109274] / [i915#3637])
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-7/igt@kms_flip@2x-blocking-wf_vblank.html

  * igt@kms_flip@2x-flip-vs-absolute-wf_vblank:
    - shard-mtlp:         NOTRUN -> [SKIP][192] ([i915#3637]) +2 other tests skip
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_flip@2x-flip-vs-absolute-wf_vblank.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible:
    - shard-dg2:          NOTRUN -> [SKIP][193] ([fdo#109274] / [fdo#111767])
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible.html

  * igt@kms_flip@2x-flip-vs-panning-vs-hang:
    - shard-dg2:          NOTRUN -> [SKIP][194] ([fdo#109274]) +6 other tests skip
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_flip@2x-flip-vs-panning-vs-hang.html

  * igt@kms_flip@basic-flip-vs-dpms:
    - shard-rkl:          NOTRUN -> [SKIP][195] ([i915#3637] / [i915#4098]) +8 other tests skip
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_flip@basic-flip-vs-dpms.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-upscaling:
    - shard-rkl:          NOTRUN -> [SKIP][196] ([i915#3555]) +10 other tests skip
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-upscaling@pipe-a-valid-mode:
    - shard-dg1:          NOTRUN -> [SKIP][197] ([i915#2587] / [i915#2672])
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-17/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][198] ([i915#3555] / [i915#8810])
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][199] ([i915#2672]) +2 other tests skip
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling@pipe-a-valid-mode:
    - shard-rkl:          NOTRUN -> [SKIP][200] ([i915#2672]) +4 other tests skip
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][201] ([i915#2672] / [i915#3555]) +2 other tests skip
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-1/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling@pipe-a-valid-mode:
    - shard-dg2:          NOTRUN -> [SKIP][202] ([i915#2672]) +3 other tests skip
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling@pipe-a-valid-mode.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-wc:
    - shard-rkl:          [PASS][203] -> [SKIP][204] ([i915#1849] / [i915#4098] / [i915#5354]) +2 other tests skip
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-wc.html
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-pwrite:
    - shard-dg2:          [PASS][205] -> [FAIL][206] ([i915#6880])
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg2-1/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-pwrite.html
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-mmap-wc:
    - shard-dg2:          NOTRUN -> [SKIP][207] ([i915#8708]) +27 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt:
    - shard-dg1:          NOTRUN -> [SKIP][208] ([fdo#111825]) +7 other tests skip
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-18/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-blt:
    - shard-mtlp:         NOTRUN -> [SKIP][209] ([i915#1825]) +24 other tests skip
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-pwrite:
    - shard-dg1:          NOTRUN -> [SKIP][210] ([i915#3458]) +5 other tests skip
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-18/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-rte:
    - shard-rkl:          NOTRUN -> [SKIP][211] ([i915#3023]) +3 other tests skip
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-rte.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt:
    - shard-rkl:          NOTRUN -> [SKIP][212] ([i915#1849] / [i915#4098] / [i915#5354]) +2 other tests skip
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-shrfb-draw-mmap-cpu:
    - shard-dg2:          NOTRUN -> [SKIP][213] ([i915#3458]) +20 other tests skip
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-blt:
    - shard-tglu:         NOTRUN -> [SKIP][214] ([fdo#110189])
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-10/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-mmap-gtt:
    - shard-mtlp:         NOTRUN -> [SKIP][215] ([i915#8708]) +6 other tests skip
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-move:
    - shard-tglu:         NOTRUN -> [SKIP][216] ([fdo#109280])
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-4/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-move.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-draw-blt:
    - shard-dg2:          NOTRUN -> [SKIP][217] ([i915#5354]) +27 other tests skip
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt:
    - shard-rkl:          NOTRUN -> [SKIP][218] ([fdo#111825] / [i915#1825]) +12 other tests skip
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-wc:
    - shard-dg1:          NOTRUN -> [SKIP][219] ([i915#8708]) +6 other tests skip
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-14/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-wc.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-dg2:          NOTRUN -> [SKIP][220] ([i915#3555] / [i915#8228])
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_hdr@bpc-switch-suspend.html
    - shard-dg1:          NOTRUN -> [SKIP][221] ([i915#3555] / [i915#8228])
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-18/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-dg2:          NOTRUN -> [SKIP][222] ([i915#4816])
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_panel_fitting@atomic-fastset:
    - shard-dg2:          NOTRUN -> [SKIP][223] ([i915#6301])
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_panel_fitting@atomic-fastset.html

  * igt@kms_plane@plane-panning-bottom-right-suspend:
    - shard-rkl:          NOTRUN -> [SKIP][224] ([i915#4098] / [i915#8825])
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_plane@plane-panning-bottom-right-suspend.html

  * igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-hdmi-a-1:
    - shard-glk:          NOTRUN -> [FAIL][225] ([i915#4573]) +1 other test fail
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk9/igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-hdmi-a-1.html

  * igt@kms_plane_multiple@tiling-yf:
    - shard-dg2:          NOTRUN -> [SKIP][226] ([i915#3555] / [i915#8806])
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_plane_multiple@tiling-yf.html

  * igt@kms_plane_scaling@intel-max-src-size:
    - shard-dg2:          NOTRUN -> [SKIP][227] ([i915#6953])
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_plane_scaling@intel-max-src-size.html

  * igt@kms_plane_scaling@plane-scaler-with-clipping-clamping-modifiers:
    - shard-rkl:          NOTRUN -> [SKIP][228] ([i915#3555] / [i915#4098] / [i915#8152])
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_plane_scaling@plane-scaler-with-clipping-clamping-modifiers.html

  * igt@kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation@pipe-c-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][229] ([i915#5176] / [i915#9423]) +3 other tests skip
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation@pipe-c-hdmi-a-4.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-a-hdmi-a-2:
    - shard-dg2:          NOTRUN -> [SKIP][230] ([i915#5235]) +7 other tests skip
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-a-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][231] ([i915#5235]) +9 other tests skip
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-5:
    - shard-rkl:          NOTRUN -> [SKIP][232] ([i915#4098] / [i915#6953] / [i915#8152])
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_plane_scaling@planes-downscale-factor-0-5.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-20x20:
    - shard-rkl:          NOTRUN -> [SKIP][233] ([i915#8152]) +1 other test skip
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-20x20.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25:
    - shard-rkl:          NOTRUN -> [SKIP][234] ([i915#6953] / [i915#8152])
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-c-hdmi-a-1:
    - shard-dg1:          NOTRUN -> [SKIP][235] ([i915#5235]) +11 other tests skip
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-c-hdmi-a-1.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75@pipe-a-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][236] ([i915#5235]) +12 other tests skip
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75@pipe-a-edp-1.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75@pipe-d-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][237] ([i915#3555] / [i915#5235]) +2 other tests skip
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75@pipe-d-edp-1.html

  * igt@kms_properties@plane-properties-atomic:
    - shard-rkl:          [PASS][238] -> [SKIP][239] ([i915#1849])
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_properties@plane-properties-atomic.html
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_properties@plane-properties-atomic.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-fully-sf:
    - shard-dg2:          NOTRUN -> [SKIP][240] ([i915#9683]) +1 other test skip
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area:
    - shard-rkl:          NOTRUN -> [SKIP][241] ([fdo#111068] / [i915#9683]) +1 other test skip
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html

  * igt@kms_psr@psr2_primary_blt:
    - shard-dg2:          NOTRUN -> [SKIP][242] ([i915#9681]) +1 other test skip
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@kms_psr@psr2_primary_blt.html
    - shard-rkl:          NOTRUN -> [SKIP][243] ([i915#9673]) +1 other test skip
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_psr@psr2_primary_blt.html

  * igt@kms_rotation_crc@bad-tiling:
    - shard-dg2:          NOTRUN -> [SKIP][244] ([i915#4235]) +2 other tests skip
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-6/igt@kms_rotation_crc@bad-tiling.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-0:
    - shard-rkl:          NOTRUN -> [SKIP][245] ([i915#5289])
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_rotation_crc@primary-4-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-270:
    - shard-mtlp:         NOTRUN -> [SKIP][246] ([i915#4235])
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-8/igt@kms_rotation_crc@primary-y-tiled-reflect-x-270.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180:
    - shard-mtlp:         NOTRUN -> [SKIP][247] ([i915#5289]) +1 other test skip
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-dg2:          NOTRUN -> [SKIP][248] ([i915#4235] / [i915#5190])
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_setmode@invalid-clone-exclusive-crtc:
    - shard-mtlp:         NOTRUN -> [SKIP][249] ([i915#3555] / [i915#8823])
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_setmode@invalid-clone-exclusive-crtc.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-dg1:          NOTRUN -> [SKIP][250] ([i915#8623])
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@kms_tiled_display@basic-test-pattern.html
    - shard-dg2:          NOTRUN -> [SKIP][251] ([i915#8623])
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@kms_tiled_display@basic-test-pattern.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-rkl:          NOTRUN -> [SKIP][252] ([fdo#109309])
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_tv_load_detect@load-detect.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1:
    - shard-mtlp:         [PASS][253] -> [FAIL][254] ([i915#9196])
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-2/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1:
    - shard-rkl:          [PASS][255] -> [FAIL][256] ([i915#9196])
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html

  * igt@kms_vblank@ts-continuation-dpms-suspend:
    - shard-rkl:          NOTRUN -> [SKIP][257] ([i915#4098]) +8 other tests skip
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_vblank@ts-continuation-dpms-suspend.html

  * igt@kms_vblank@ts-continuation-suspend@pipe-a-edp-1:
    - shard-mtlp:         [PASS][258] -> [ABORT][259] ([i915#9414])
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-8/igt@kms_vblank@ts-continuation-suspend@pipe-a-edp-1.html
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_vblank@ts-continuation-suspend@pipe-a-edp-1.html

  * igt@kms_writeback@writeback-fb-id:
    - shard-dg1:          NOTRUN -> [SKIP][260] ([i915#2437])
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@kms_writeback@writeback-fb-id.html
    - shard-apl:          NOTRUN -> [SKIP][261] ([fdo#109271] / [i915#2437])
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-apl1/igt@kms_writeback@writeback-fb-id.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-mtlp:         NOTRUN -> [SKIP][262] ([i915#2437])
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@kms_writeback@writeback-invalid-parameters.html

  * igt@perf@gen8-unprivileged-single-ctx-counters:
    - shard-rkl:          [PASS][263] -> [SKIP][264] ([i915#2436])
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@perf@gen8-unprivileged-single-ctx-counters.html
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@perf@gen8-unprivileged-single-ctx-counters.html

  * igt@perf@per-context-mode-unprivileged:
    - shard-dg2:          NOTRUN -> [SKIP][265] ([fdo#109289]) +4 other tests skip
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@perf@per-context-mode-unprivileged.html

  * igt@perf@unprivileged-single-ctx-counters:
    - shard-mtlp:         NOTRUN -> [SKIP][266] ([fdo#109289]) +1 other test skip
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@perf@unprivileged-single-ctx-counters.html

  * igt@perf_pmu@busy-double-start@vcs1:
    - shard-dg1:          [PASS][267] -> [FAIL][268] ([i915#4349]) +2 other tests fail
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-14/igt@perf_pmu@busy-double-start@vcs1.html
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-16/igt@perf_pmu@busy-double-start@vcs1.html

  * igt@perf_pmu@event-wait@rcs0:
    - shard-dg2:          NOTRUN -> [SKIP][269] ([fdo#112283]) +1 other test skip
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@perf_pmu@event-wait@rcs0.html

  * igt@perf_pmu@frequency@gt0:
    - shard-dg2:          NOTRUN -> [FAIL][270] ([i915#6806])
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-2/igt@perf_pmu@frequency@gt0.html

  * igt@perf_pmu@rc6@other-idle-gt0:
    - shard-dg2:          NOTRUN -> [SKIP][271] ([i915#8516])
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@perf_pmu@rc6@other-idle-gt0.html

  * igt@prime_vgem@basic-fence-mmap:
    - shard-dg2:          NOTRUN -> [SKIP][272] ([i915#3708] / [i915#4077])
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@prime_vgem@basic-fence-mmap.html

  * igt@prime_vgem@basic-fence-read:
    - shard-dg2:          NOTRUN -> [SKIP][273] ([i915#3291] / [i915#3708]) +2 other tests skip
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@prime_vgem@basic-fence-read.html
    - shard-rkl:          NOTRUN -> [SKIP][274] ([fdo#109295] / [i915#3291] / [i915#3708])
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@prime_vgem@basic-fence-read.html

  * igt@prime_vgem@basic-read:
    - shard-dg1:          NOTRUN -> [SKIP][275] ([i915#3708])
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-15/igt@prime_vgem@basic-read.html

  * igt@prime_vgem@fence-write-hang:
    - shard-dg2:          NOTRUN -> [SKIP][276] ([i915#3708])
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@prime_vgem@fence-write-hang.html

  * igt@v3d/v3d_job_submission@array-job-submission:
    - shard-dg2:          NOTRUN -> [SKIP][277] ([i915#2575]) +14 other tests skip
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@v3d/v3d_job_submission@array-job-submission.html

  * igt@v3d/v3d_submit_csd@bad-flag:
    - shard-rkl:          NOTRUN -> [SKIP][278] ([fdo#109315]) +4 other tests skip
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@v3d/v3d_submit_csd@bad-flag.html

  * igt@v3d/v3d_submit_csd@bad-multisync-extension:
    - shard-dg1:          NOTRUN -> [SKIP][279] ([i915#2575]) +1 other test skip
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@v3d/v3d_submit_csd@bad-multisync-extension.html

  * igt@v3d/v3d_submit_csd@multi-and-single-sync:
    - shard-apl:          NOTRUN -> [SKIP][280] ([fdo#109271]) +32 other tests skip
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-apl2/igt@v3d/v3d_submit_csd@multi-and-single-sync.html

  * igt@v3d/v3d_submit_csd@valid-multisync-submission:
    - shard-mtlp:         NOTRUN -> [SKIP][281] ([i915#2575]) +8 other tests skip
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-7/igt@v3d/v3d_submit_csd@valid-multisync-submission.html

  * igt@vc4/vc4_mmap@mmap-bo:
    - shard-dg2:          NOTRUN -> [SKIP][282] ([i915#7711]) +12 other tests skip
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@vc4/vc4_mmap@mmap-bo.html
    - shard-tglu:         NOTRUN -> [SKIP][283] ([i915#2575])
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-4/igt@vc4/vc4_mmap@mmap-bo.html

  * igt@vc4/vc4_purgeable_bo@mark-purgeable:
    - shard-dg1:          NOTRUN -> [SKIP][284] ([i915#7711]) +2 other tests skip
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-14/igt@vc4/vc4_purgeable_bo@mark-purgeable.html

  * igt@vc4/vc4_purgeable_bo@mark-willneed:
    - shard-rkl:          NOTRUN -> [SKIP][285] ([i915#7711]) +3 other tests skip
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@vc4/vc4_purgeable_bo@mark-willneed.html

  * igt@vc4/vc4_wait_bo@used-bo-0ns:
    - shard-mtlp:         NOTRUN -> [SKIP][286] ([i915#7711]) +4 other tests skip
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-2/igt@vc4/vc4_wait_bo@used-bo-0ns.html

  
#### Possible fixes ####

  * {igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_optimistic}:
    - shard-tglu:         [TIMEOUT][287] -> [PASS][288] +1 other test pass
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-tglu-9/igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_optimistic.html
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-10/igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_optimistic.html

  * {igt@drm_mm@drm_mm@drm_test_mm_init}:
    - shard-glk:          [TIMEOUT][289] -> [PASS][290]
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-glk5/igt@drm_mm@drm_mm@drm_test_mm_init.html
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-glk8/igt@drm_mm@drm_mm@drm_test_mm_init.html

  * igt@fbdev@write:
    - shard-rkl:          [SKIP][291] ([i915#2582]) -> [PASS][292]
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@fbdev@write.html
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@fbdev@write.html

  * igt@gem_eio@reset-stress:
    - shard-dg1:          [FAIL][293] ([i915#5784]) -> [PASS][294]
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-16/igt@gem_eio@reset-stress.html
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-17/igt@gem_eio@reset-stress.html

  * igt@gem_eio@wait-wedge-immediate:
    - shard-mtlp:         [ABORT][295] ([i915#9414]) -> [PASS][296]
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-1/igt@gem_eio@wait-wedge-immediate.html
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@gem_eio@wait-wedge-immediate.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-rkl:          [FAIL][297] ([i915#2842]) -> [PASS][298]
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-4/igt@gem_exec_fair@basic-none-share@rcs0.html
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-tglu:         [FAIL][299] ([i915#2842]) -> [PASS][300] +1 other test pass
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-tglu-4/igt@gem_exec_fair@basic-pace-solo@rcs0.html
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-2/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@gem_exec_reloc@basic-write-read-active:
    - shard-rkl:          [SKIP][301] ([i915#3281]) -> [PASS][302] +4 other tests pass
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@gem_exec_reloc@basic-write-read-active.html
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_exec_reloc@basic-write-read-active.html

  * igt@gem_set_tiling_vs_blt@untiled-to-tiled:
    - shard-rkl:          [SKIP][303] ([i915#8411]) -> [PASS][304]
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@gem_set_tiling_vs_blt@untiled-to-tiled.html
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_set_tiling_vs_blt@untiled-to-tiled.html

  * igt@gem_tiled_partial_pwrite_pread@writes:
    - shard-rkl:          [SKIP][305] ([i915#3282]) -> [PASS][306] +1 other test pass
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-4/igt@gem_tiled_partial_pwrite_pread@writes.html
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_tiled_partial_pwrite_pread@writes.html

  * igt@gen9_exec_parse@bb-start-far:
    - shard-rkl:          [SKIP][307] ([i915#2527]) -> [PASS][308] +2 other tests pass
   [307]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@gen9_exec_parse@bb-start-far.html
   [308]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gen9_exec_parse@bb-start-far.html

  * {igt@i915_pm_rc6_residency@rc6-idle@gt0-rcs0}:
    - shard-dg1:          [FAIL][309] -> [PASS][310]
   [309]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-18/igt@i915_pm_rc6_residency@rc6-idle@gt0-rcs0.html
   [310]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-17/igt@i915_pm_rc6_residency@rc6-idle@gt0-rcs0.html

  * {igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0}:
    - shard-rkl:          [WARN][311] -> [PASS][312]
   [311]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0.html
   [312]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0.html

  * igt@i915_pm_sseu@full-enable:
    - shard-rkl:          [SKIP][313] ([i915#4387]) -> [PASS][314]
   [313]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@i915_pm_sseu@full-enable.html
   [314]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@i915_pm_sseu@full-enable.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-mtlp:         [FAIL][315] ([i915#5138]) -> [PASS][316]
   [315]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-2/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
   [316]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-6/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - shard-tglu:         [FAIL][317] ([i915#3743]) -> [PASS][318]
   [317]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-tglu-2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
   [318]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-tglu-4/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * {igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y-tiled-gen12-rc-ccs}:
    - shard-rkl:          [SKIP][319] ([i915#4098]) -> [PASS][320] +14 other tests pass
   [319]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y-tiled-gen12-rc-ccs.html
   [320]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y-tiled-gen12-rc-ccs.html

  * igt@kms_cursor_legacy@basic-flip-after-cursor-varying-size:
    - shard-rkl:          [SKIP][321] ([i915#1845] / [i915#4098]) -> [PASS][322] +36 other tests pass
   [321]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_cursor_legacy@basic-flip-after-cursor-varying-size.html
   [322]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_cursor_legacy@basic-flip-after-cursor-varying-size.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-pwrite:
    - shard-rkl:          [SKIP][323] ([i915#1849] / [i915#4098] / [i915#5354]) -> [PASS][324] +11 other tests pass
   [323]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-pwrite.html
   [324]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-pwrite.html

  * {igt@kms_pm_rpm@cursor-dpms}:
    - shard-rkl:          [SKIP][325] ([i915#1849]) -> [PASS][326] +1 other test pass
   [325]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_pm_rpm@cursor-dpms.html
   [326]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_pm_rpm@cursor-dpms.html

  * {igt@kms_pm_rpm@dpms-lpsp}:
    - shard-dg1:          [SKIP][327] ([i915#9519]) -> [PASS][328] +2 other tests pass
   [327]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-14/igt@kms_pm_rpm@dpms-lpsp.html
   [328]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-19/igt@kms_pm_rpm@dpms-lpsp.html

  * {igt@kms_pm_rpm@drm-resources-equal}:
    - shard-rkl:          [SKIP][329] ([fdo#109308]) -> [PASS][330]
   [329]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_pm_rpm@drm-resources-equal.html
   [330]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_pm_rpm@drm-resources-equal.html

  * {igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait}:
    - shard-rkl:          [SKIP][331] ([i915#9519]) -> [PASS][332]
   [331]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html
   [332]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html

  * {igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_rotate_180}:
    - shard-rkl:          [TIMEOUT][333] -> [PASS][334] +1 other test pass
   [333]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_rotate_180.html
   [334]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_rotate_180.html

  * {igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options}:
    - shard-apl:          [TIMEOUT][335] -> [PASS][336] +1 other test pass
   [335]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-apl6/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html
   [336]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-apl6/igt@kms_selftest@drm_cmdline_parser@drm_test_cmdline_tv_options.html

  * {igt@kms_selftest@drm_damage_helper@drm_test_damage_iter_single_damage_fractional_src}:
    - shard-snb:          [TIMEOUT][337] -> [PASS][338] +2 other tests pass
   [337]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-snb2/igt@kms_selftest@drm_damage_helper@drm_test_damage_iter_single_damage_fractional_src.html
   [338]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-snb5/igt@kms_selftest@drm_damage_helper@drm_test_damage_iter_single_damage_fractional_src.html

  * {igt@kms_selftest@drm_dp_mst_helper@drm_test_dp_mst_sideband_msg_req_decode}:
    - shard-dg2:          [TIMEOUT][339] -> [PASS][340]
   [339]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg2-2/igt@kms_selftest@drm_dp_mst_helper@drm_test_dp_mst_sideband_msg_req_decode.html
   [340]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_selftest@drm_dp_mst_helper@drm_test_dp_mst_sideband_msg_req_decode.html

  * {igt@kms_universal_plane@cursor-fb-leak@pipe-c-hdmi-a-4}:
    - shard-dg1:          [FAIL][341] ([i915#9196]) -> [PASS][342]
   [341]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg1-16/igt@kms_universal_plane@cursor-fb-leak@pipe-c-hdmi-a-4.html
   [342]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg1-18/igt@kms_universal_plane@cursor-fb-leak@pipe-c-hdmi-a-4.html

  * igt@perf@gen12-group-exclusive-stream-ctx-handle:
    - shard-rkl:          [SKIP][343] ([fdo#109289]) -> [PASS][344] +1 other test pass
   [343]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@perf@gen12-group-exclusive-stream-ctx-handle.html
   [344]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@perf@gen12-group-exclusive-stream-ctx-handle.html

  * igt@perf_pmu@busy-double-start@bcs0:
    - shard-mtlp:         [FAIL][345] ([i915#4349]) -> [PASS][346] +1 other test pass
   [345]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-mtlp-4/igt@perf_pmu@busy-double-start@bcs0.html
   [346]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-mtlp-4/igt@perf_pmu@busy-double-start@bcs0.html

  * igt@prime_vgem@basic-write:
    - shard-rkl:          [SKIP][347] ([fdo#109295] / [i915#3291] / [i915#3708]) -> [PASS][348]
   [347]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-4/igt@prime_vgem@basic-write.html
   [348]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@prime_vgem@basic-write.html

  
#### Warnings ####

  * igt@gem_ccs@ctrl-surf-copy:
    - shard-rkl:          [SKIP][349] ([i915#3555]) -> [SKIP][350] ([i915#7957])
   [349]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@gem_ccs@ctrl-surf-copy.html
   [350]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_ccs@ctrl-surf-copy.html

  * igt@gem_exec_fair@basic-none@bcs0:
    - shard-rkl:          [FAIL][351] ([i915#2842]) -> [SKIP][352] ([i915#9591])
   [351]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-4/igt@gem_exec_fair@basic-none@bcs0.html
   [352]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@gem_exec_fair@basic-none@bcs0.html

  * igt@kms_big_fb@4-tiled-16bpp-rotate-0:
    - shard-rkl:          [SKIP][353] ([i915#1845] / [i915#4098]) -> [SKIP][354] ([i915#5286]) +6 other tests skip
   [353]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_big_fb@4-tiled-16bpp-rotate-0.html
   [354]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_big_fb@4-tiled-16bpp-rotate-0.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-rkl:          [SKIP][355] ([i915#5286]) -> [SKIP][356] ([i915#1845] / [i915#4098]) +4 other tests skip
   [355]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-4/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
   [356]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@linear-64bpp-rotate-90:
    - shard-rkl:          [SKIP][357] ([i915#1845] / [i915#4098]) -> [SKIP][358] ([fdo#111614] / [i915#3638]) +3 other tests skip
   [357]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_big_fb@linear-64bpp-rotate-90.html
   [358]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_big_fb@linear-64bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-270:
    - shard-rkl:          [SKIP][359] ([fdo#111614] / [i915#3638]) -> [SKIP][360] ([i915#1845] / [i915#4098]) +1 other test skip
   [359]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_big_fb@y-tiled-64bpp-rotate-270.html
   [360]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_big_fb@y-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-16bpp-rotate-270:
    - shard-rkl:          [SKIP][361] ([fdo#110723]) -> [SKIP][362] ([i915#1845] / [i915#4098]) +4 other tests skip
   [361]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_big_fb@yf-tiled-16bpp-rotate-270.html
   [362]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_big_fb@yf-tiled-16bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-rkl:          [SKIP][363] ([i915#1845] / [i915#4098]) -> [SKIP][364] ([fdo#110723]) +4 other tests skip
   [363]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
   [364]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_color@deep-color:
    - shard-rkl:          [SKIP][365] ([i915#9608]) -> [SKIP][366] ([i915#3555])
   [365]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_color@deep-color.html
   [366]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_color@deep-color.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-rkl:          [SKIP][367] ([i915#7118]) -> [SKIP][368] ([i915#1845] / [i915#4098]) +1 other test skip
   [367]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_content_protection@atomic-dpms.html
   [368]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@srm:
    - shard-rkl:          [SKIP][369] ([i915#1845] / [i915#4098]) -> [SKIP][370] ([i915#7118]) +1 other test skip
   [369]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_content_protection@srm.html
   [370]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_content_protection@srm.html

  * igt@kms_content_protection@type1:
    - shard-dg2:          [SKIP][371] ([i915#7118]) -> [SKIP][372] ([i915#7118] / [i915#7162])
   [371]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg2-6/igt@kms_content_protection@type1.html
   [372]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-11/igt@kms_content_protection@type1.html

  * igt@kms_cursor_crc@cursor-random-512x512:
    - shard-rkl:          [SKIP][373] ([i915#1845] / [i915#4098]) -> [SKIP][374] ([i915#3359])
   [373]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_cursor_crc@cursor-random-512x512.html
   [374]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_cursor_crc@cursor-random-512x512.html

  * igt@kms_cursor_crc@cursor-rapid-movement-32x32:
    - shard-rkl:          [SKIP][375] ([i915#1845] / [i915#4098]) -> [SKIP][376] ([i915#3555]) +8 other tests skip
   [375]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_cursor_crc@cursor-rapid-movement-32x32.html
   [376]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_cursor_crc@cursor-rapid-movement-32x32.html

  * igt@kms_cursor_crc@cursor-sliding-512x512:
    - shard-rkl:          [SKIP][377] ([i915#3359]) -> [SKIP][378] ([i915#1845] / [i915#4098])
   [377]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_cursor_crc@cursor-sliding-512x512.html
   [378]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_cursor_crc@cursor-sliding-512x512.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
    - shard-rkl:          [SKIP][379] ([i915#1845] / [i915#4098]) -> [SKIP][380] ([fdo#111825]) +3 other tests skip
   [379]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html
   [380]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-legacy:
    - shard-rkl:          [SKIP][381] ([fdo#111825]) -> [SKIP][382] ([i915#1845] / [i915#4098]) +4 other tests skip
   [381]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html
   [382]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions:
    - shard-rkl:          [SKIP][383] ([i915#1845] / [i915#4098]) -> [SKIP][384] ([fdo#111767] / [fdo#111825])
   [383]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html
   [384]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-1/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
    - shard-rkl:          [SKIP][385] ([i915#1845] / [i915#4098]) -> [SKIP][386] ([i915#4103])
   [385]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
   [386]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html

  * igt@kms_dsc@dsc-with-formats:
    - shard-rkl:          [SKIP][387] ([i915#1845] / [i915#4098]) -> [SKIP][388] ([i915#3555] / [i915#3840])
   [387]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_dsc@dsc-with-formats.html
   [388]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_dsc@dsc-with-formats.html

  * igt@kms_force_connector_basic@force-load-detect:
    - shard-rkl:          [SKIP][389] ([fdo#109285]) -> [SKIP][390] ([fdo#109285] / [i915#4098])
   [389]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_force_connector_basic@force-load-detect.html
   [390]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt:
    - shard-rkl:          [SKIP][391] ([i915#1849] / [i915#4098] / [i915#5354]) -> [SKIP][392] ([fdo#111825] / [i915#1825]) +37 other tests skip
   [391]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt.html
   [392]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-shrfb-fliptrack-mmap-gtt:
    - shard-rkl:          [SKIP][393] ([i915#1849] / [i915#4098] / [i915#5354]) -> [SKIP][394] ([fdo#111825]) +1 other test skip
   [393]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_frontbuffer_tracking@fbc-2p-shrfb-fliptrack-mmap-gtt.html
   [394]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_frontbuffer_tracking@fbc-2p-shrfb-fliptrack-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-4:
    - shard-rkl:          [SKIP][395] ([i915#1849] / [i915#4098] / [i915#5354]) -> [SKIP][396] ([i915#5439])
   [395]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_frontbuffer_tracking@fbc-tiling-4.html
   [396]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_frontbuffer_tracking@fbc-tiling-4.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-onoff:
    - shard-rkl:          [SKIP][397] ([fdo#111825] / [i915#1825]) -> [SKIP][398] ([i915#1849] / [i915#4098] / [i915#5354]) +21 other tests skip
   [397]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-onoff.html
   [398]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-4:
    - shard-rkl:          [SKIP][399] ([i915#5439]) -> [SKIP][400] ([i915#1849] / [i915#4098] / [i915#5354])
   [399]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html
   [400]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-shrfb-msflip-blt:
    - shard-rkl:          [SKIP][401] ([i915#1849] / [i915#4098] / [i915#5354]) -> [SKIP][402] ([i915#3023]) +33 other tests skip
   [401]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_frontbuffer_tracking@psr-1p-primscrn-shrfb-msflip-blt.html
   [402]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_frontbuffer_tracking@psr-1p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary:
    - shard-rkl:          [SKIP][403] ([i915#3023]) -> [SKIP][404] ([i915#1849] / [i915#4098] / [i915#5354]) +17 other tests skip
   [403]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary.html
   [404]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary.html

  * igt@kms_hdr@static-toggle:
    - shard-rkl:          [SKIP][405] ([i915#3555] / [i915#8228]) -> [SKIP][406] ([i915#1845] / [i915#4098]) +1 other test skip
   [405]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_hdr@static-toggle.html
   [406]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_hdr@static-toggle.html

  * igt@kms_hdr@static-toggle-suspend:
    - shard-rkl:          [SKIP][407] ([i915#1845] / [i915#4098]) -> [SKIP][408] ([i915#3555] / [i915#8228]) +1 other test skip
   [407]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-5/igt@kms_hdr@static-toggle-suspend.html
   [408]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-7/igt@kms_hdr@static-toggle-suspend.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-rkl:          [SKIP][409] ([i915#4070] / [i915#4816]) -> [SKIP][410] ([i915#4816])
   [409]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-1/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
   [410]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-4/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_plane_lowres@tiling-yf:
    - shard-rkl:          [SKIP][411] ([i915#3555]) -> [SKIP][412] ([i915#1845] / [i915#4098]) +2 other tests skip
   [411]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_plane_lowres@tiling-yf.html
   [412]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_plane_lowres@tiling-yf.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
    - shard-rkl:          [SKIP][413] ([i915#5289]) -> [SKIP][414] ([i915#1845] / [i915#4098])
   [413]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-rkl-7/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html
   [414]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-rkl-5/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html

  * igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem:
    - shard-dg2:          [CRASH][415] ([i915#9351]) -> [INCOMPLETE][416] ([i915#5493])
   [415]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7594/shard-dg2-1/igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem.html
   [416]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/shard-dg2-1/igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109302]: https://bugs.freedesktop.org/show_bug.cgi?id=109302
  [fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308
  [fdo#109309]: https://bugs.freedesktop.org/show_bug.cgi?id=109309
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111767]: https://bugs.freedesktop.org/show_bug.cgi?id=111767
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112283]: https://bugs.freedesktop.org/show_bug.cgi?id=112283
  [i915#118]: https://gitlab.freedesktop.org/drm/intel/issues/118
  [i915#1769]: https://gitlab.freedesktop.org/drm/intel/issues/1769
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
  [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
  [i915#1850]: https://gitlab.freedesktop.org/drm/intel/issues/1850
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2436]: https://gitlab.freedesktop.org/drm/intel/issues/2436
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2582]: https://gitlab.freedesktop.org/drm/intel/issues/2582
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
  [i915#3023]: https://gitlab.freedesktop.org/drm/intel/issues/3023
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/intel/issues/3299
  [i915#3318]: https://gitlab.freedesktop.org/drm/intel/issues/3318
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
  [i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
  [i915#3546]: https://gitlab.freedesktop.org/drm/intel/issues/3546
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
  [i915#3826]: https://gitlab.freedesktop.org/drm/intel/issues/3826
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4087]: https://gitlab.freedesktop.org/drm/intel/issues/4087
  [i915#4098]: https://gitlab.freedesktop.org/drm/intel/issues/4098
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4235]: https://gitlab.freedesktop.org/drm/intel/issues/4235
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4275]: https://gitlab.freedesktop.org/drm/intel/issues/4275
  [i915#4281]: https://gitlab.freedesktop.org/drm/intel/issues/4281
  [i915#4349]: https://gitlab.freedesktop.org/drm/intel/issues/4349
  [i915#4387]: https://gitlab.freedesktop.org/drm/intel/issues/4387
  [i915#4473]: https://gitlab.freedesktop.org/drm/intel/issues/4473
  [i915#4525]: https://gitlab.freedesktop.org/drm/intel/issues/4525
  [i915#4537]: https://gitlab.freedesktop.org/drm/intel/issues/4537
  [i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
  [i915#4573]: https://gitlab.freedesktop.org/drm/intel/issues/4573
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4767]: https://gitlab.freedesktop.org/drm/intel/issues/4767
  [i915#4812]: https://gitlab.freedesktop.org/drm/intel/issues/4812
  [i915#4816]: https://gitlab.freedesktop.org/drm/intel/issues/4816
  [i915#4852]: https://gitlab.freedesktop.org/drm/intel/issues/4852
  [i915#4860]: https://gitlab.freedesktop.org/drm/intel/issues/4860
  [i915#4879]: https://gitlab.freedesktop.org/drm/intel/issues/4879
  [i915#4880]: https://gitlab.freedesktop.org/drm/intel/issues/4880
  [i915#4881]: https://gitlab.freedesktop.org/drm/intel/issues/4881
  [i915#4885]: https://gitlab.freedesktop.org/drm/intel/issues/4885
  [i915#5030]: https://gitlab.freedesktop.org/drm/intel/issues/5030
  [i915#5138]: https://gitlab.freedesktop.org/drm/intel/issues/5138
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5235]: https://gitlab

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10212/index.html

[-- Attachment #2: Type: text/html, Size: 123302 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [igt-dev] [PATCH v1 04/13] xe_query: Add missing include.
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 04/13] xe_query: Add missing include Francois Dugast
@ 2023-11-21 17:00   ` Kamil Konieczny
  2023-11-28 17:48     ` Francois Dugast
  0 siblings, 1 reply; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-21 17:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-16 at 14:53:39 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 

LGTM, please remove final dot from subject line at merge.

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> When trying to use xe_for_each_mem_region from a caller
> that does not include igt_aux.h, the following build error
> occurs:
> 
> ../lib/xe/xe_query.h:76:38: error: implicit declaration of function ‘igt_fls’ [-Werror=implicit-function-declaration]
>    76 |         for (uint64_t __i = 0; __i < igt_fls(__memreg); __i++) \
> 
> So, to avoid a dependency chain, let's include it directly
> from the file that uses the helper.
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  lib/xe/xe_query.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index 38e9aa440..7b3fc3100 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -11,6 +11,8 @@
>  
>  #include <stdint.h>
>  #include <xe_drm.h>
> +
> +#include "igt_aux.h"
>  #include "igt_list.h"
>  #include "igt_sizes.h"
>  
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op Francois Dugast
@ 2023-11-21 17:01   ` Kamil Konieczny
  0 siblings, 0 replies; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-21 17:01 UTC (permalink / raw)
  To: igt-dev

Hi Francois,
On 2023-11-16 at 14:53:36 +0000, Francois Dugast wrote:
> Align with commit ("drm/xe/uapi: Extend drm_xe_vm_bind_op")
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  include/drm-uapi/xe_drm.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index af32ec161..5ef16f16e 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -613,6 +613,9 @@ struct drm_xe_vm_destroy {
>  };
>  
>  struct drm_xe_vm_bind_op {
> +	/** @extensions: Pointer to the first extension struct, if any */
> +	__u64 extensions;
> +
>  	/**
>  	 * @obj: GEM object to operate on, MBZ for MAP_USERPTR, MBZ for UNMAP
>  	 */
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version Francois Dugast
@ 2023-11-21 17:13   ` Kamil Konieczny
  2023-11-28 16:11     ` Francois Dugast
  0 siblings, 1 reply; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-21 17:13 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-16 at 14:53:37 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Let's unify the call instead of having 2 separate
> options for the same goal.
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  lib/xe/xe_ioctl.c           | 15 ---------------
>  lib/xe/xe_ioctl.h           |  1 -
>  tests/intel/xe_perf_pmu.c   |  4 ++--
>  tests/intel/xe_spin_batch.c |  2 +-
>  tests/intel/xe_vm.c         |  9 +++++----
>  5 files changed, 8 insertions(+), 23 deletions(-)
> 
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index 738c4ffdb..78d431ab2 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -253,21 +253,6 @@ uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags)
>  	return handle;
>  }
>  
> -uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
> -{
> -	struct drm_xe_gem_create create = {
> -		.vm_id = vm,
> -		.size = size,
> -		.flags = vram_if_possible(fd, gt),
> -	};
> -	int err;
> -
> -	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
> -	igt_assert_eq(err, 0);
> -
> -	return create.handle;
> -}
> -
>  uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
>  {
>  	struct drm_xe_engine_class_instance instance = {
> diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> index a9171bcf7..fb191d98f 100644
> --- a/lib/xe/xe_ioctl.h
> +++ b/lib/xe/xe_ioctl.h
> @@ -67,7 +67,6 @@ void xe_vm_destroy(int fd, uint32_t vm);
>  uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
>  			      uint32_t *handle);
>  uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags);
> -uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
>  uint32_t xe_exec_queue_create(int fd, uint32_t vm,
>  			  struct drm_xe_engine_class_instance *instance,
>  			  uint64_t ext);
> diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
> index e9d05cf2b..2c549f778 100644
> --- a/tests/intel/xe_perf_pmu.c
> +++ b/tests/intel/xe_perf_pmu.c
> @@ -103,7 +103,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create(fd, eci->gt_id, vm, bo_size);
> +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
----------------------------------------------------------------- ^
s/fd, 0/fd, eci->gt_id/
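(Spelled out — a sketch of the line with the suggested substitution applied, so the placement stays on the engine's GT as in the removed xe_bo_create():)

```c
/* sketch only: the s/// above applied to the new call */
bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
```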

>  	spin = xe_bo_map(fd, bo, bo_size);
>  
>  	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> @@ -223,7 +223,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  	bo_size = sizeof(*data) * num_placements;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create(fd, gt, vm, bo_size);
> +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
----------------------------------------------------------------- ^
s/fd, 0/fd, gt/
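(Spelled out — a sketch of the corrected call, keeping the gt passed into test_engine_group_busyness() rather than hardcoding 0:)

```c
/* sketch only: the s/// above applied to the new call */
bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, gt));
```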

>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < num_placements; i++) {
> diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
> index 6ab604d9b..261fde9af 100644
> --- a/tests/intel/xe_spin_batch.c
> +++ b/tests/intel/xe_spin_batch.c
> @@ -169,7 +169,7 @@ static void xe_spin_fixed_duration(int fd)
>  	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
>  	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
>  	bo_size = ALIGN(sizeof(*spin) + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> -	bo = xe_bo_create(fd, 0, vm, bo_size);
> +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
>  	spin = xe_bo_map(fd, bo, bo_size);
>  	spin_addr = intel_allocator_alloc_with_strategy(ahnd, bo, bo_size, 0,
>  							ALLOC_STRATEGY_LOW_TO_HIGH);
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 05e8e7516..eedd05b57 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -267,7 +267,7 @@ static void test_partial_unbinds(int fd)
>  {
>  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	size_t bo_size = 3 * xe_get_default_alignment(fd);
> -	uint32_t bo = xe_bo_create(fd, 0, vm, bo_size);
> +	uint32_t bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
>  	uint64_t unbind_size = bo_size / 3;
>  	uint64_t addr = 0x1a0000;
>  
> @@ -316,7 +316,7 @@ static void unbind_all(int fd, int n_vmas)
>  	};
>  
>  	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> -	bo = xe_bo_create(fd, 0, vm, bo_size);
> +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
>  
>  	for (i = 0; i < n_vmas; ++i)
>  		xe_vm_bind_async(fd, vm, 0, bo, 0, addr + i * bo_size,
> @@ -362,6 +362,7 @@ static void userptr_invalid(int fd)
>  	xe_vm_destroy(fd, vm);
>  }
>  
> +

Please remove this extra blank line.

Regards,
Kamil

>  /**
>   * SUBTEST: shared-%s-page
>   * Description: Test shared arg[1] page
> @@ -1575,9 +1576,9 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  		igt_assert(map0 != MAP_FAILED);
>  		igt_assert(map1 != MAP_FAILED);
>  	} else {
> -		bo0 = xe_bo_create(fd, eci->gt_id, vm, bo_size);
> +		bo0 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
>  		map0 = xe_bo_map(fd, bo0, bo_size);
> -		bo1 = xe_bo_create(fd, eci->gt_id, vm, bo_size);
> +		bo1 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
>  		map1 = xe_bo_map(fd, bo1, bo_size);
>  	}
>  	memset(map0, 0, bo_size);
> -- 
> 2.34.1
> 


* Re: [igt-dev] [PATCH v1 03/13] xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 03/13] xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create Francois Dugast
@ 2023-11-21 17:24   ` Kamil Konieczny
  0 siblings, 0 replies; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-21 17:24 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-16 at 14:53:38 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Now that we have only one variant we can unify to the
> simplest version.
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

LGTM, but please apply the corrections from the previous patch.

Regards,
Kamil

> ---
>  benchmarks/gem_wsim.c              |  2 +-
>  lib/igt_draw.c                     |  6 ++---
>  lib/igt_fb.c                       |  6 ++---
>  lib/intel_batchbuffer.c            |  6 ++---
>  lib/intel_blt.c                    |  2 +-
>  lib/intel_bufops.c                 |  2 +-
>  lib/xe/xe_ioctl.c                  |  8 +++---
>  lib/xe/xe_ioctl.h                  |  6 ++---
>  lib/xe/xe_spin.c                   |  8 +++---
>  tests/intel/api_intel_allocator.c  |  4 +--
>  tests/intel/kms_big_fb.c           | 22 ++++++++--------
>  tests/intel/kms_ccs.c              |  4 +--
>  tests/intel/xe_ccs.c               | 12 ++++-----
>  tests/intel/xe_copy_basic.c        |  8 +++---
>  tests/intel/xe_dma_buf_sync.c      |  4 +--
>  tests/intel/xe_drm_fdinfo.c        |  6 ++---
>  tests/intel/xe_evict.c             | 40 +++++++++++++++---------------
>  tests/intel/xe_evict_ccs.c         |  6 ++---
>  tests/intel/xe_exec_balancer.c     |  6 ++---
>  tests/intel/xe_exec_basic.c        |  3 +--
>  tests/intel/xe_exec_compute_mode.c |  4 +--
>  tests/intel/xe_exec_fault_mode.c   | 10 ++++----
>  tests/intel/xe_exec_reset.c        | 16 ++++++------
>  tests/intel/xe_exec_store.c        | 12 ++++-----
>  tests/intel/xe_exec_threads.c      | 12 ++++-----
>  tests/intel/xe_exercise_blt.c      |  4 +--
>  tests/intel/xe_guc_pc.c            |  4 +--
>  tests/intel/xe_intel_bb.c          |  2 +-
>  tests/intel/xe_mmap.c              | 32 ++++++++++++------------
>  tests/intel/xe_noexec_ping_pong.c  |  4 +--
>  tests/intel/xe_perf_pmu.c          |  4 +--
>  tests/intel/xe_pm.c                |  6 ++---
>  tests/intel/xe_pm_residency.c      |  4 +--
>  tests/intel/xe_prime_self_import.c | 28 ++++++++++-----------
>  tests/intel/xe_spin_batch.c        |  2 +-
>  tests/intel/xe_vm.c                | 35 +++++++++++++-------------
>  tests/intel/xe_waitfence.c         | 20 +++++++--------
>  tests/kms_addfb_basic.c            |  2 +-
>  tests/kms_getfb.c                  |  2 +-
>  39 files changed, 181 insertions(+), 183 deletions(-)
> 
> diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
> index df4850086..d6d3deb5f 100644
> --- a/benchmarks/gem_wsim.c
> +++ b/benchmarks/gem_wsim.c
> @@ -1734,7 +1734,7 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
>  	struct dep_entry *dep;
>  	int i;
>  
> -	w->bb_handle = xe_bo_create_flags(fd, vm->id, PAGE_SIZE,
> +	w->bb_handle = xe_bo_create(fd, vm->id, PAGE_SIZE,
>  				visible_vram_if_possible(fd, eq->hwe_list[0].gt_id));
>  	w->xe.data = xe_bo_map(fd, w->bb_handle, PAGE_SIZE);
>  	w->xe.exec.address =
> diff --git a/lib/igt_draw.c b/lib/igt_draw.c
> index 9a7664a37..5935eb058 100644
> --- a/lib/igt_draw.c
> +++ b/lib/igt_draw.c
> @@ -795,9 +795,9 @@ static void draw_rect_render(int fd, struct cmd_data *cmd_data,
>  	if (is_i915_device(fd))
>  		tmp.handle = gem_create(fd, tmp.size);
>  	else
> -		tmp.handle = xe_bo_create_flags(fd, 0,
> -						ALIGN(tmp.size, xe_get_default_alignment(fd)),
> -						visible_vram_if_possible(fd, 0));
> +		tmp.handle = xe_bo_create(fd, 0,
> +					  ALIGN(tmp.size, xe_get_default_alignment(fd)),
> +					  visible_vram_if_possible(fd, 0));
>  
>  	tmp.stride = rect->w * pixel_size;
>  	tmp.bpp = buf->bpp;
> diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> index e70d2e3ce..f96dca7a4 100644
> --- a/lib/igt_fb.c
> +++ b/lib/igt_fb.c
> @@ -1205,8 +1205,8 @@ static int create_bo_for_fb(struct igt_fb *fb, bool prefer_sysmem)
>  			/* If we can't use fences, we won't use ggtt detiling later. */
>  			igt_assert(err == 0 || err == -EOPNOTSUPP);
>  		} else if (is_xe_device(fd)) {
> -			fb->gem_handle = xe_bo_create_flags(fd, 0, fb->size,
> -							visible_vram_if_possible(fd, 0));
> +			fb->gem_handle = xe_bo_create(fd, 0, fb->size,
> +						      visible_vram_if_possible(fd, 0));
>  		} else if (is_vc4_device(fd)) {
>  			fb->gem_handle = igt_vc4_create_bo(fd, fb->size);
>  
> @@ -2903,7 +2903,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
>  
>  		bb_size = ALIGN(bb_size + xe_cs_prefetch_size(dst_fb->fd),
>  				xe_get_default_alignment(dst_fb->fd));
> -		xe_bb = xe_bo_create_flags(dst_fb->fd, 0, bb_size, mem_region);
> +		xe_bb = xe_bo_create(dst_fb->fd, 0, bb_size, mem_region);
>  	}
>  
>  	for (int i = 0; i < dst_fb->num_planes - dst_cc; i++) {
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index f12d6219d..7fa4e3487 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -945,7 +945,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
>  
>  		ibb->alignment = xe_get_default_alignment(fd);
>  		size = ALIGN(size, ibb->alignment);
> -		ibb->handle = xe_bo_create_flags(fd, 0, size, visible_vram_if_possible(fd, 0));
> +		ibb->handle = xe_bo_create(fd, 0, size, visible_vram_if_possible(fd, 0));
>  
>  		/* Limit to 48-bit due to MI_* address limitation */
>  		ibb->gtt_size = 1ull << min_t(uint32_t, xe_va_bits(fd), 48);
> @@ -1403,8 +1403,8 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>  	if (ibb->driver == INTEL_DRIVER_I915)
>  		ibb->handle = gem_create(ibb->fd, ibb->size);
>  	else
> -		ibb->handle = xe_bo_create_flags(ibb->fd, 0, ibb->size,
> -						 visible_vram_if_possible(ibb->fd, 0));
> +		ibb->handle = xe_bo_create(ibb->fd, 0, ibb->size,
> +					   visible_vram_if_possible(ibb->fd, 0));
>  
>  	/* Reacquire offset for RELOC and SIMPLE */
>  	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> diff --git a/lib/intel_blt.c b/lib/intel_blt.c
> index 2edcd72f3..36830fb3e 100644
> --- a/lib/intel_blt.c
> +++ b/lib/intel_blt.c
> @@ -1807,7 +1807,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region,
>  			flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
>  
>  		size = ALIGN(size, xe_get_default_alignment(blt->fd));
> -		handle = xe_bo_create_flags(blt->fd, 0, size, flags);
> +		handle = xe_bo_create(blt->fd, 0, size, flags);
>  	} else {
>  		igt_assert(__gem_create_in_memory_regions(blt->fd, &handle,
>  							  &size, region) == 0);
> diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
> index 2c91adb88..6f3a77f47 100644
> --- a/lib/intel_bufops.c
> +++ b/lib/intel_bufops.c
> @@ -920,7 +920,7 @@ static void __intel_buf_init(struct buf_ops *bops,
>  				igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
>  		} else {
>  			size = ALIGN(size, xe_get_default_alignment(bops->fd));
> -			buf->handle = xe_bo_create_flags(bops->fd, 0, size, region);
> +			buf->handle = xe_bo_create(bops->fd, 0, size, region);
>  		}
>  	}
>  
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index 78d431ab2..63fa2ae25 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -226,8 +226,8 @@ void xe_vm_destroy(int fd, uint32_t vm)
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_DESTROY, &destroy), 0);
>  }
>  
> -uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> -			      uint32_t *handle)
> +uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> +			uint32_t *handle)
>  {
>  	struct drm_xe_gem_create create = {
>  		.vm_id = vm,
> @@ -244,11 +244,11 @@ uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags
>  	return 0;
>  }
>  
> -uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags)
> +uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags)
>  {
>  	uint32_t handle;
>  
> -	igt_assert_eq(__xe_bo_create_flags(fd, vm, size, flags, &handle), 0);
> +	igt_assert_eq(__xe_bo_create(fd, vm, size, flags, &handle), 0);
>  
>  	return handle;
>  }
> diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> index fb191d98f..1ec29c2c5 100644
> --- a/lib/xe/xe_ioctl.h
> +++ b/lib/xe/xe_ioctl.h
> @@ -64,9 +64,9 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
>  			    uint32_t bo, struct drm_xe_sync *sync,
>  			    uint32_t num_syncs);
>  void xe_vm_destroy(int fd, uint32_t vm);
> -uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> -			      uint32_t *handle);
> -uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags);
> +uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> +			uint32_t *handle);
> +uint32_t xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags);
>  uint32_t xe_exec_queue_create(int fd, uint32_t vm,
>  			  struct drm_xe_engine_class_instance *instance,
>  			  uint64_t ext);
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index cfc663acc..828938434 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -219,8 +219,8 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
>  			spin->engine = xe_exec_queue_create_class(fd, spin->vm, DRM_XE_ENGINE_CLASS_COPY);
>  	}
>  
> -	spin->handle = xe_bo_create_flags(fd, spin->vm, bo_size,
> -					  visible_vram_if_possible(fd, 0));
> +	spin->handle = xe_bo_create(fd, spin->vm, bo_size,
> +				    visible_vram_if_possible(fd, 0));
>  	xe_spin = xe_bo_map(fd, spin->handle, bo_size);
>  	addr = intel_allocator_alloc_with_strategy(ahnd, spin->handle, bo_size, 0, ALLOC_STRATEGY_LOW_TO_HIGH);
>  	xe_vm_bind_sync(fd, spin->vm, spin->handle, 0, addr, bo_size);
> @@ -298,8 +298,8 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
>  
>  	vm = xe_vm_create(fd, 0, 0);
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, hwe->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, hwe->gt_id));
>  	spin = xe_bo_map(fd, bo, 0x1000);
>  
>  	xe_vm_bind_sync(fd, vm, bo, 0, addr, bo_size);
> diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c
> index f3fcf8a34..158fd86a1 100644
> --- a/tests/intel/api_intel_allocator.c
> +++ b/tests/intel/api_intel_allocator.c
> @@ -468,8 +468,8 @@ static void __simple_allocs(int fd)
>  
>  		size = (rand() % 4 + 1) * 0x1000;
>  		if (is_xe)
> -			handles[i] = xe_bo_create_flags(fd, 0, size,
> -							system_memory(fd));
> +			handles[i] = xe_bo_create(fd, 0, size,
> +						  system_memory(fd));
>  		else
>  			handles[i] = gem_create(fd, size);
>  
> diff --git a/tests/intel/kms_big_fb.c b/tests/intel/kms_big_fb.c
> index 2c7b24fca..9c2b8dc79 100644
> --- a/tests/intel/kms_big_fb.c
> +++ b/tests/intel/kms_big_fb.c
> @@ -777,10 +777,10 @@ test_size_overflow(data_t *data)
>  	if (is_i915_device(data->drm_fd))
>  		bo = gem_buffer_create_fb_obj(data->drm_fd, (1ULL << 32) - 4096);
>  	else
> -		bo = xe_bo_create_flags(data->drm_fd, 0,
> -					ALIGN(((1ULL << 32) - 4096),
> -					      xe_get_default_alignment(data->drm_fd)),
> -					vram_if_possible(data->drm_fd, 0));
> +		bo = xe_bo_create(data->drm_fd, 0,
> +				  ALIGN(((1ULL << 32) - 4096),
> +					xe_get_default_alignment(data->drm_fd)),
> +				  vram_if_possible(data->drm_fd, 0));
>  	igt_require(bo);
>  
>  	ret = __kms_addfb(data->drm_fd, bo,
> @@ -837,10 +837,10 @@ test_size_offset_overflow(data_t *data)
>  	if (is_i915_device(data->drm_fd))
>  		bo = gem_buffer_create_fb_obj(data->drm_fd, (1ULL << 32) - 4096);
>  	else
> -		bo = xe_bo_create_flags(data->drm_fd, 0,
> -					ALIGN(((1ULL << 32) - 4096),
> -					      xe_get_default_alignment(data->drm_fd)),
> -					vram_if_possible(data->drm_fd, 0));
> +		bo = xe_bo_create(data->drm_fd, 0,
> +				  ALIGN(((1ULL << 32) - 4096),
> +					xe_get_default_alignment(data->drm_fd)),
> +				  vram_if_possible(data->drm_fd, 0));
>  	igt_require(bo);
>  
>  	offsets[0] = 0;
> @@ -926,9 +926,9 @@ test_addfb(data_t *data)
>  	if (is_i915_device(data->drm_fd))
>  		bo = gem_buffer_create_fb_obj(data->drm_fd, size);
>  	else
> -		bo = xe_bo_create_flags(data->drm_fd, 0,
> -					ALIGN(size, xe_get_default_alignment(data->drm_fd)),
> -					vram_if_possible(data->drm_fd, 0));
> +		bo = xe_bo_create(data->drm_fd, 0,
> +				  ALIGN(size, xe_get_default_alignment(data->drm_fd)),
> +				  vram_if_possible(data->drm_fd, 0));
>  	igt_require(bo);
>  
>  	if (is_i915_device(data->drm_fd) && intel_display_ver(data->devid) < 4)
> diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c
> index 93e837b84..337afc00c 100644
> --- a/tests/intel/kms_ccs.c
> +++ b/tests/intel/kms_ccs.c
> @@ -434,8 +434,8 @@ static void test_bad_ccs_plane(data_t *data, int width, int height, int ccs_plan
>  	if (data->flags & TEST_BAD_CCS_HANDLE) {
>  		bad_ccs_bo = is_i915_device(data->drm_fd) ?
>  				gem_create(data->drm_fd, fb.size) :
> -				xe_bo_create_flags(data->drm_fd, 0, fb.size,
> -						   visible_vram_if_possible(data->drm_fd, 0));
> +				xe_bo_create(data->drm_fd, 0, fb.size,
> +					     visible_vram_if_possible(data->drm_fd, 0));
>  		f.handles[ccs_plane] = bad_ccs_bo;
>  	}
>  
> diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
> index 465f67e23..ceecba416 100644
> --- a/tests/intel/xe_ccs.c
> +++ b/tests/intel/xe_ccs.c
> @@ -102,8 +102,8 @@ static void surf_copy(int xe,
>  
>  	igt_assert(mid->compression);
>  	ccscopy = (uint32_t *) malloc(ccssize);
> -	ccs = xe_bo_create_flags(xe, 0, ccssize, sysmem);
> -	ccs2 = xe_bo_create_flags(xe, 0, ccssize, sysmem);
> +	ccs = xe_bo_create(xe, 0, ccssize, sysmem);
> +	ccs2 = xe_bo_create(xe, 0, ccssize, sysmem);
>  
>  	blt_ctrl_surf_copy_init(xe, &surf);
>  	surf.print_bb = param.print_bb;
> @@ -111,7 +111,7 @@ static void surf_copy(int xe,
>  				 uc_mocs, BLT_INDIRECT_ACCESS);
>  	blt_set_ctrl_surf_object(&surf.dst, ccs, sysmem, ccssize, uc_mocs, DIRECT_ACCESS);
>  	bb_size = xe_get_default_alignment(xe);
> -	bb1 = xe_bo_create_flags(xe, 0, bb_size, sysmem);
> +	bb1 = xe_bo_create(xe, 0, bb_size, sysmem);
>  	blt_set_batch(&surf.bb, bb1, bb_size, sysmem);
>  	blt_ctrl_surf_copy(xe, ctx, NULL, ahnd, &surf);
>  	intel_ctx_xe_sync(ctx, true);
> @@ -166,7 +166,7 @@ static void surf_copy(int xe,
>  	blt_set_copy_object(&blt.dst, dst);
>  	blt_set_object_ext(&ext.src, mid->compression_type, mid->x2, mid->y2, SURFACE_TYPE_2D);
>  	blt_set_object_ext(&ext.dst, 0, dst->x2, dst->y2, SURFACE_TYPE_2D);
> -	bb2 = xe_bo_create_flags(xe, 0, bb_size, sysmem);
> +	bb2 = xe_bo_create(xe, 0, bb_size, sysmem);
>  	blt_set_batch(&blt.bb, bb2, bb_size, sysmem);
>  	blt_block_copy(xe, ctx, NULL, ahnd, &blt, &ext);
>  	intel_ctx_xe_sync(ctx, true);
> @@ -297,7 +297,7 @@ static void block_copy(int xe,
>  	uint8_t uc_mocs = intel_get_uc_mocs_index(xe);
>  	int result;
>  
> -	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
> +	bb = xe_bo_create(xe, 0, bb_size, region1);
>  
>  	if (!blt_uses_extended_block_copy(xe))
>  		pext = NULL;
> @@ -418,7 +418,7 @@ static void block_multicopy(int xe,
>  	uint8_t uc_mocs = intel_get_uc_mocs_index(xe);
>  	int result;
>  
> -	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
> +	bb = xe_bo_create(xe, 0, bb_size, region1);
>  
>  	if (!blt_uses_extended_block_copy(xe))
>  		pext3 = NULL;
> diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
> index 191c29155..715f7d3b5 100644
> --- a/tests/intel/xe_copy_basic.c
> +++ b/tests/intel/xe_copy_basic.c
> @@ -52,7 +52,7 @@ mem_copy(int fd, uint32_t src_handle, uint32_t dst_handle, const intel_ctx_t *ct
>  	uint32_t bb;
>  	int result;
>  
> -	bb = xe_bo_create_flags(fd, 0, bb_size, region);
> +	bb = xe_bo_create(fd, 0, bb_size, region);
>  
>  	blt_mem_init(fd, &mem);
>  	blt_set_mem_object(&mem.src, src_handle, size, 0, width, height,
> @@ -102,7 +102,7 @@ mem_set(int fd, uint32_t dst_handle, const intel_ctx_t *ctx, uint32_t size,
>  	uint32_t bb;
>  	uint8_t *result;
>  
> -	bb = xe_bo_create_flags(fd, 0, bb_size, region);
> +	bb = xe_bo_create(fd, 0, bb_size, region);
>  	blt_mem_init(fd, &mem);
>  	blt_set_mem_object(&mem.dst, dst_handle, size, 0, width, height, region,
>  			   dst_mocs, M_LINEAR, COMPRESSION_DISABLED);
> @@ -132,8 +132,8 @@ static void copy_test(int fd, uint32_t size, enum blt_cmd_type cmd, uint32_t reg
>  	uint32_t bo_size = ALIGN(size, xe_get_default_alignment(fd));
>  	intel_ctx_t *ctx;
>  
> -	src_handle = xe_bo_create_flags(fd, 0, bo_size, region);
> -	dst_handle = xe_bo_create_flags(fd, 0, bo_size, region);
> +	src_handle = xe_bo_create(fd, 0, bo_size, region);
> +	dst_handle = xe_bo_create(fd, 0, bo_size, region);
>  	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	exec_queue = xe_exec_queue_create(fd, vm, &inst, 0);
>  	ctx = intel_ctx_xe(fd, vm, exec_queue, 0, 0, 0);
> diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
> index 0d835dddb..ac9d9d767 100644
> --- a/tests/intel/xe_dma_buf_sync.c
> +++ b/tests/intel/xe_dma_buf_sync.c
> @@ -119,8 +119,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd[0]),
>  			xe_get_default_alignment(fd[0]));
>  	for (i = 0; i < n_bo; ++i) {
> -		bo[i] = xe_bo_create_flags(fd[0], 0, bo_size,
> -					   visible_vram_if_possible(fd[0], hwe0->gt_id));
> +		bo[i] = xe_bo_create(fd[0], 0, bo_size,
> +				     visible_vram_if_possible(fd[0], hwe0->gt_id));
>  		dma_buf_fd[i] = prime_handle_to_fd(fd[0], bo[i]);
>  		import_bo[i] = prime_fd_to_handle(fd[1], dma_buf_fd[i]);
>  
> diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> index 4ef30cf49..8f737a533 100644
> --- a/tests/intel/xe_drm_fdinfo.c
> +++ b/tests/intel/xe_drm_fdinfo.c
> @@ -85,7 +85,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
>  		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
>  		pre_size = info.region_mem[memregion->instance + 1].active;
>  
> -		bo = xe_bo_create_flags(fd, vm, bo_size, region);
> +		bo = xe_bo_create(fd, vm, bo_size, region);
>  		data = xe_bo_map(fd, bo, bo_size);
>  
>  		for (i = 0; i < N_EXEC_QUEUES; i++) {
> @@ -185,7 +185,7 @@ static void test_shared(int xe)
>  		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
>  		pre_size = info.region_mem[memregion->instance + 1].shared;
>  
> -		bo = xe_bo_create_flags(xe, 0, BO_SIZE, region);
> +		bo = xe_bo_create(xe, 0, BO_SIZE, region);
>  
>  		flink.handle = bo;
>  		ret = igt_ioctl(xe, DRM_IOCTL_GEM_FLINK, &flink);
> @@ -232,7 +232,7 @@ static void test_total_resident(int xe)
>  		igt_assert_f(ret != 0, "failed with err:%d\n", errno);
>  		pre_size = info.region_mem[memregion->instance + 1].shared;
>  
> -		handle = xe_bo_create_flags(xe, vm, BO_SIZE, region);
> +		handle = xe_bo_create(xe, vm, BO_SIZE, region);
>  		xe_vm_bind_sync(xe, vm, handle, 0, addr, BO_SIZE);
>  
>  		ret = igt_parse_drm_fdinfo(xe, &info, NULL, 0, NULL, 0);
> diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
> index 6d953e58b..a9d501d5f 100644
> --- a/tests/intel/xe_evict.c
> +++ b/tests/intel/xe_evict.c
> @@ -99,18 +99,18 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
>                                  i < n_execs / 8 ? 0 : vm;
>  
>  			if (flags & MULTI_VM) {
> -				__bo = bo[i] = xe_bo_create_flags(fd, 0,
> -								  bo_size,
> -								  visible_vram_memory(fd, eci->gt_id));
> +				__bo = bo[i] = xe_bo_create(fd, 0,
> +							    bo_size,
> +							    visible_vram_memory(fd, eci->gt_id));
>  			} else if (flags & THREADED) {
> -				__bo = bo[i] = xe_bo_create_flags(fd, vm,
> -								  bo_size,
> -								  visible_vram_memory(fd, eci->gt_id));
> +				__bo = bo[i] = xe_bo_create(fd, vm,
> +							    bo_size,
> +							    visible_vram_memory(fd, eci->gt_id));
>  			} else {
> -				__bo = bo[i] = xe_bo_create_flags(fd, _vm,
> -								  bo_size,
> -								  visible_vram_memory(fd, eci->gt_id) |
> -								  system_memory(fd));
> +				__bo = bo[i] = xe_bo_create(fd, _vm,
> +							    bo_size,
> +							    visible_vram_memory(fd, eci->gt_id) |
> +							    system_memory(fd));
>  			}
>  		} else {
>  			__bo = bo[i % (n_execs / 2)];
> @@ -275,18 +275,18 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
>                                  i < n_execs / 8 ? 0 : vm;
>  
>  			if (flags & MULTI_VM) {
> -				__bo = bo[i] = xe_bo_create_flags(fd, 0,
> -								  bo_size,
> -								  visible_vram_memory(fd, eci->gt_id));
> +				__bo = bo[i] = xe_bo_create(fd, 0,
> +							    bo_size,
> +							    visible_vram_memory(fd, eci->gt_id));
>  			} else if (flags & THREADED) {
> -				__bo = bo[i] = xe_bo_create_flags(fd, vm,
> -								  bo_size,
> -								  visible_vram_memory(fd, eci->gt_id));
> +				__bo = bo[i] = xe_bo_create(fd, vm,
> +							    bo_size,
> +							    visible_vram_memory(fd, eci->gt_id));
>  			} else {
> -				__bo = bo[i] = xe_bo_create_flags(fd, _vm,
> -								  bo_size,
> -								  visible_vram_memory(fd, eci->gt_id) |
> -								  system_memory(fd));
> +				__bo = bo[i] = xe_bo_create(fd, _vm,
> +							    bo_size,
> +							    visible_vram_memory(fd, eci->gt_id) |
> +							    system_memory(fd));
>  			}
>  		} else {
>  			__bo = bo[i % (n_execs / 2)];
> diff --git a/tests/intel/xe_evict_ccs.c b/tests/intel/xe_evict_ccs.c
> index 1f5c795ef..1dc12eedd 100644
> --- a/tests/intel/xe_evict_ccs.c
> +++ b/tests/intel/xe_evict_ccs.c
> @@ -82,7 +82,7 @@ static void copy_obj(struct blt_copy_data *blt,
>  	w = src_obj->x2;
>  	h = src_obj->y2;
>  
> -	bb = xe_bo_create_flags(fd, 0, bb_size, visible_vram_memory(fd, 0));
> +	bb = xe_bo_create(fd, 0, bb_size, visible_vram_memory(fd, 0));
>  
>  	blt->color_depth = CD_32bit;
>  	blt->print_bb = params.print_bb;
> @@ -274,8 +274,8 @@ static void evict_single(int fd, int child, const struct config *config)
>  		}
>  
>  		if (config->flags & TEST_SIMPLE) {
> -			big_obj = xe_bo_create_flags(fd, vm, kb_left * SZ_1K,
> -						     vram_memory(fd, 0));
> +			big_obj = xe_bo_create(fd, vm, kb_left * SZ_1K,
> +					       vram_memory(fd, 0));
>  			break;
>  		}
>  
> diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> index 8a0165b8c..da34e117d 100644
> --- a/tests/intel/xe_exec_balancer.c
> +++ b/tests/intel/xe_exec_balancer.c
> @@ -70,7 +70,7 @@ static void test_all_active(int fd, int gt, int class)
>  	bo_size = sizeof(*data) * num_placements;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < num_placements; i++) {
> @@ -224,7 +224,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  		}
>  		memset(data, 0, bo_size);
>  	} else {
> -		bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  
> @@ -452,7 +452,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  			igt_assert(data);
>  		}
>  	} else {
> -		bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index a401f0165..841696b68 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -140,8 +140,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		if (flags & DEFER_ALLOC)
>  			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
>  
> -		bo = xe_bo_create_flags(fd, n_vm == 1 ? vm[0] : 0,
> -					bo_size, bo_flags);
> +		bo = xe_bo_create(fd, n_vm == 1 ? vm[0] : 0, bo_size, bo_flags);
>  		if (!(flags & DEFER_BIND))
>  			data = xe_bo_map(fd, bo, bo_size);
>  	}
> diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> index 20d3fc6e8..beb962f79 100644
> --- a/tests/intel/xe_exec_compute_mode.c
> +++ b/tests/intel/xe_exec_compute_mode.c
> @@ -141,8 +141,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  			igt_assert(data);
>  		}
>  	} else {
> -		bo = xe_bo_create_flags(fd, flags & VM_FOR_BO ? vm : 0,
> -					bo_size, visible_vram_if_possible(fd, eci->gt_id));
> +		bo = xe_bo_create(fd, flags & VM_FOR_BO ? vm : 0,
> +				  bo_size, visible_vram_if_possible(fd, eci->gt_id));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> index 92d552f97..903ad430d 100644
> --- a/tests/intel/xe_exec_fault_mode.c
> +++ b/tests/intel/xe_exec_fault_mode.c
> @@ -151,12 +151,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		}
>  	} else {
>  		if (flags & PREFETCH)
> -			bo = xe_bo_create_flags(fd, 0, bo_size,
> -						all_memory_regions(fd) |
> -						visible_vram_if_possible(fd, 0));
> +			bo = xe_bo_create(fd, 0, bo_size,
> +					  all_memory_regions(fd) |
> +					  visible_vram_if_possible(fd, 0));
>  		else
> -			bo = xe_bo_create_flags(fd, 0, bo_size,
> -						visible_vram_if_possible(fd, eci->gt_id));
> +			bo = xe_bo_create(fd, 0, bo_size,
> +					  visible_vram_if_possible(fd, eci->gt_id));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index 195e62911..704690e83 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -50,8 +50,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, eci->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, eci->gt_id));
>  	spin = xe_bo_map(fd, bo, bo_size);
>  
>  	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> @@ -181,7 +181,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> @@ -367,8 +367,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, eci->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, eci->gt_id));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> @@ -534,8 +534,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, eci->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, eci->gt_id));
>  	data = xe_bo_map(fd, bo, bo_size);
>  	memset(data, 0, bo_size);
>  
> @@ -661,7 +661,7 @@ static void submit_jobs(struct gt_thread_data *t)
>  	uint32_t bo;
>  	uint32_t *data;
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
>  	data = xe_bo_map(fd, bo, bo_size);
>  	data[0] = MI_BATCH_BUFFER_END;
>  
> diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> index 9c14bfd14..bcc4de8d0 100644
> --- a/tests/intel/xe_exec_store.c
> +++ b/tests/intel/xe_exec_store.c
> @@ -81,8 +81,8 @@ static void store(int fd)
>  			xe_get_default_alignment(fd));
>  
>  	hw_engine = xe_hw_engine(fd, 1);
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, hw_engine->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, hw_engine->gt_id));
>  
>  	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
>  	data = xe_bo_map(fd, bo, bo_size);
> @@ -150,8 +150,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
>  	sync[0].handle = syncobj_create(fd, 0);
>  
>  	for (i = 0; i < count; i++) {
> -		bo[i] = xe_bo_create_flags(fd, vm, bo_size,
> -					       visible_vram_if_possible(fd, eci->gt_id));
> +		bo[i] = xe_bo_create(fd, vm, bo_size,
> +				     visible_vram_if_possible(fd, eci->gt_id));
>  		bo_map[i] = xe_bo_map(fd, bo[i], bo_size);
>  		dst_offset[i] = intel_allocator_alloc_with_strategy(ahnd, bo[i],
>  								    bo_size, 0,
> @@ -235,8 +235,8 @@ static void store_all(int fd, int gt, int class)
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, 0));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	xe_for_each_hw_engine(fd, hwe) {
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index bb979b18c..a9b0c0b09 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -106,8 +106,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  			igt_assert(data);
>  		}
>  	} else {
> -		bo = xe_bo_create_flags(fd, vm, bo_size,
> -					visible_vram_if_possible(fd, gt));
> +		bo = xe_bo_create(fd, vm, bo_size,
> +				  visible_vram_if_possible(fd, gt));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> @@ -307,8 +307,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  			igt_assert(data);
>  		}
>  	} else {
> -		bo = xe_bo_create_flags(fd, 0, bo_size,
> -					visible_vram_if_possible(fd, eci->gt_id));
> +		bo = xe_bo_create(fd, 0, bo_size,
> +				  visible_vram_if_possible(fd, eci->gt_id));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> @@ -510,8 +510,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  			igt_assert(data);
>  		}
>  	} else {
> -		bo = xe_bo_create_flags(fd, vm, bo_size,
> -					visible_vram_if_possible(fd, eci->gt_id));
> +		bo = xe_bo_create(fd, vm, bo_size,
> +				  visible_vram_if_possible(fd, eci->gt_id));
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
> index fd310138d..9c69be3ef 100644
> --- a/tests/intel/xe_exercise_blt.c
> +++ b/tests/intel/xe_exercise_blt.c
> @@ -125,7 +125,7 @@ static void fast_copy_emit(int xe, const intel_ctx_t *ctx,
>  	uint32_t bb, width = param.width, height = param.height;
>  	int result;
>  
> -	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
> +	bb = xe_bo_create(xe, 0, bb_size, region1);
>  
>  	blt_copy_init(xe, &bltinit);
>  	src = blt_create_object(&bltinit, region1, width, height, bpp, 0,
> @@ -184,7 +184,7 @@ static void fast_copy(int xe, const intel_ctx_t *ctx,
>  	uint32_t width = param.width, height = param.height;
>  	int result;
>  
> -	bb = xe_bo_create_flags(xe, 0, bb_size, region1);
> +	bb = xe_bo_create(xe, 0, bb_size, region1);
>  
>  	blt_copy_init(xe, &blt);
>  	src = blt_create_object(&blt, region1, width, height, bpp, 0,
> diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> index fa2f20cca..1e29d8905 100644
> --- a/tests/intel/xe_guc_pc.c
> +++ b/tests/intel/xe_guc_pc.c
> @@ -65,8 +65,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, eci->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, eci->gt_id));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
> index d66996cd5..a3a315297 100644
> --- a/tests/intel/xe_intel_bb.c
> +++ b/tests/intel/xe_intel_bb.c
> @@ -396,7 +396,7 @@ static void create_in_region(struct buf_ops *bops, uint64_t region)
>  		intel_bb_set_debug(ibb, true);
>  
>  	size = xe_min_page_size(xe, system_memory(xe));
> -	handle = xe_bo_create_flags(xe, 0, size, system_memory(xe));
> +	handle = xe_bo_create(xe, 0, size, system_memory(xe));
>  	intel_buf_init_full(bops, handle, &buf,
>  			    width/4, height, 32, 0,
>  			    I915_TILING_NONE, 0,
> diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
> index 7e7e43c00..a805eabda 100644
> --- a/tests/intel/xe_mmap.c
> +++ b/tests/intel/xe_mmap.c
> @@ -52,7 +52,7 @@ test_mmap(int fd, uint32_t flags)
>  
>  	igt_require_f(flags, "Device doesn't support such memory region\n");
>  
> -	bo = xe_bo_create_flags(fd, 0, 4096, flags);
> +	bo = xe_bo_create(fd, 0, 4096, flags);
>  
>  	map = xe_bo_map(fd, bo, 4096);
>  	strcpy(map, "Write some data to the BO!");
> @@ -72,8 +72,8 @@ static void test_bad_flags(int fd)
>  {
>  	uint64_t size = xe_get_default_alignment(fd);
>  	struct drm_xe_gem_mmap_offset mmo = {
> -		.handle = xe_bo_create_flags(fd, 0, size,
> -					     visible_vram_if_possible(fd, 0)),
> +		.handle = xe_bo_create(fd, 0, size,
> +				       visible_vram_if_possible(fd, 0)),
>  		.flags = -1u,
>  	};
>  
> @@ -92,8 +92,8 @@ static void test_bad_extensions(int fd)
>  	uint64_t size = xe_get_default_alignment(fd);
>  	struct xe_user_extension ext;
>  	struct drm_xe_gem_mmap_offset mmo = {
> -		.handle = xe_bo_create_flags(fd, 0, size,
> -					     visible_vram_if_possible(fd, 0)),
> +		.handle = xe_bo_create(fd, 0, size,
> +				       visible_vram_if_possible(fd, 0)),
>  	};
>  
>  	mmo.extensions = to_user_pointer(&ext);
> @@ -113,8 +113,8 @@ static void test_bad_object(int fd)
>  {
>  	uint64_t size = xe_get_default_alignment(fd);
>  	struct drm_xe_gem_mmap_offset mmo = {
> -		.handle = xe_bo_create_flags(fd, 0, size,
> -					     visible_vram_if_possible(fd, 0)),
> +		.handle = xe_bo_create(fd, 0, size,
> +				       visible_vram_if_possible(fd, 0)),
>  	};
>  
>  	mmo.handle = 0xdeadbeef;
> @@ -159,13 +159,13 @@ static void test_small_bar(int fd)
>  	uint32_t *map;
>  
>  	/* 2BIG invalid case */
> -	igt_assert_neq(__xe_bo_create_flags(fd, 0, visible_size + 4096,
> -					    visible_vram_memory(fd, 0), &bo),
> +	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + 4096,
> +				      visible_vram_memory(fd, 0), &bo),
>  		       0);
>  
>  	/* Normal operation */
> -	bo = xe_bo_create_flags(fd, 0, visible_size / 4,
> -				visible_vram_memory(fd, 0));
> +	bo = xe_bo_create(fd, 0, visible_size / 4,
> +			  visible_vram_memory(fd, 0));
>  	mmo = xe_bo_mmap_offset(fd, bo);
>  	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
>  	igt_assert(map != MAP_FAILED);
> @@ -176,9 +176,9 @@ static void test_small_bar(int fd)
>  	gem_close(fd, bo);
>  
>  	/* Normal operation with system memory spilling */
> -	bo = xe_bo_create_flags(fd, 0, visible_size,
> -				visible_vram_memory(fd, 0) |
> -				system_memory(fd));
> +	bo = xe_bo_create(fd, 0, visible_size,
> +			  visible_vram_memory(fd, 0) |
> +			  system_memory(fd));
>  	mmo = xe_bo_mmap_offset(fd, bo);
>  	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
>  	igt_assert(map != MAP_FAILED);
> @@ -189,8 +189,8 @@ static void test_small_bar(int fd)
>  	gem_close(fd, bo);
>  
>  	/* Bogus operation with SIGBUS */
> -	bo = xe_bo_create_flags(fd, 0, visible_size + 4096,
> -				vram_memory(fd, 0));
> +	bo = xe_bo_create(fd, 0, visible_size + 4096,
> +			  vram_memory(fd, 0));
>  	mmo = xe_bo_mmap_offset(fd, bo);
>  	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
>  	igt_assert(map != MAP_FAILED);
> diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
> index 9c2a70ff3..88ef39783 100644
> --- a/tests/intel/xe_noexec_ping_pong.c
> +++ b/tests/intel/xe_noexec_ping_pong.c
> @@ -70,8 +70,8 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
>  				  (unsigned long) bo_size,
>  				  (unsigned int) vm[i]);
>  
> -			bo[i][j] = xe_bo_create_flags(fd, vm[i], bo_size,
> -						      vram_memory(fd, 0));
> +			bo[i][j] = xe_bo_create(fd, vm[i], bo_size,
> +						vram_memory(fd, 0));
>  			xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
>  				   bo_size, NULL, 0);
>  		}
> diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
> index 2c549f778..406bd4b8d 100644
> --- a/tests/intel/xe_perf_pmu.c
> +++ b/tests/intel/xe_perf_pmu.c
> @@ -103,7 +103,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
>  	spin = xe_bo_map(fd, bo, bo_size);
>  
>  	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> @@ -223,7 +223,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  	bo_size = sizeof(*data) * num_placements;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < num_placements; i++) {
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index 9423984cc..9bfe1acad 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -271,8 +271,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
>  	if (check_rpm && runtime_usage_available(device.pci_xe))
>  		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
>  
> -	bo = xe_bo_create_flags(device.fd_xe, vm, bo_size,
> -				visible_vram_if_possible(device.fd_xe, eci->gt_id));
> +	bo = xe_bo_create(device.fd_xe, vm, bo_size,
> +			  visible_vram_if_possible(device.fd_xe, eci->gt_id));
>  	data = xe_bo_map(device.fd_xe, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> @@ -409,7 +409,7 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
>  	threshold = vram_used_mb + (SIZE / 1024 /1024);
>  	igt_require(threshold < vram_total_mb);
>  
> -	bo = xe_bo_create_flags(device.fd_xe, 0, SIZE, flags);
> +	bo = xe_bo_create(device.fd_xe, 0, SIZE, flags);
>  	map = xe_bo_map(device.fd_xe, bo, SIZE);
>  	memset(map, 0, SIZE);
>  	munmap(map, SIZE);
> diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
> index c87eeef3c..cc133f5fb 100644
> --- a/tests/intel/xe_pm_residency.c
> +++ b/tests/intel/xe_pm_residency.c
> @@ -100,8 +100,8 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
>  	exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
>  	bo_size = xe_get_default_alignment(fd);
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, hwe->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, hwe->gt_id));
>  	data = xe_bo_map(fd, bo, bo_size);
>  	syncobj = syncobj_create(fd, 0);
>  
> diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
> index 536230f9f..378368eaa 100644
> --- a/tests/intel/xe_prime_self_import.c
> +++ b/tests/intel/xe_prime_self_import.c
> @@ -105,7 +105,7 @@ static void test_with_fd_dup(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
>  
>  	dma_buf_fd1 = prime_handle_to_fd(fd1, handle);
>  	gem_close(fd1, handle);
> @@ -138,8 +138,8 @@ static void test_with_two_bos(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle1 = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> -	handle2 = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle1 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle2 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
>  
>  	dma_buf_fd = prime_handle_to_fd(fd1, handle1);
>  	handle_import = prime_fd_to_handle(fd2, dma_buf_fd);
> @@ -174,8 +174,8 @@ static void test_with_one_bo_two_files(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle_orig = xe_bo_create_flags(fd1, 0, bo_size,
> -					 visible_vram_if_possible(fd1, 0));
> +	handle_orig = xe_bo_create(fd1, 0, bo_size,
> +				   visible_vram_if_possible(fd1, 0));
>  	dma_buf_fd1 = prime_handle_to_fd(fd1, handle_orig);
>  
>  	flink_name = gem_flink(fd1, handle_orig);
> @@ -207,7 +207,7 @@ static void test_with_one_bo(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle = xe_bo_create_flags(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
>  
>  	dma_buf_fd = prime_handle_to_fd(fd1, handle);
>  	handle_import1 = prime_fd_to_handle(fd2, dma_buf_fd);
> @@ -293,8 +293,8 @@ static void *thread_fn_reimport_vs_close(void *p)
>  
>  	fds[0] = drm_open_driver(DRIVER_XE);
>  
> -	handle = xe_bo_create_flags(fds[0], 0, bo_size,
> -				    visible_vram_if_possible(fds[0], 0));
> +	handle = xe_bo_create(fds[0], 0, bo_size,
> +			      visible_vram_if_possible(fds[0], 0));
>  
>  	fds[1] = prime_handle_to_fd(fds[0], handle);
>  	pthread_barrier_init(&g_barrier, NULL, num_threads);
> @@ -336,8 +336,8 @@ static void *thread_fn_export_vs_close(void *p)
>  
>  	igt_until_timeout(g_time_out) {
>  		/* We want to race gem close against prime export on handle one.*/
> -		handle = xe_bo_create_flags(fd, 0, bo_size,
> -					    visible_vram_if_possible(fd, 0));
> +		handle = xe_bo_create(fd, 0, bo_size,
> +				      visible_vram_if_possible(fd, 0));
>  		if (handle != 1)
>  			gem_close(fd, handle);
>  
> @@ -433,8 +433,8 @@ static void test_llseek_size(void)
>  	for (i = 0; i < 10; i++) {
>  		int bufsz = xe_get_default_alignment(fd) << i;
>  
> -		handle = xe_bo_create_flags(fd, 0, bufsz,
> -					    visible_vram_if_possible(fd, 0));
> +		handle = xe_bo_create(fd, 0, bufsz,
> +				      visible_vram_if_possible(fd, 0));
>  		dma_buf_fd = prime_handle_to_fd(fd, handle);
>  
>  		gem_close(fd, handle);
> @@ -462,8 +462,8 @@ static void test_llseek_bad(void)
>  
>  	fd = drm_open_driver(DRIVER_XE);
>  
> -	handle = xe_bo_create_flags(fd, 0, bo_size,
> -				    visible_vram_if_possible(fd, 0));
> +	handle = xe_bo_create(fd, 0, bo_size,
> +			      visible_vram_if_possible(fd, 0));
>  	dma_buf_fd = prime_handle_to_fd(fd, handle);
>  
>  	gem_close(fd, handle);
> diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
> index 261fde9af..c1b161f9c 100644
> --- a/tests/intel/xe_spin_batch.c
> +++ b/tests/intel/xe_spin_batch.c
> @@ -169,7 +169,7 @@ static void xe_spin_fixed_duration(int fd)
>  	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
>  	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
>  	bo_size = ALIGN(sizeof(*spin) + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> -	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
>  	spin = xe_bo_map(fd, bo, bo_size);
>  	spin_addr = intel_allocator_alloc_with_strategy(ahnd, bo, bo_size, 0,
>  							ALLOC_STRATEGY_LOW_TO_HIGH);
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index eedd05b57..52195737c 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -51,8 +51,8 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
>  	batch_size = (n_dwords * 4 + 1) * sizeof(uint32_t);
>  	batch_size = ALIGN(batch_size + xe_cs_prefetch_size(fd),
>  			   xe_get_default_alignment(fd));
> -	batch_bo = xe_bo_create_flags(fd, vm, batch_size,
> -				      visible_vram_if_possible(fd, 0));
> +	batch_bo = xe_bo_create(fd, vm, batch_size,
> +				visible_vram_if_possible(fd, 0));
>  	batch_map = xe_bo_map(fd, batch_bo, batch_size);
>  
>  	for (i = 0; i < n_dwords; i++) {
> @@ -116,7 +116,7 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
>  		vms = malloc(sizeof(*vms) * n_addrs);
>  		igt_assert(vms);
>  	}
> -	bo = xe_bo_create_flags(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
>  	map = xe_bo_map(fd, bo, bo_size);
>  	memset(map, 0, bo_size);
>  
> @@ -267,7 +267,7 @@ static void test_partial_unbinds(int fd)
>  {
>  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	size_t bo_size = 3 * xe_get_default_alignment(fd);
> -	uint32_t bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> +	uint32_t bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
>  	uint64_t unbind_size = bo_size / 3;
>  	uint64_t addr = 0x1a0000;
>  
> @@ -316,7 +316,7 @@ static void unbind_all(int fd, int n_vmas)
>  	};
>  
>  	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> -	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0));
>  
>  	for (i = 0; i < n_vmas; ++i)
>  		xe_vm_bind_async(fd, vm, 0, bo, 0, addr + i * bo_size,
> @@ -362,7 +362,6 @@ static void userptr_invalid(int fd)
>  	xe_vm_destroy(fd, vm);
>  }
>  
> -
>  /**
>   * SUBTEST: shared-%s-page
>   * Description: Test shared arg[1] page
> @@ -422,8 +421,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  		addr_stride = addr_stride + bo_size;
>  
>  	for (i = 0; i < n_bo; ++i) {
> -		bo[i] = xe_bo_create_flags(fd, vm, bo_size,
> -					   visible_vram_if_possible(fd, eci->gt_id));
> +		bo[i] = xe_bo_create(fd, vm, bo_size,
> +				     visible_vram_if_possible(fd, eci->gt_id));
>  		data[i] = xe_bo_map(fd, bo[i], bo_size);
>  	}
>  
> @@ -601,8 +600,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  	bo_size = sizeof(*data) * N_EXEC_QUEUES;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, eci->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, eci->gt_id));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < N_EXEC_QUEUES; i++) {
> @@ -782,8 +781,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create_flags(fd, vm, bo_size,
> -				visible_vram_if_possible(fd, eci->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size,
> +			  visible_vram_if_possible(fd, eci->gt_id));
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
> @@ -980,8 +979,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
>  		igt_skip_on(xe_visible_vram_size(fd, 0) && bo_size >
>  			    xe_visible_vram_size(fd, 0));
>  
> -		bo = xe_bo_create_flags(fd, vm, bo_size,
> -					visible_vram_if_possible(fd, eci->gt_id));
> +		bo = xe_bo_create(fd, vm, bo_size,
> +				  visible_vram_if_possible(fd, eci->gt_id));
>  		map = xe_bo_map(fd, bo, bo_size);
>  	}
>  
> @@ -1272,8 +1271,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  			    MAP_ANONYMOUS, -1, 0);
>  		igt_assert(map != MAP_FAILED);
>  	} else {
> -		bo = xe_bo_create_flags(fd, vm, bo_size,
> -					visible_vram_if_possible(fd, eci->gt_id));
> +		bo = xe_bo_create(fd, vm, bo_size,
> +				  visible_vram_if_possible(fd, eci->gt_id));
>  		map = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(map, 0, bo_size);
> @@ -1576,9 +1575,9 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  		igt_assert(map0 != MAP_FAILED);
>  		igt_assert(map1 != MAP_FAILED);
>  	} else {
> -		bo0 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
> +		bo0 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
>  		map0 = xe_bo_map(fd, bo0, bo_size);
> -		bo1 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
> +		bo1 = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
>  		map1 = xe_bo_map(fd, bo1, bo_size);
>  	}
>  	memset(map0, 0, bo_size);
> diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
> index b1cae0d9b..46048f9d5 100644
> --- a/tests/intel/xe_waitfence.c
> +++ b/tests/intel/xe_waitfence.c
> @@ -64,19 +64,19 @@ waitfence(int fd, enum waittype wt)
>  	int64_t timeout;
>  
>  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> -	bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> +	bo_1 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
>  	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
> -	bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> +	bo_2 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
>  	do_bind(fd, vm, bo_2, 0, 0xc0000000, 0x40000, 2);
> -	bo_3 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> +	bo_3 = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
>  	do_bind(fd, vm, bo_3, 0, 0x180000000, 0x40000, 3);
> -	bo_4 = xe_bo_create_flags(fd, vm, 0x10000, MY_FLAG);
> +	bo_4 = xe_bo_create(fd, vm, 0x10000, MY_FLAG);
>  	do_bind(fd, vm, bo_4, 0, 0x140000000, 0x10000, 4);
> -	bo_5 = xe_bo_create_flags(fd, vm, 0x100000, MY_FLAG);
> +	bo_5 = xe_bo_create(fd, vm, 0x100000, MY_FLAG);
>  	do_bind(fd, vm, bo_5, 0, 0x100000000, 0x100000, 5);
> -	bo_6 = xe_bo_create_flags(fd, vm, 0x1c0000, MY_FLAG);
> +	bo_6 = xe_bo_create(fd, vm, 0x1c0000, MY_FLAG);
>  	do_bind(fd, vm, bo_6, 0, 0xc0040000, 0x1c0000, 6);
> -	bo_7 = xe_bo_create_flags(fd, vm, 0x10000, MY_FLAG);
> +	bo_7 = xe_bo_create(fd, vm, 0x10000, MY_FLAG);
>  	do_bind(fd, vm, bo_7, 0, 0xeffff0000, 0x10000, 7);
>  
>  	if (wt == RELTIME) {
> @@ -134,7 +134,7 @@ invalid_flag(int fd)
>  
>  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
> -	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> +	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
>  
>  	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
>  
> @@ -159,7 +159,7 @@ invalid_ops(int fd)
>  
>  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
> -	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> +	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
>  
>  	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
>  
> @@ -184,7 +184,7 @@ invalid_engine(int fd)
>  
>  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
> -	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> +	bo = xe_bo_create(fd, vm, 0x40000, MY_FLAG);
>  
>  	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
>  
> diff --git a/tests/kms_addfb_basic.c b/tests/kms_addfb_basic.c
> index fc16b8814..4f293c2ee 100644
> --- a/tests/kms_addfb_basic.c
> +++ b/tests/kms_addfb_basic.c
> @@ -199,7 +199,7 @@ static void invalid_tests(int fd)
>  			handle = gem_create_in_memory_regions(fd, size, REGION_SMEM);
>  		} else {
>  			igt_require(xe_has_vram(fd));
> -			handle = xe_bo_create_flags(fd, 0, size, system_memory(fd));
> +			handle = xe_bo_create(fd, 0, size, system_memory(fd));
>  		}
>  
>  		f.handles[0] = handle;
> diff --git a/tests/kms_getfb.c b/tests/kms_getfb.c
> index 059f66d99..1f9e813d8 100644
> --- a/tests/kms_getfb.c
> +++ b/tests/kms_getfb.c
> @@ -149,7 +149,7 @@ static void get_ccs_fb(int fd, struct drm_mode_fb_cmd2 *ret)
>  	if (is_i915_device(fd))
>  		add.handles[0] = gem_buffer_create_fb_obj(fd, size);
>  	else
> -		add.handles[0] = xe_bo_create_flags(fd, 0, size, vram_if_possible(fd, 0));
> +		add.handles[0] = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0));
>  	igt_require(add.handles[0] != 0);
>  
>  	if (!HAS_FLATCCS(devid))
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible Francois Dugast
@ 2023-11-21 17:40   ` Kamil Konieczny
  2023-11-28 19:49     ` Francois Dugast
  0 siblings, 1 reply; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-21 17:40 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-16 at 14:53:40 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Let the caller set the flag, and let __xe_bo_create()
> clear it if not needed.
> 
> Although the current helper makes the code cleaner, the
> goal is to split the single flags argument of xe_bo_create
> into two separate arguments, placement and flags. So, the
> flag decision cannot be hidden inside the helper.
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

Overall looks good. I am thinking about adding an igt_debug
at __xe_bo_create (see below), but that can go in a separate
follow-up patch.

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  benchmarks/gem_wsim.c              |  3 ++-
>  lib/igt_draw.c                     |  3 ++-
>  lib/igt_fb.c                       |  3 ++-
>  lib/intel_batchbuffer.c            |  6 ++++--
>  lib/xe/xe_ioctl.c                  | 19 +++++++++++++++++++
>  lib/xe/xe_query.c                  | 26 --------------------------
>  lib/xe/xe_query.h                  |  1 -
>  lib/xe/xe_spin.c                   |  7 ++++---
>  tests/intel/kms_ccs.c              |  3 ++-
>  tests/intel/xe_dma_buf_sync.c      |  3 ++-
>  tests/intel/xe_exec_balancer.c     |  9 ++++++---
>  tests/intel/xe_exec_basic.c        |  2 +-
>  tests/intel/xe_exec_compute_mode.c |  3 ++-
>  tests/intel/xe_exec_fault_mode.c   |  6 ++++--
>  tests/intel/xe_exec_reset.c        | 14 +++++++++-----
>  tests/intel/xe_exec_store.c        |  9 ++++++---
>  tests/intel/xe_exec_threads.c      |  9 ++++++---
>  tests/intel/xe_guc_pc.c            |  3 ++-
>  tests/intel/xe_mmap.c              |  9 ++++++---
>  tests/intel/xe_pm.c                |  3 ++-
>  tests/intel/xe_pm_residency.c      |  3 ++-
>  tests/intel/xe_prime_self_import.c | 27 ++++++++++++++++++---------
>  tests/intel/xe_vm.c                | 21 ++++++++++++++-------
>  23 files changed, 115 insertions(+), 77 deletions(-)
> 
> diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
> index d6d3deb5f..966d9b465 100644
> --- a/benchmarks/gem_wsim.c
> +++ b/benchmarks/gem_wsim.c
> @@ -1735,7 +1735,8 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
>  	int i;
>  
>  	w->bb_handle = xe_bo_create(fd, vm->id, PAGE_SIZE,
> -				visible_vram_if_possible(fd, eq->hwe_list[0].gt_id));
> +				    vram_if_possible(fd, eq->hwe_list[0].gt_id) |
> +				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	w->xe.data = xe_bo_map(fd, w->bb_handle, PAGE_SIZE);
>  	w->xe.exec.address =
>  		intel_allocator_alloc_with_strategy(vm->ahnd, w->bb_handle, PAGE_SIZE,
> diff --git a/lib/igt_draw.c b/lib/igt_draw.c
> index 5935eb058..b16afd799 100644
> --- a/lib/igt_draw.c
> +++ b/lib/igt_draw.c
> @@ -797,7 +797,8 @@ static void draw_rect_render(int fd, struct cmd_data *cmd_data,
>  	else
>  		tmp.handle = xe_bo_create(fd, 0,
>  					  ALIGN(tmp.size, xe_get_default_alignment(fd)),
> -					  visible_vram_if_possible(fd, 0));
> +					  vram_if_possible(fd, 0) |
> +					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	tmp.stride = rect->w * pixel_size;
>  	tmp.bpp = buf->bpp;
> diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> index f96dca7a4..0a6aa27c8 100644
> --- a/lib/igt_fb.c
> +++ b/lib/igt_fb.c
> @@ -1206,7 +1206,8 @@ static int create_bo_for_fb(struct igt_fb *fb, bool prefer_sysmem)
>  			igt_assert(err == 0 || err == -EOPNOTSUPP);
>  		} else if (is_xe_device(fd)) {
>  			fb->gem_handle = xe_bo_create(fd, 0, fb->size,
> -						      visible_vram_if_possible(fd, 0));
> +						      vram_if_possible(fd, 0)
> +						      | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		} else if (is_vc4_device(fd)) {
>  			fb->gem_handle = igt_vc4_create_bo(fd, fb->size);
>  
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 7fa4e3487..45b1665f7 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -945,7 +945,8 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
>  
>  		ibb->alignment = xe_get_default_alignment(fd);
>  		size = ALIGN(size, ibb->alignment);
> -		ibb->handle = xe_bo_create(fd, 0, size, visible_vram_if_possible(fd, 0));
> +		ibb->handle = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0) |
> +					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  		/* Limit to 48-bit due to MI_* address limitation */
>  		ibb->gtt_size = 1ull << min_t(uint32_t, xe_va_bits(fd), 48);
> @@ -1404,7 +1405,8 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>  		ibb->handle = gem_create(ibb->fd, ibb->size);
>  	else
>  		ibb->handle = xe_bo_create(ibb->fd, 0, ibb->size,
> -					   visible_vram_if_possible(ibb->fd, 0));
> +					   vram_if_possible(ibb->fd, 0) |
> +					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	/* Reacquire offset for RELOC and SIMPLE */
>  	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index 63fa2ae25..1d63081d6 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -226,6 +226,18 @@ void xe_vm_destroy(int fd, uint32_t vm)
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_DESTROY, &destroy), 0);
>  }
>  
> +static bool vram_selected(int fd, uint32_t selected_regions)
> +{
> +	uint64_t regions = all_memory_regions(fd) & selected_regions;
> +	uint64_t region;
> +
> +	xe_for_each_mem_region(fd, regions, region)
> +		if (xe_mem_region(fd, region)->mem_class == DRM_XE_MEM_REGION_CLASS_VRAM)
> +			return true;
> +
> +	return false;
> +}
> +
>  uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
>  			uint32_t *handle)
>  {
> @@ -236,6 +248,13 @@ uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
>  	};
>  	int err;
>  
> +	/*
> +	 * In case vram_if_possible returned system_memory,
> +	 *  visible VRAM cannot be requested through flags
-------^
Remove the stray space before the word "visible".
I am not sure, but maybe we should add an igt_debug here when
		flags & DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM

Adding Bhanu and Juha-Pekka to Cc.

Regards,
Kamil


> +	 */
> +	if (!vram_selected(fd, flags))
> +		create.flags &= ~DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
> +
>  	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
>  	if (err)
>  		return err;
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index afd443be3..760a150db 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -442,32 +442,6 @@ uint64_t vram_if_possible(int fd, int gt)
>  	return vram_memory(fd, gt) ?: system_memory(fd);
>  }
>  
> -/**
> - * visible_vram_if_possible:
> - * @fd: xe device fd
> - * @gt: gt id
> - *
> - * Returns vram memory bitmask for xe device @fd and @gt id or system memory if
> - * there's no vram memory available for @gt. Also attaches the
> - * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
> - * when using vram.
> - */
> -uint64_t visible_vram_if_possible(int fd, int gt)
> -{
> -	uint64_t regions = all_memory_regions(fd);
> -	uint64_t system_memory = regions & 0x1;
> -	uint64_t vram = regions & (0x2 << gt);
> -
> -	/*
> -	 * TODO: Keep it backwards compat for now. Fixup once the kernel side
> -	 * has landed.
> -	 */
> -	if (__xe_visible_vram_size(fd, gt))
> -		return vram ? vram | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
> -	else
> -		return vram ? vram : system_memory; /* older kernel */
> -}
> -
>  /**
>   * xe_hw_engines:
>   * @fd: xe device fd
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index 7b3fc3100..4dd0ad573 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -82,7 +82,6 @@ uint64_t system_memory(int fd);
>  uint64_t vram_memory(int fd, int gt);
>  uint64_t visible_vram_memory(int fd, int gt);
>  uint64_t vram_if_possible(int fd, int gt);
> -uint64_t visible_vram_if_possible(int fd, int gt);
>  struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
>  struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
>  struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index 828938434..270b58bf5 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -220,7 +220,8 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
>  	}
>  
>  	spin->handle = xe_bo_create(fd, spin->vm, bo_size,
> -				    visible_vram_if_possible(fd, 0));
> +				    vram_if_possible(fd, 0) |
> +				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	xe_spin = xe_bo_map(fd, spin->handle, bo_size);
>  	addr = intel_allocator_alloc_with_strategy(ahnd, spin->handle, bo_size, 0, ALLOC_STRATEGY_LOW_TO_HIGH);
>  	xe_vm_bind_sync(fd, spin->vm, spin->handle, 0, addr, bo_size);
> @@ -298,8 +299,8 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
>  
>  	vm = xe_vm_create(fd, 0, 0);
>  
> -	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, hwe->gt_id));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, hwe->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	spin = xe_bo_map(fd, bo, 0x1000);
>  
>  	xe_vm_bind_sync(fd, vm, bo, 0, addr, bo_size);
> diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c
> index 337afc00c..5ae28615f 100644
> --- a/tests/intel/kms_ccs.c
> +++ b/tests/intel/kms_ccs.c
> @@ -435,7 +435,8 @@ static void test_bad_ccs_plane(data_t *data, int width, int height, int ccs_plan
>  		bad_ccs_bo = is_i915_device(data->drm_fd) ?
>  				gem_create(data->drm_fd, fb.size) :
>  				xe_bo_create(data->drm_fd, 0, fb.size,
> -					     visible_vram_if_possible(data->drm_fd, 0));
> +					     vram_if_possible(data->drm_fd, 0) |
> +					     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		f.handles[ccs_plane] = bad_ccs_bo;
>  	}
>  
> diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
> index ac9d9d767..9318647af 100644
> --- a/tests/intel/xe_dma_buf_sync.c
> +++ b/tests/intel/xe_dma_buf_sync.c
> @@ -120,7 +120,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
>  			xe_get_default_alignment(fd[0]));
>  	for (i = 0; i < n_bo; ++i) {
>  		bo[i] = xe_bo_create(fd[0], 0, bo_size,
> -				     visible_vram_if_possible(fd[0], hwe0->gt_id));
> +				     vram_if_possible(fd[0], hwe0->gt_id) |
> +				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		dma_buf_fd[i] = prime_handle_to_fd(fd[0], bo[i]);
>  		import_bo[i] = prime_fd_to_handle(fd[1], dma_buf_fd[i]);
>  
> diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> index da34e117d..388bb6185 100644
> --- a/tests/intel/xe_exec_balancer.c
> +++ b/tests/intel/xe_exec_balancer.c
> @@ -70,7 +70,8 @@ static void test_all_active(int fd, int gt, int class)
>  	bo_size = sizeof(*data) * num_placements;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < num_placements; i++) {
> @@ -224,7 +225,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  		}
>  		memset(data, 0, bo_size);
>  	} else {
> -		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  
> @@ -452,7 +454,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  			igt_assert(data);
>  		}
>  	} else {
> -		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index 841696b68..ca287b2e5 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -136,7 +136,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  	} else {
>  		uint32_t bo_flags;
>  
> -		bo_flags = visible_vram_if_possible(fd, eci->gt_id);
> +		bo_flags = vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
>  		if (flags & DEFER_ALLOC)
>  			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
>  
> diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> index beb962f79..07a27fd29 100644
> --- a/tests/intel/xe_exec_compute_mode.c
> +++ b/tests/intel/xe_exec_compute_mode.c
> @@ -142,7 +142,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		}
>  	} else {
>  		bo = xe_bo_create(fd, flags & VM_FOR_BO ? vm : 0,
> -				  bo_size, visible_vram_if_possible(fd, eci->gt_id));
> +				  bo_size, vram_if_possible(fd, eci->gt_id) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> index 903ad430d..bfd61c4ea 100644
> --- a/tests/intel/xe_exec_fault_mode.c
> +++ b/tests/intel/xe_exec_fault_mode.c
> @@ -153,10 +153,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		if (flags & PREFETCH)
>  			bo = xe_bo_create(fd, 0, bo_size,
>  					  all_memory_regions(fd) |
> -					  visible_vram_if_possible(fd, 0));
> +					  vram_if_possible(fd, 0) |
> +					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		else
>  			bo = xe_bo_create(fd, 0, bo_size,
> -					  visible_vram_if_possible(fd, eci->gt_id));
> +					  vram_if_possible(fd, eci->gt_id) |
> +					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index 704690e83..3affb19ae 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -51,7 +51,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
>  			xe_get_default_alignment(fd));
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, eci->gt_id));
> +			  vram_if_possible(fd, eci->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	spin = xe_bo_map(fd, bo, bo_size);
>  
>  	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> @@ -181,7 +182,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> @@ -368,7 +370,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  			xe_get_default_alignment(fd));
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, eci->gt_id));
> +			  vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> @@ -535,7 +537,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  			xe_get_default_alignment(fd));
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, eci->gt_id));
> +			  vram_if_possible(fd, eci->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  	memset(data, 0, bo_size);
>  
> @@ -661,7 +664,8 @@ static void submit_jobs(struct gt_thread_data *t)
>  	uint32_t bo;
>  	uint32_t *data;
>  
> -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  	data[0] = MI_BATCH_BUFFER_END;
>  
> diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> index bcc4de8d0..884183202 100644
> --- a/tests/intel/xe_exec_store.c
> +++ b/tests/intel/xe_exec_store.c
> @@ -82,7 +82,8 @@ static void store(int fd)
>  
>  	hw_engine = xe_hw_engine(fd, 1);
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, hw_engine->gt_id));
> +			  vram_if_possible(fd, hw_engine->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
>  	data = xe_bo_map(fd, bo, bo_size);
> @@ -151,7 +152,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	for (i = 0; i < count; i++) {
>  		bo[i] = xe_bo_create(fd, vm, bo_size,
> -				     visible_vram_if_possible(fd, eci->gt_id));
> +				     vram_if_possible(fd, eci->gt_id) |
> +				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		bo_map[i] = xe_bo_map(fd, bo[i], bo_size);
>  		dst_offset[i] = intel_allocator_alloc_with_strategy(ahnd, bo[i],
>  								    bo_size, 0,
> @@ -236,7 +238,8 @@ static void store_all(int fd, int gt, int class)
>  			xe_get_default_alignment(fd));
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, 0));
> +			  vram_if_possible(fd, 0) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	xe_for_each_hw_engine(fd, hwe) {
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index a9b0c0b09..ebc41dadd 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -107,7 +107,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		}
>  	} else {
>  		bo = xe_bo_create(fd, vm, bo_size,
> -				  visible_vram_if_possible(fd, gt));
> +				  vram_if_possible(fd, gt) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> @@ -308,7 +309,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		}
>  	} else {
>  		bo = xe_bo_create(fd, 0, bo_size,
> -				  visible_vram_if_possible(fd, eci->gt_id));
> +				  vram_if_possible(fd, eci->gt_id) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> @@ -511,7 +513,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		}
>  	} else {
>  		bo = xe_bo_create(fd, vm, bo_size,
> -				  visible_vram_if_possible(fd, eci->gt_id));
> +				  vram_if_possible(fd, eci->gt_id) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(data, 0, bo_size);
> diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> index 1e29d8905..4234475e0 100644
> --- a/tests/intel/xe_guc_pc.c
> +++ b/tests/intel/xe_guc_pc.c
> @@ -66,7 +66,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
>  			xe_get_default_alignment(fd));
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, eci->gt_id));
> +			  vram_if_possible(fd, eci->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
> index a805eabda..a4b53ad48 100644
> --- a/tests/intel/xe_mmap.c
> +++ b/tests/intel/xe_mmap.c
> @@ -73,7 +73,8 @@ static void test_bad_flags(int fd)
>  	uint64_t size = xe_get_default_alignment(fd);
>  	struct drm_xe_gem_mmap_offset mmo = {
>  		.handle = xe_bo_create(fd, 0, size,
> -				       visible_vram_if_possible(fd, 0)),
> +				       vram_if_possible(fd, 0) |
> +				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
>  		.flags = -1u,
>  	};
>  
> @@ -93,7 +94,8 @@ static void test_bad_extensions(int fd)
>  	struct xe_user_extension ext;
>  	struct drm_xe_gem_mmap_offset mmo = {
>  		.handle = xe_bo_create(fd, 0, size,
> -				       visible_vram_if_possible(fd, 0)),
> +				       vram_if_possible(fd, 0) |
> +				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
>  	};
>  
>  	mmo.extensions = to_user_pointer(&ext);
> @@ -114,7 +116,8 @@ static void test_bad_object(int fd)
>  	uint64_t size = xe_get_default_alignment(fd);
>  	struct drm_xe_gem_mmap_offset mmo = {
>  		.handle = xe_bo_create(fd, 0, size,
> -				       visible_vram_if_possible(fd, 0)),
> +				       vram_if_possible(fd, 0) |
> +				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
>  	};
>  
>  	mmo.handle = 0xdeadbeef;
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index 9bfe1acad..9fd3527f7 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -272,7 +272,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
>  		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
>  
>  	bo = xe_bo_create(device.fd_xe, vm, bo_size,
> -			  visible_vram_if_possible(device.fd_xe, eci->gt_id));
> +			  vram_if_possible(device.fd_xe, eci->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(device.fd_xe, bo, bo_size);
>  
>  	for (i = 0; i < n_exec_queues; i++) {
> diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
> index cc133f5fb..40a1693b8 100644
> --- a/tests/intel/xe_pm_residency.c
> +++ b/tests/intel/xe_pm_residency.c
> @@ -101,7 +101,8 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
>  	bo_size = xe_get_default_alignment(fd);
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, hwe->gt_id));
> +			  vram_if_possible(fd, hwe->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  	syncobj = syncobj_create(fd, 0);
>  
> diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
> index 378368eaa..2c2f2898c 100644
> --- a/tests/intel/xe_prime_self_import.c
> +++ b/tests/intel/xe_prime_self_import.c
> @@ -105,7 +105,8 @@ static void test_with_fd_dup(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	dma_buf_fd1 = prime_handle_to_fd(fd1, handle);
>  	gem_close(fd1, handle);
> @@ -138,8 +139,10 @@ static void test_with_two_bos(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle1 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> -	handle2 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> +			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> +	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> +			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	dma_buf_fd = prime_handle_to_fd(fd1, handle1);
>  	handle_import = prime_fd_to_handle(fd2, dma_buf_fd);
> @@ -175,7 +178,8 @@ static void test_with_one_bo_two_files(void)
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
>  	handle_orig = xe_bo_create(fd1, 0, bo_size,
> -				   visible_vram_if_possible(fd1, 0));
> +				   vram_if_possible(fd1, 0) |
> +				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	dma_buf_fd1 = prime_handle_to_fd(fd1, handle_orig);
>  
>  	flink_name = gem_flink(fd1, handle_orig);
> @@ -207,7 +211,8 @@ static void test_with_one_bo(void)
>  	fd1 = drm_open_driver(DRIVER_XE);
>  	fd2 = drm_open_driver(DRIVER_XE);
>  
> -	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> +	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	dma_buf_fd = prime_handle_to_fd(fd1, handle);
>  	handle_import1 = prime_fd_to_handle(fd2, dma_buf_fd);
> @@ -294,7 +299,8 @@ static void *thread_fn_reimport_vs_close(void *p)
>  	fds[0] = drm_open_driver(DRIVER_XE);
>  
>  	handle = xe_bo_create(fds[0], 0, bo_size,
> -			      visible_vram_if_possible(fds[0], 0));
> +			      vram_if_possible(fds[0], 0) |
> +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
>  	fds[1] = prime_handle_to_fd(fds[0], handle);
>  	pthread_barrier_init(&g_barrier, NULL, num_threads);
> @@ -337,7 +343,8 @@ static void *thread_fn_export_vs_close(void *p)
>  	igt_until_timeout(g_time_out) {
>  		/* We want to race gem close against prime export on handle one.*/
>  		handle = xe_bo_create(fd, 0, bo_size,
> -				      visible_vram_if_possible(fd, 0));
> +				      vram_if_possible(fd, 0) |
> +				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		if (handle != 1)
>  			gem_close(fd, handle);
>  
> @@ -434,7 +441,8 @@ static void test_llseek_size(void)
>  		int bufsz = xe_get_default_alignment(fd) << i;
>  
>  		handle = xe_bo_create(fd, 0, bufsz,
> -				      visible_vram_if_possible(fd, 0));
> +				      vram_if_possible(fd, 0) |
> +				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		dma_buf_fd = prime_handle_to_fd(fd, handle);
>  
>  		gem_close(fd, handle);
> @@ -463,7 +471,8 @@ static void test_llseek_bad(void)
>  	fd = drm_open_driver(DRIVER_XE);
>  
>  	handle = xe_bo_create(fd, 0, bo_size,
> -			      visible_vram_if_possible(fd, 0));
> +			      vram_if_possible(fd, 0) |
> +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	dma_buf_fd = prime_handle_to_fd(fd, handle);
>  
>  	gem_close(fd, handle);
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 52195737c..eb2e0078d 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -52,7 +52,8 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
>  	batch_size = ALIGN(batch_size + xe_cs_prefetch_size(fd),
>  			   xe_get_default_alignment(fd));
>  	batch_bo = xe_bo_create(fd, vm, batch_size,
> -				visible_vram_if_possible(fd, 0));
> +				vram_if_possible(fd, 0) |
> +				DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	batch_map = xe_bo_map(fd, batch_bo, batch_size);
>  
>  	for (i = 0; i < n_dwords; i++) {
> @@ -116,7 +117,8 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
>  		vms = malloc(sizeof(*vms) * n_addrs);
>  		igt_assert(vms);
>  	}
> -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
> +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	map = xe_bo_map(fd, bo, bo_size);
>  	memset(map, 0, bo_size);
>  
> @@ -422,7 +424,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  
>  	for (i = 0; i < n_bo; ++i) {
>  		bo[i] = xe_bo_create(fd, vm, bo_size,
> -				     visible_vram_if_possible(fd, eci->gt_id));
> +				     vram_if_possible(fd, eci->gt_id) |
> +				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		data[i] = xe_bo_map(fd, bo[i], bo_size);
>  	}
>  
> @@ -601,7 +604,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, eci->gt_id));
> +			  vram_if_possible(fd, eci->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	for (i = 0; i < N_EXEC_QUEUES; i++) {
> @@ -782,7 +786,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  			xe_get_default_alignment(fd));
>  
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  visible_vram_if_possible(fd, eci->gt_id));
> +			  vram_if_possible(fd, eci->gt_id) |
> +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
>  	if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
> @@ -980,7 +985,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
>  			    xe_visible_vram_size(fd, 0));
>  
>  		bo = xe_bo_create(fd, vm, bo_size,
> -				  visible_vram_if_possible(fd, eci->gt_id));
> +				  vram_if_possible(fd, eci->gt_id) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		map = xe_bo_map(fd, bo, bo_size);
>  	}
>  
> @@ -1272,7 +1278,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  		igt_assert(map != MAP_FAILED);
>  	} else {
>  		bo = xe_bo_create(fd, vm, bo_size,
> -				  visible_vram_if_possible(fd, eci->gt_id));
> +				  vram_if_possible(fd, eci->gt_id) |
> +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  		map = xe_bo_map(fd, bo, bo_size);
>  	}
>  	memset(map, 0, bo_size);
> -- 
> 2.34.1
> 


* Re: [igt-dev] [PATCH v1 07/13] xe: s/hw_engine/engine
  2023-11-16 14:53 ` [igt-dev] [PATCH v1 07/13] xe: s/hw_engine/engine Francois Dugast
@ 2023-11-21 18:15   ` Kamil Konieczny
  0 siblings, 0 replies; 30+ messages in thread
From: Kamil Konieczny @ 2023-11-21 18:15 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-16 at 14:53:42 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> HW engine is redundant after exec_queue name was created.
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

Btw there is:

tests/intel/xe_debugfs.c:               "hw_engines",
tests/intel/xe_debugfs.c:       sprintf(name, "gt%d/hw_engines", gt_id);

Regards,
Kamil

> ---
>  benchmarks/gem_wsim.c              |  8 ++---
>  lib/xe/xe_query.c                  | 36 ++++++++++----------
>  lib/xe/xe_query.h                  | 22 ++++++------
>  tests/intel/xe_create.c            |  4 +--
>  tests/intel/xe_dma_buf_sync.c      |  2 +-
>  tests/intel/xe_drm_fdinfo.c        |  2 +-
>  tests/intel/xe_evict.c             |  2 +-
>  tests/intel/xe_exec_balancer.c     | 28 ++++++++--------
>  tests/intel/xe_exec_basic.c        | 12 +++----
>  tests/intel/xe_exec_compute_mode.c |  8 ++---
>  tests/intel/xe_exec_fault_mode.c   |  8 ++---
>  tests/intel/xe_exec_reset.c        | 44 ++++++++++++------------
>  tests/intel/xe_exec_store.c        | 18 +++++-----
>  tests/intel/xe_exec_threads.c      | 24 ++++++-------
>  tests/intel/xe_guc_pc.c            |  4 +--
>  tests/intel/xe_huc_copy.c          |  2 +-
>  tests/intel/xe_intel_bb.c          |  2 +-
>  tests/intel/xe_noexec_ping_pong.c  |  2 +-
>  tests/intel/xe_perf_pmu.c          |  6 ++--
>  tests/intel/xe_pm.c                | 14 ++++----
>  tests/intel/xe_pm_residency.c      |  2 +-
>  tests/intel/xe_query.c             |  6 ++--
>  tests/intel/xe_spin_batch.c        | 10 +++---
>  tests/intel/xe_vm.c                | 54 +++++++++++++++---------------
>  24 files changed, 160 insertions(+), 160 deletions(-)
> 
> diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
> index d134b2dea..d451b8733 100644
> --- a/benchmarks/gem_wsim.c
> +++ b/benchmarks/gem_wsim.c
> @@ -542,7 +542,7 @@ static struct intel_engine_data *query_engines(void)
>  	if (is_xe) {
>  		struct drm_xe_engine_class_instance *hwe;
>  
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			engines.engines[engines.nengines].class = hwe->engine_class;
>  			engines.engines[engines.nengines].instance = hwe->engine_instance;
>  			engines.nengines++;
> @@ -669,7 +669,7 @@ xe_get_engine(enum intel_engine_id engine)
>  		igt_assert(0);
>  	};
>  
> -	xe_for_each_hw_engine(fd, hwe1) {
> +	xe_for_each_engine(fd, hwe1) {
>  		if (hwe.engine_class == hwe1->engine_class &&
>  		    hwe.engine_instance  == hwe1->engine_instance) {
>  			hwe = *hwe1;
> @@ -688,8 +688,8 @@ xe_get_default_engine(void)
>  	struct drm_xe_engine_class_instance default_hwe, *hwe;
>  
>  	/* select RCS0 | CCS0 or first available engine */
> -	default_hwe = *xe_hw_engine(fd, 0);
> -	xe_for_each_hw_engine(fd, hwe) {
> +	default_hwe = *xe_engine(fd, 0);
> +	xe_for_each_engine(fd, hwe) {
>  		if ((hwe->engine_class == DRM_XE_ENGINE_CLASS_RENDER ||
>  		     hwe->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE) &&
>  		    hwe->engine_instance == 0) {
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index fa17b46b6..ef7aaa6a1 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -75,7 +75,7 @@ static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
>  static struct drm_xe_engine_class_instance *
>  xe_query_engines_new(int fd, unsigned int *num_engines)
>  {
> -	struct drm_xe_engine_class_instance *hw_engines;
> +	struct drm_xe_engine_class_instance *engines;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
>  		.query = DRM_XE_DEVICE_QUERY_ENGINES,
> @@ -86,15 +86,15 @@ xe_query_engines_new(int fd, unsigned int *num_engines)
>  	igt_assert(num_engines);
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	hw_engines = malloc(query.size);
> -	igt_assert(hw_engines);
> +	engines = malloc(query.size);
> +	igt_assert(engines);
>  
> -	query.data = to_user_pointer(hw_engines);
> +	query.data = to_user_pointer(engines);
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	*num_engines = query.size / sizeof(*hw_engines);
> +	*num_engines = query.size / sizeof(*engines);
>  
> -	return hw_engines;
> +	return engines;
>  }
>  
>  static struct drm_xe_query_mem_regions *xe_query_mem_regions_new(int fd)
> @@ -221,7 +221,7 @@ static void xe_device_free(struct xe_device *xe_dev)
>  {
>  	free(xe_dev->config);
>  	free(xe_dev->gt_list);
> -	free(xe_dev->hw_engines);
> +	free(xe_dev->engines);
>  	free(xe_dev->mem_regions);
>  	free(xe_dev->vram_size);
>  	free(xe_dev);
> @@ -253,7 +253,7 @@ struct xe_device *xe_device_get(int fd)
>  	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
>  	xe_dev->gt_list = xe_query_gt_list_new(fd);
>  	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
> -	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
> +	xe_dev->engines = xe_query_engines_new(fd, &xe_dev->number_engines);
>  	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
>  	xe_dev->vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->vram_size));
>  	xe_dev->visible_vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->visible_vram_size));
> @@ -422,29 +422,29 @@ uint64_t vram_if_possible(int fd, int gt)
>  }
>  
>  /**
> - * xe_hw_engines:
> + * xe_engines:
>   * @fd: xe device fd
>   *
>   * Returns engines array of xe device @fd.
>   */
> -xe_dev_FN(xe_hw_engines, hw_engines, struct drm_xe_engine_class_instance *);
> +xe_dev_FN(xe_engines, engines, struct drm_xe_engine_class_instance *);
>  
>  /**
> - * xe_hw_engine:
> + * xe_engine:
>   * @fd: xe device fd
>   * @idx: engine index
>   *
>   * Returns engine instance of xe device @fd and @idx.
>   */
> -struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx)
> +struct drm_xe_engine_class_instance *xe_engine(int fd, int idx)
>  {
>  	struct xe_device *xe_dev;
>  
>  	xe_dev = find_in_cache(fd);
>  	igt_assert(xe_dev);
> -	igt_assert(idx >= 0 && idx < xe_dev->number_hw_engines);
> +	igt_assert(idx >= 0 && idx < xe_dev->number_engines);
>  
> -	return &xe_dev->hw_engines[idx];
> +	return &xe_dev->engines[idx];
>  }
>  
>  /**
> @@ -529,12 +529,12 @@ uint32_t xe_min_page_size(int fd, uint64_t region)
>  xe_dev_FN(xe_config, config, struct drm_xe_query_config *);
>  
>  /**
> - * xe_number_hw_engine:
> + * xe_number_engine:
>   * @fd: xe device fd
>   *
>   * Returns number of hw engines of xe device @fd.
>   */
> -xe_dev_FN(xe_number_hw_engines, number_hw_engines, unsigned int);
> +xe_dev_FN(xe_number_engines, number_engines, unsigned int);
>  
>  /**
>   * xe_has_vram:
> @@ -657,8 +657,8 @@ bool xe_has_engine_class(int fd, uint16_t engine_class)
>  	xe_dev = find_in_cache(fd);
>  	igt_assert(xe_dev);
>  
> -	for (int i = 0; i < xe_dev->number_hw_engines; i++)
> -		if (xe_dev->hw_engines[i].engine_class == engine_class)
> +	for (int i = 0; i < xe_dev->number_engines; i++)
> +		if (xe_dev->engines[i].engine_class == engine_class)
>  			return true;
>  
>  	return false;
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index be92ec5ed..bf9f2b955 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -32,11 +32,11 @@ struct xe_device {
>  	/** @gt_list: bitmask of all memory regions */
>  	uint64_t memory_regions;
>  
> -	/** @hw_engines: array of hardware engines */
> -	struct drm_xe_engine_class_instance *hw_engines;
> +	/** @engines: array of hardware engines */
> +	struct drm_xe_engine_class_instance *engines;
>  
> -	/** @number_hw_engines: length of hardware engines array */
> -	unsigned int number_hw_engines;
> +	/** @number_engines: length of hardware engines array */
> +	unsigned int number_engines;
>  
>  	/** @mem_regions: regions memory information and usage */
>  	struct drm_xe_query_mem_regions *mem_regions;
> @@ -60,10 +60,10 @@ struct xe_device {
>  	uint16_t dev_id;
>  };
>  
> -#define xe_for_each_hw_engine(__fd, __hwe) \
> -	for (int __i = 0; __i < xe_number_hw_engines(__fd) && \
> -	     (__hwe = xe_hw_engine(__fd, __i)); ++__i)
> -#define xe_for_each_hw_engine_class(__class) \
> +#define xe_for_each_engine(__fd, __hwe) \
> +	for (int __i = 0; __i < xe_number_engines(__fd) && \
> +	     (__hwe = xe_engine(__fd, __i)); ++__i)
> +#define xe_for_each_engine_class(__class) \
>  	for (__class = 0; __class < DRM_XE_ENGINE_CLASS_COMPUTE + 1; \
>  	     ++__class)
>  #define xe_for_each_gt(__fd, __gt) \
> @@ -81,14 +81,14 @@ uint64_t all_memory_regions(int fd);
>  uint64_t system_memory(int fd);
>  uint64_t vram_memory(int fd, int gt);
>  uint64_t vram_if_possible(int fd, int gt);
> -struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
> -struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
> +struct drm_xe_engine_class_instance *xe_engines(int fd);
> +struct drm_xe_engine_class_instance *xe_engine(int fd, int idx);
>  struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
>  const char *xe_region_name(uint64_t region);
>  uint16_t xe_region_class(int fd, uint64_t region);
>  uint32_t xe_min_page_size(int fd, uint64_t region);
>  struct drm_xe_query_config *xe_config(int fd);
> -unsigned int xe_number_hw_engines(int fd);
> +unsigned int xe_number_engines(int fd);
>  bool xe_has_vram(int fd);
>  uint64_t xe_vram_size(int fd, int gt);
>  uint64_t xe_visible_vram_size(int fd, int gt);
> diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
> index 4326b15e8..9d71b7463 100644
> --- a/tests/intel/xe_create.c
> +++ b/tests/intel/xe_create.c
> @@ -139,7 +139,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
>  	int nproc = sysconf(_SC_NPROCESSORS_ONLN), seconds;
>  
>  	fd = drm_reopen_driver(fd);
> -	num_engines = xe_number_hw_engines(fd);
> +	num_engines = xe_number_engines(fd);
>  	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
> @@ -156,7 +156,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
>  
>  		for (i = 0; i < exec_queues_per_process; i++) {
>  			idx = rand() % num_engines;
> -			hwe = xe_hw_engine(fd, idx);
> +			hwe = xe_engine(fd, idx);
>  			err = __xe_exec_queue_create(fd, vm, hwe, 0, &exec_queue);
>  			igt_debug("[%2d] Create exec_queue: err=%d, exec_queue=%u [idx = %d]\n",
>  				  n, err, exec_queue, i);
> diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
> index aeb4c4995..dfa957243 100644
> --- a/tests/intel/xe_dma_buf_sync.c
> +++ b/tests/intel/xe_dma_buf_sync.c
> @@ -229,7 +229,7 @@ igt_main
>  	igt_fixture {
>  		fd = drm_open_driver(DRIVER_XE);
>  
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			if (hwe0 == NULL) {
>  				hwe0 = hwe;
>  			} else {
> diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> index 6bca5a6f1..d50cc6df1 100644
> --- a/tests/intel/xe_drm_fdinfo.c
> +++ b/tests/intel/xe_drm_fdinfo.c
> @@ -313,7 +313,7 @@ igt_main
>  
>  	igt_describe("Create and compare active memory consumption by client");
>  	igt_subtest("drm-active")
> -		test_active(xe, xe_hw_engine(xe, 0));
> +		test_active(xe, xe_engine(xe, 0));
>  
>  	igt_fixture {
>  		drm_close_driver(xe);
> diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
> index 436a2be02..2e2960b9b 100644
> --- a/tests/intel/xe_evict.c
> +++ b/tests/intel/xe_evict.c
> @@ -759,7 +759,7 @@ igt_main
>  		vram_size = xe_visible_vram_size(fd, 0);
>  		igt_assert(vram_size);
>  
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			if (hwe->engine_class != DRM_XE_ENGINE_CLASS_COPY)
>  				break;
>  	}
> diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> index fa3d7a338..ea06c23cd 100644
> --- a/tests/intel/xe_exec_balancer.c
> +++ b/tests/intel/xe_exec_balancer.c
> @@ -57,7 +57,7 @@ static void test_all_active(int fd, int gt, int class)
>  	struct drm_xe_engine_class_instance eci[MAX_INSTANCE];
>  	int i, num_placements = 0;
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  
> @@ -199,7 +199,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  
>  	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  
> @@ -426,7 +426,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  
>  	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  
> @@ -632,25 +632,25 @@ igt_main
>  
>  	igt_subtest("virtual-all-active")
>  		xe_for_each_gt(fd, gt)
> -			xe_for_each_hw_engine_class(class)
> +			xe_for_each_engine_class(class)
>  				test_all_active(fd, gt, class);
>  
>  	for (const struct section *s = sections; s->name; s++) {
>  		igt_subtest_f("once-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_exec(fd, gt, class, 1, 1,
>  						  s->flags);
>  
>  		igt_subtest_f("twice-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_exec(fd, gt, class, 1, 2,
>  						  s->flags);
>  
>  		igt_subtest_f("many-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_exec(fd, gt, class, 1,
>  						  s->flags & (REBIND | INVALIDATE) ?
>  						  64 : 1024,
> @@ -658,7 +658,7 @@ igt_main
>  
>  		igt_subtest_f("many-execqueues-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_exec(fd, gt, class, 16,
>  						  s->flags & (REBIND | INVALIDATE) ?
>  						  64 : 1024,
> @@ -666,23 +666,23 @@ igt_main
>  
>  		igt_subtest_f("no-exec-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_exec(fd, gt, class, 1, 0,
>  						  s->flags);
>  
>  		igt_subtest_f("once-cm-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_cm(fd, gt, class, 1, 1, s->flags);
>  
>  		igt_subtest_f("twice-cm-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_cm(fd, gt, class, 1, 2, s->flags);
>  
>  		igt_subtest_f("many-cm-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_cm(fd, gt, class, 1,
>  						s->flags & (REBIND | INVALIDATE) ?
>  						64 : 1024,
> @@ -690,7 +690,7 @@ igt_main
>  
>  		igt_subtest_f("many-execqueues-cm-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_cm(fd, gt, class, 16,
>  						s->flags & (REBIND | INVALIDATE) ?
>  						64 : 1024,
> @@ -698,7 +698,7 @@ igt_main
>  
>  		igt_subtest_f("no-exec-cm-%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_cm(fd, gt, class, 1, 0, s->flags);
>  	}
>  
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index 23acdd434..46b9dc2e0 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -336,36 +336,36 @@ igt_main
>  
>  	for (const struct section *s = sections; s->name; s++) {
>  		igt_subtest_f("once-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 1, 1, s->flags);
>  
>  		igt_subtest_f("twice-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 2, 1, s->flags);
>  
>  		igt_subtest_f("many-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 1024, 1,
>  					  s->flags);
>  
>  		igt_subtest_f("many-execqueues-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 16,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 1024, 1,
>  					  s->flags);
>  
>  		igt_subtest_f("many-execqueues-many-vm-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 16,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 1024, 16,
>  					  s->flags);
>  
>  		igt_subtest_f("no-exec-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 0, 1, s->flags);
>  	}
>  
> diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> index 98a98256e..a9f69deef 100644
> --- a/tests/intel/xe_exec_compute_mode.c
> +++ b/tests/intel/xe_exec_compute_mode.c
> @@ -321,15 +321,15 @@ igt_main
>  
>  	for (const struct section *s = sections; s->name; s++) {
>  		igt_subtest_f("once-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 1, s->flags);
>  
>  		igt_subtest_f("twice-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 2, s->flags);
>  
>  		igt_subtest_f("many-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 128,
> @@ -339,7 +339,7 @@ igt_main
>  			continue;
>  
>  		igt_subtest_f("many-execqueues-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 16,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 128,
> diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> index 3eb448ef4..4c85fce76 100644
> --- a/tests/intel/xe_exec_fault_mode.c
> +++ b/tests/intel/xe_exec_fault_mode.c
> @@ -386,22 +386,22 @@ igt_main
>  
>  	for (const struct section *s = sections; s->name; s++) {
>  		igt_subtest_f("once-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 1, s->flags);
>  
>  		igt_subtest_f("twice-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1, 2, s->flags);
>  
>  		igt_subtest_f("many-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 1,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 128,
>  					  s->flags);
>  
>  		igt_subtest_f("many-execqueues-%s", s->name)
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				test_exec(fd, hwe, 16,
>  					  s->flags & (REBIND | INVALIDATE) ?
>  					  64 : 128,
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index d8b8e0355..988e63438 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -168,7 +168,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	if (flags & CLOSE_FD)
>  		fd = drm_open_driver(DRIVER_XE);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  
> @@ -790,106 +790,106 @@ igt_main
>  		fd = drm_open_driver(DRIVER_XE);
>  
>  	igt_subtest("spin")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_spin(fd, hwe);
>  
>  	igt_subtest("cancel")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(fd, hwe, 1, 1, CANCEL);
>  
>  	igt_subtest("execqueue-reset")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(fd, hwe, 2, 2, EXEC_QUEUE_RESET);
>  
>  	igt_subtest("cat-error")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(fd, hwe, 2, 2, CAT_ERROR);
>  
>  	igt_subtest("gt-reset")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(fd, hwe, 2, 2, GT_RESET);
>  
>  	igt_subtest("close-fd-no-exec")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(-1, hwe, 16, 0, CLOSE_FD);
>  
>  	igt_subtest("close-fd")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(-1, hwe, 16, 256, CLOSE_FD);
>  
>  	igt_subtest("close-execqueues-close-fd")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_legacy_mode(-1, hwe, 16, 256, CLOSE_FD |
>  					 CLOSE_EXEC_QUEUES);
>  
>  	igt_subtest("cm-execqueue-reset")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_compute_mode(fd, hwe, 2, 2, EXEC_QUEUE_RESET);
>  
>  	igt_subtest("cm-cat-error")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_compute_mode(fd, hwe, 2, 2, CAT_ERROR);
>  
>  	igt_subtest("cm-gt-reset")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_compute_mode(fd, hwe, 2, 2, GT_RESET);
>  
>  	igt_subtest("cm-close-fd-no-exec")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_compute_mode(-1, hwe, 16, 0, CLOSE_FD);
>  
>  	igt_subtest("cm-close-fd")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_compute_mode(-1, hwe, 16, 256, CLOSE_FD);
>  
>  	igt_subtest("cm-close-execqueues-close-fd")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_compute_mode(-1, hwe, 16, 256, CLOSE_FD |
>  					  CLOSE_EXEC_QUEUES);
>  
>  	for (const struct section *s = sections; s->name; s++) {
>  		igt_subtest_f("%s-cancel", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(fd, gt, class, 1, 1,
>  						      CANCEL | s->flags);
>  
>  		igt_subtest_f("%s-execqueue-reset", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(fd, gt, class, MAX_INSTANCE + 1,
>  						      MAX_INSTANCE + 1,
>  						      EXEC_QUEUE_RESET | s->flags);
>  
>  		igt_subtest_f("%s-cat-error", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(fd, gt, class, MAX_INSTANCE + 1,
>  						      MAX_INSTANCE + 1,
>  						      CAT_ERROR | s->flags);
>  
>  		igt_subtest_f("%s-gt-reset", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(fd, gt, class, MAX_INSTANCE + 1,
>  						      MAX_INSTANCE + 1,
>  						      GT_RESET | s->flags);
>  
>  		igt_subtest_f("%s-close-fd-no-exec", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(-1, gt, class, 16, 0,
>  						      CLOSE_FD | s->flags);
>  
>  		igt_subtest_f("%s-close-fd", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(-1, gt, class, 16, 256,
>  						      CLOSE_FD | s->flags);
>  
>  		igt_subtest_f("%s-close-execqueues-close-fd", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					test_balancer(-1, gt, class, 16, 256, CLOSE_FD |
>  						      CLOSE_EXEC_QUEUES | s->flags);
>  	}
> diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> index 9ee5edeb4..0b7b3d3e9 100644
> --- a/tests/intel/xe_exec_store.c
> +++ b/tests/intel/xe_exec_store.c
> @@ -63,7 +63,7 @@ static void store(int fd)
>  		.syncs = to_user_pointer(&sync),
>  	};
>  	struct data *data;
> -	struct drm_xe_engine_class_instance *hw_engine;
> +	struct drm_xe_engine_class_instance *engine;
>  	uint32_t vm;
>  	uint32_t exec_queue;
>  	uint32_t syncobj;
> @@ -80,16 +80,16 @@ static void store(int fd)
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
>  
> -	hw_engine = xe_hw_engine(fd, 1);
> +	engine = xe_engine(fd, 1);
>  	bo = xe_bo_create(fd, vm, bo_size,
> -			  vram_if_possible(fd, hw_engine->gt_id),
> +			  vram_if_possible(fd, engine->gt_id),
>  			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  
> -	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
> +	xe_vm_bind_async(fd, vm, engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
>  	data = xe_bo_map(fd, bo, bo_size);
>  	store_dword_batch(data, addr, value);
>  
> -	exec_queue = xe_exec_queue_create(fd, vm, hw_engine, 0);
> +	exec_queue = xe_exec_queue_create(fd, vm, engine, 0);
>  	exec.exec_queue_id = exec_queue;
>  	exec.address = data->addr;
>  	sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
> @@ -242,7 +242,7 @@ static void store_all(int fd, int gt, int class)
>  			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>  	data = xe_bo_map(fd, bo, bo_size);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  		eci[num_placements++] = *hwe;
> @@ -309,16 +309,16 @@ igt_main
>  
>  	igt_subtest("basic-all") {
>  		xe_for_each_gt(fd, gt)
> -			xe_for_each_hw_engine_class(class)
> +			xe_for_each_engine_class(class)
>  				store_all(fd, gt, class);
>  	}
>  
>  	igt_subtest("cachelines")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			store_cachelines(fd, hwe, 0);
>  
>  	igt_subtest("page-sized")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			store_cachelines(fd, hwe, PAGES);
>  
>  	igt_fixture {
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index f37fc612a..8a01b150d 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -81,7 +81,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		owns_vm = true;
>  	}
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  
> @@ -969,22 +969,22 @@ static void threads(int fd, int flags)
>  	uint64_t userptr = 0x00007000eadbe000;
>  	pthread_mutex_t mutex;
>  	pthread_cond_t cond;
> -	int n_hw_engines = 0, class;
> +	int n_engines = 0, class;
>  	uint64_t i = 0;
>  	uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
>  	bool go = false;
>  	int n_threads = 0;
>  	int gt;
>  
> -	xe_for_each_hw_engine(fd, hwe)
> -		++n_hw_engines;
> +	xe_for_each_engine(fd, hwe)
> +		++n_engines;
>  
>  	if (flags & BALANCER) {
>  		xe_for_each_gt(fd, gt)
> -			xe_for_each_hw_engine_class(class) {
> +			xe_for_each_engine_class(class) {
>  				int num_placements = 0;
>  
> -				xe_for_each_hw_engine(fd, hwe) {
> +				xe_for_each_engine(fd, hwe) {
>  					if (hwe->engine_class != class ||
>  					    hwe->gt_id != gt)
>  						continue;
> @@ -992,11 +992,11 @@ static void threads(int fd, int flags)
>  				}
>  
>  				if (num_placements > 1)
> -					n_hw_engines += 2;
> +					n_engines += 2;
>  			}
>  	}
>  
> -	threads_data = calloc(n_hw_engines, sizeof(*threads_data));
> +	threads_data = calloc(n_engines, sizeof(*threads_data));
>  	igt_assert(threads_data);
>  
>  	pthread_mutex_init(&mutex, 0);
> @@ -1012,7 +1012,7 @@ static void threads(int fd, int flags)
>  					       0);
>  	}
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		threads_data[i].mutex = &mutex;
>  		threads_data[i].cond = &cond;
>  #define ADDRESS_SHIFT	39
> @@ -1045,10 +1045,10 @@ static void threads(int fd, int flags)
>  
>  	if (flags & BALANCER) {
>  		xe_for_each_gt(fd, gt)
> -			xe_for_each_hw_engine_class(class) {
> +			xe_for_each_engine_class(class) {
>  				int num_placements = 0;
>  
> -				xe_for_each_hw_engine(fd, hwe) {
> +				xe_for_each_engine(fd, hwe) {
>  					if (hwe->engine_class != class ||
>  					    hwe->gt_id != gt)
>  						continue;
> @@ -1123,7 +1123,7 @@ static void threads(int fd, int flags)
>  	pthread_cond_broadcast(&cond);
>  	pthread_mutex_unlock(&mutex);
>  
> -	for (i = 0; i < n_hw_engines; ++i)
> +	for (i = 0; i < n_engines; ++i)
>  		pthread_join(threads_data[i].thread, NULL);
>  
>  	if (vm_legacy_mode)
> diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> index 8d7b677b4..dd768ecdc 100644
> --- a/tests/intel/xe_guc_pc.c
> +++ b/tests/intel/xe_guc_pc.c
> @@ -415,7 +415,7 @@ igt_main
>  
>  	igt_subtest("freq_fixed_exec") {
>  		xe_for_each_gt(fd, gt) {
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				igt_fork(child, ncpus) {
>  					igt_debug("Execution Started\n");
>  					exec_basic(fd, hwe, MAX_N_EXEC_QUEUES, 16);
> @@ -437,7 +437,7 @@ igt_main
>  
>  	igt_subtest("freq_range_exec") {
>  		xe_for_each_gt(fd, gt) {
> -			xe_for_each_hw_engine(fd, hwe)
> +			xe_for_each_engine(fd, hwe)
>  				igt_fork(child, ncpus) {
>  					igt_debug("Execution Started\n");
>  					exec_basic(fd, hwe, MAX_N_EXEC_QUEUES, 16);
> diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
> index eda9e5216..dbc5afc17 100644
> --- a/tests/intel/xe_huc_copy.c
> +++ b/tests/intel/xe_huc_copy.c
> @@ -158,7 +158,7 @@ test_huc_copy(int fd)
>  
>  	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class == DRM_XE_ENGINE_CLASS_VIDEO_DECODE &&
>  		    !(tested_gts & BIT(hwe->gt_id))) {
>  			tested_gts |= BIT(hwe->gt_id);
> diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
> index 00bd17d4c..e7a566f62 100644
> --- a/tests/intel/xe_intel_bb.c
> +++ b/tests/intel/xe_intel_bb.c
> @@ -192,7 +192,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
>  
>  	if (new_context) {
>  		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> -		ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
> +		ctx = xe_exec_queue_create(xe, vm, xe_engine(xe, 0), 0);
>  		intel_bb_destroy(ibb);
>  		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
>  		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
> index 5e3349247..e27cc4582 100644
> --- a/tests/intel/xe_noexec_ping_pong.c
> +++ b/tests/intel/xe_noexec_ping_pong.c
> @@ -98,7 +98,7 @@ igt_simple_main
>  
>  	fd = drm_open_driver(DRIVER_XE);
>  
> -	test_ping_pong(fd, xe_hw_engine(fd, 0));
> +	test_ping_pong(fd, xe_engine(fd, 0));
>  
>  	drm_close_driver(fd);
>  }
> diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
> index 8ef557a46..63a8eb9b2 100644
> --- a/tests/intel/xe_perf_pmu.c
> +++ b/tests/intel/xe_perf_pmu.c
> @@ -209,7 +209,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  
>  	config = engine_group_get_config(gt, class);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  
> @@ -315,13 +315,13 @@ igt_main
>  	for (const struct section *s = sections; s->name; s++) {
>  		igt_subtest_f("%s", s->name)
>  			xe_for_each_gt(fd, gt)
> -				xe_for_each_hw_engine_class(class)
> +				xe_for_each_engine_class(class)
>  					if (class == s->class)
>  						test_engine_group_busyness(fd, gt, class, s->name);
>  	}
>  
>  	igt_subtest("any-engine-group-busy")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_any_engine_busyness(fd, hwe);
>  
>  	igt_fixture {
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index 2e5c61b59..d78ca31a8 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -471,7 +471,7 @@ igt_main
>  		igt_device_get_pci_slot_name(device.fd_xe, device.pci_slot_name);
>  
>  		/* Always perform initial once-basic exec checking for health */
> -		xe_for_each_hw_engine(device.fd_xe, hwe)
> +		xe_for_each_engine(device.fd_xe, hwe)
>  			test_exec(device, hwe, 1, 1, NO_SUSPEND, NO_RPM);
>  
>  		igt_pm_get_d3cold_allowed(device.pci_slot_name, &d3cold_allowed);
> @@ -486,7 +486,7 @@ igt_main
>  		}
>  
>  		igt_subtest_f("%s-basic-exec", s->name) {
> -			xe_for_each_hw_engine(device.fd_xe, hwe)
> +			xe_for_each_engine(device.fd_xe, hwe)
>  				test_exec(device, hwe, 1, 2, s->state,
>  					  NO_RPM);
>  		}
> @@ -494,13 +494,13 @@ igt_main
>  		igt_subtest_f("%s-exec-after", s->name) {
>  			igt_system_suspend_autoresume(s->state,
>  						      SUSPEND_TEST_NONE);
> -			xe_for_each_hw_engine(device.fd_xe, hwe)
> +			xe_for_each_engine(device.fd_xe, hwe)
>  				test_exec(device, hwe, 1, 2, NO_SUSPEND,
>  					  NO_RPM);
>  		}
>  
>  		igt_subtest_f("%s-multiple-execs", s->name) {
> -			xe_for_each_hw_engine(device.fd_xe, hwe)
> +			xe_for_each_engine(device.fd_xe, hwe)
>  				test_exec(device, hwe, 16, 32, s->state,
>  					  NO_RPM);
>  		}
> @@ -508,7 +508,7 @@ igt_main
>  		for (const struct d_state *d = d_states; d->name; d++) {
>  			igt_subtest_f("%s-%s-basic-exec", s->name, d->name) {
>  				igt_assert(setup_d3(device, d->state));
> -				xe_for_each_hw_engine(device.fd_xe, hwe)
> +				xe_for_each_engine(device.fd_xe, hwe)
>  					test_exec(device, hwe, 1, 2, s->state,
>  						  NO_RPM);
>  			}
> @@ -523,14 +523,14 @@ igt_main
>  
>  		igt_subtest_f("%s-basic-exec", d->name) {
>  			igt_assert(setup_d3(device, d->state));
> -			xe_for_each_hw_engine(device.fd_xe, hwe)
> +			xe_for_each_engine(device.fd_xe, hwe)
>  				test_exec(device, hwe, 1, 1,
>  					  NO_SUSPEND, d->state);
>  		}
>  
>  		igt_subtest_f("%s-multiple-execs", d->name) {
>  			igt_assert(setup_d3(device, d->state));
> -			xe_for_each_hw_engine(device.fd_xe, hwe)
> +			xe_for_each_engine(device.fd_xe, hwe)
>  				test_exec(device, hwe, 16, 32,
>  					  NO_SUSPEND, d->state);
>  		}
> diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
> index 6c9a95429..4f590c83c 100644
> --- a/tests/intel/xe_pm_residency.c
> +++ b/tests/intel/xe_pm_residency.c
> @@ -346,7 +346,7 @@ igt_main
>  	igt_describe("Validate idle residency on exec");
>  	igt_subtest("idle-residency-on-exec") {
>  		xe_for_each_gt(fd, gt) {
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				if (gt == hwe->gt_id && !hwe->engine_instance)
>  					idle_residency_on_exec(fd, hwe);
>  			}
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 4a23dcb60..48042337a 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -181,7 +181,7 @@ test_query_engines(int fd)
>  	struct drm_xe_engine_class_instance *hwe;
>  	int i = 0;
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		igt_assert(hwe);
>  		igt_info("engine %d: %s, engine instance: %d, tile: TILE-%d\n", i++,
>  			 xe_engine_class_string(hwe->engine_class), hwe->engine_instance,
> @@ -602,7 +602,7 @@ static void test_query_engine_cycles(int fd)
>  
>  	igt_require(query_engine_cycles_supported(fd));
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		igt_assert(hwe);
>  		__engine_cycles(fd, hwe);
>  	}
> @@ -626,7 +626,7 @@ static void test_engine_cycles_invalid(int fd)
>  	igt_require(query_engine_cycles_supported(fd));
>  
>  	/* get one engine */
> -	xe_for_each_hw_engine(fd, hwe)
> +	xe_for_each_engine(fd, hwe)
>  		break;
>  
>  	/* sanity check engine selection is valid */
> diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
> index 6abe700da..2e2a0ed0e 100644
> --- a/tests/intel/xe_spin_batch.c
> +++ b/tests/intel/xe_spin_batch.c
> @@ -72,8 +72,8 @@ static void spin_basic_all(int fd)
>  
>  	vm = xe_vm_create(fd, 0, 0);
>  	ahnd = intel_allocator_open(fd, vm, INTEL_ALLOCATOR_RELOC);
> -	spin = malloc(sizeof(*spin) * xe_number_hw_engines(fd));
> -	xe_for_each_hw_engine(fd, hwe) {
> +	spin = malloc(sizeof(*spin) * xe_number_engines(fd));
> +	xe_for_each_engine(fd, hwe) {
>  		igt_debug("Run on engine: %s:%d\n",
>  			  xe_engine_class_string(hwe->engine_class), hwe->engine_instance);
>  		spin[i] = igt_spin_new(fd, .ahnd = ahnd, .vm = vm, .hwe = hwe);
> @@ -104,7 +104,7 @@ static void spin_all(int fd, int gt, int class)
>  
>  	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
>  
> -	xe_for_each_hw_engine(fd, hwe) {
> +	xe_for_each_engine(fd, hwe) {
>  		if (hwe->engine_class != class || hwe->gt_id != gt)
>  			continue;
>  		eci[num_placements++] = *hwe;
> @@ -217,7 +217,7 @@ igt_main
>  		spin_basic(fd);
>  
>  	igt_subtest("spin-batch")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			spin(fd, hwe);
>  
>  	igt_subtest("spin-basic-all")
> @@ -225,7 +225,7 @@ igt_main
>  
>  	igt_subtest("spin-all") {
>  		xe_for_each_gt(fd, gt)
> -			xe_for_each_hw_engine_class(class)
> +			xe_for_each_engine_class(class)
>  				spin_all(fd, gt, class);
>  	}
>  
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index ec804febd..ea93d7b2e 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -1858,7 +1858,7 @@ igt_main
>  	igt_fixture {
>  		fd = drm_open_driver(DRIVER_XE);
>  
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			if (hwe->engine_class != DRM_XE_ENGINE_CLASS_COPY) {
>  				hwe_non_copy = hwe;
>  				break;
> @@ -1890,45 +1890,45 @@ igt_main
>  		userptr_invalid(fd);
>  
>  	igt_subtest("shared-pte-page")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			shared_pte_page(fd, hwe, 4,
>  					xe_get_default_alignment(fd));
>  
>  	igt_subtest("shared-pde-page")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			shared_pte_page(fd, hwe, 4, 0x1000ul * 512);
>  
>  	igt_subtest("shared-pde2-page")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			shared_pte_page(fd, hwe, 4, 0x1000ul * 512 * 512);
>  
>  	igt_subtest("shared-pde3-page")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			shared_pte_page(fd, hwe, 4, 0x1000ul * 512 * 512 * 512);
>  
>  	igt_subtest("bind-execqueues-independent")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_bind_execqueues_independent(fd, hwe, 0);
>  
>  	igt_subtest("bind-execqueues-conflict")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_bind_execqueues_independent(fd, hwe, CONFLICT);
>  
>  	igt_subtest("bind-array-twice")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_bind_array(fd, hwe, 2, 0);
>  
>  	igt_subtest("bind-array-many")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_bind_array(fd, hwe, 16, 0);
>  
>  	igt_subtest("bind-array-exec_queue-twice")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_bind_array(fd, hwe, 2,
>  					BIND_ARRAY_BIND_EXEC_QUEUE_FLAG);
>  
>  	igt_subtest("bind-array-exec_queue-many")
> -		xe_for_each_hw_engine(fd, hwe)
> +		xe_for_each_engine(fd, hwe)
>  			test_bind_array(fd, hwe, 16,
>  					BIND_ARRAY_BIND_EXEC_QUEUE_FLAG);
>  
> @@ -1936,41 +1936,41 @@ igt_main
>  	     bind_size = bind_size << 1) {
>  		igt_subtest_f("large-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size, 0);
>  				break;
>  			}
>  		igt_subtest_f("large-split-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_SPLIT);
>  				break;
>  			}
>  		igt_subtest_f("large-misaligned-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_MISALIGNED);
>  				break;
>  			}
>  		igt_subtest_f("large-split-misaligned-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_SPLIT |
>  						 LARGE_BIND_FLAG_MISALIGNED);
>  				break;
>  			}
>  		igt_subtest_f("large-userptr-binds-%lld", (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_USERPTR);
>  				break;
>  			}
>  		igt_subtest_f("large-userptr-split-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_SPLIT |
>  						 LARGE_BIND_FLAG_USERPTR);
> @@ -1978,7 +1978,7 @@ igt_main
>  			}
>  		igt_subtest_f("large-userptr-misaligned-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_MISALIGNED |
>  						 LARGE_BIND_FLAG_USERPTR);
> @@ -1986,7 +1986,7 @@ igt_main
>  			}
>  		igt_subtest_f("large-userptr-split-misaligned-binds-%lld",
>  			      (long long)bind_size)
> -			xe_for_each_hw_engine(fd, hwe) {
> +			xe_for_each_engine(fd, hwe) {
>  				test_large_binds(fd, hwe, 4, 16, bind_size,
>  						 LARGE_BIND_FLAG_SPLIT |
>  						 LARGE_BIND_FLAG_MISALIGNED |
> @@ -1997,13 +1997,13 @@ igt_main
>  
>  	bind_size = (0x1ull << 21) + (0x1ull << 20);
>  	igt_subtest_f("mixed-binds-%lld", (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size, 0);
>  			break;
>  		}
>  
>  	igt_subtest_f("mixed-misaligned-binds-%lld", (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size,
>  					 LARGE_BIND_FLAG_MISALIGNED);
>  			break;
> @@ -2011,14 +2011,14 @@ igt_main
>  
>  	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
>  	igt_subtest_f("mixed-binds-%lld", (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size, 0);
>  			break;
>  		}
>  
>  	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
>  	igt_subtest_f("mixed-misaligned-binds-%lld", (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size,
>  					 LARGE_BIND_FLAG_MISALIGNED);
>  			break;
> @@ -2026,7 +2026,7 @@ igt_main
>  
>  	bind_size = (0x1ull << 21) + (0x1ull << 20);
>  	igt_subtest_f("mixed-userptr-binds-%lld", (long long) bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size,
>  					 LARGE_BIND_FLAG_USERPTR);
>  			break;
> @@ -2034,7 +2034,7 @@ igt_main
>  
>  	igt_subtest_f("mixed-userptr-misaligned-binds-%lld",
>  		      (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size,
>  					 LARGE_BIND_FLAG_MISALIGNED |
>  					 LARGE_BIND_FLAG_USERPTR);
> @@ -2043,7 +2043,7 @@ igt_main
>  
>  	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
>  	igt_subtest_f("mixed-userptr-binds-%lld", (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size,
>  					 LARGE_BIND_FLAG_USERPTR);
>  			break;
> @@ -2052,7 +2052,7 @@ igt_main
>  	bind_size = (0x1ull << 30) + (0x1ull << 29) + (0x1ull << 20);
>  	igt_subtest_f("mixed-userptr-misaligned-binds-%lld",
>  		      (long long)bind_size)
> -		xe_for_each_hw_engine(fd, hwe) {
> +		xe_for_each_engine(fd, hwe) {
>  			test_large_binds(fd, hwe, 4, 16, bind_size,
>  					 LARGE_BIND_FLAG_MISALIGNED |
>  					 LARGE_BIND_FLAG_USERPTR);
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version
  2023-11-21 17:13   ` Kamil Konieczny
@ 2023-11-28 16:11     ` Francois Dugast
  0 siblings, 0 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-28 16:11 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Rodrigo Vivi

On Tue, Nov 21, 2023 at 06:13:08PM +0100, Kamil Konieczny wrote:
> Hi Francois,
> On 2023-11-16 at 14:53:37 +0000, Francois Dugast wrote:
> > From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > 
> > Let's unify the calls instead of having two separate
> > options for the same goal.
> > 
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> >  lib/xe/xe_ioctl.c           | 15 ---------------
> >  lib/xe/xe_ioctl.h           |  1 -
> >  tests/intel/xe_perf_pmu.c   |  4 ++--
> >  tests/intel/xe_spin_batch.c |  2 +-
> >  tests/intel/xe_vm.c         |  9 +++++----
> >  5 files changed, 8 insertions(+), 23 deletions(-)
> > 
> > diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> > index 738c4ffdb..78d431ab2 100644
> > --- a/lib/xe/xe_ioctl.c
> > +++ b/lib/xe/xe_ioctl.c
> > @@ -253,21 +253,6 @@ uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags)
> >  	return handle;
> >  }
> >  
> > -uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
> > -{
> > -	struct drm_xe_gem_create create = {
> > -		.vm_id = vm,
> > -		.size = size,
> > -		.flags = vram_if_possible(fd, gt),
> > -	};
> > -	int err;
> > -
> > -	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
> > -	igt_assert_eq(err, 0);
> > -
> > -	return create.handle;
> > -}
> > -
> >  uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
> >  {
> >  	struct drm_xe_engine_class_instance instance = {
> > diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> > index a9171bcf7..fb191d98f 100644
> > --- a/lib/xe/xe_ioctl.h
> > +++ b/lib/xe/xe_ioctl.h
> > @@ -67,7 +67,6 @@ void xe_vm_destroy(int fd, uint32_t vm);
> >  uint32_t __xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> >  			      uint32_t *handle);
> >  uint32_t xe_bo_create_flags(int fd, uint32_t vm, uint64_t size, uint32_t flags);
> > -uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
> >  uint32_t xe_exec_queue_create(int fd, uint32_t vm,
> >  			  struct drm_xe_engine_class_instance *instance,
> >  			  uint64_t ext);
> > diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
> > index e9d05cf2b..2c549f778 100644
> > --- a/tests/intel/xe_perf_pmu.c
> > +++ b/tests/intel/xe_perf_pmu.c
> > @@ -103,7 +103,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
> >  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> >  			xe_get_default_alignment(fd));
> >  
> > -	bo = xe_bo_create(fd, eci->gt_id, vm, bo_size);
> > +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> ----------------------------------------------------------------- ^
> s/fd, 0/fd, eci->gt_id/
> 
> >  	spin = xe_bo_map(fd, bo, bo_size);
> >  
> >  	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> > @@ -223,7 +223,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
> >  	bo_size = sizeof(*data) * num_placements;
> >  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> >  
> > -	bo = xe_bo_create(fd, gt, vm, bo_size);
> > +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> ----------------------------------------------------------------- ^
> s/fd, 0/fd, gt/
> 
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	for (i = 0; i < num_placements; i++) {
> > diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
> > index 6ab604d9b..261fde9af 100644
> > --- a/tests/intel/xe_spin_batch.c
> > +++ b/tests/intel/xe_spin_batch.c
> > @@ -169,7 +169,7 @@ static void xe_spin_fixed_duration(int fd)
> >  	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
> >  	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
> >  	bo_size = ALIGN(sizeof(*spin) + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> > -	bo = xe_bo_create(fd, 0, vm, bo_size);
> > +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> >  	spin = xe_bo_map(fd, bo, bo_size);
> >  	spin_addr = intel_allocator_alloc_with_strategy(ahnd, bo, bo_size, 0,
> >  							ALLOC_STRATEGY_LOW_TO_HIGH);
> > diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> > index 05e8e7516..eedd05b57 100644
> > --- a/tests/intel/xe_vm.c
> > +++ b/tests/intel/xe_vm.c
> > @@ -267,7 +267,7 @@ static void test_partial_unbinds(int fd)
> >  {
> >  	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> >  	size_t bo_size = 3 * xe_get_default_alignment(fd);
> > -	uint32_t bo = xe_bo_create(fd, 0, vm, bo_size);
> > +	uint32_t bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> >  	uint64_t unbind_size = bo_size / 3;
> >  	uint64_t addr = 0x1a0000;
> >  
> > @@ -316,7 +316,7 @@ static void unbind_all(int fd, int n_vmas)
> >  	};
> >  
> >  	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> > -	bo = xe_bo_create(fd, 0, vm, bo_size);
> > +	bo = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, 0));
> >  
> >  	for (i = 0; i < n_vmas; ++i)
> >  		xe_vm_bind_async(fd, vm, 0, bo, 0, addr + i * bo_size,
> > @@ -362,6 +362,7 @@ static void userptr_invalid(int fd)
> >  	xe_vm_destroy(fd, vm);
> >  }
> >  
> > +
> 
> Remove this newline.
> 
> Regards,
> Kamil

Sorry for the delay while finalizing the content of the kernel series. Many
thanks for the review and for catching those issues; they will be fixed in
the next revision.

Francois

> 
> >  /**
> >   * SUBTEST: shared-%s-page
> >   * Description: Test shared arg[1] page
> > @@ -1575,9 +1576,9 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
> >  		igt_assert(map0 != MAP_FAILED);
> >  		igt_assert(map1 != MAP_FAILED);
> >  	} else {
> > -		bo0 = xe_bo_create(fd, eci->gt_id, vm, bo_size);
> > +		bo0 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
> >  		map0 = xe_bo_map(fd, bo0, bo_size);
> > -		bo1 = xe_bo_create(fd, eci->gt_id, vm, bo_size);
> > +		bo1 = xe_bo_create_flags(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id));
> >  		map1 = xe_bo_map(fd, bo1, bo_size);
> >  	}
> >  	memset(map0, 0, bo_size);
> > -- 
> > 2.34.1
> > 


* Re: [igt-dev] [PATCH v1 04/13] xe_query: Add missing include.
  2023-11-21 17:00   ` Kamil Konieczny
@ 2023-11-28 17:48     ` Francois Dugast
  0 siblings, 0 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-28 17:48 UTC (permalink / raw)
  To: Kamil Konieczny; +Cc: igt-dev, Rodrigo Vivi

On Tue, Nov 21, 2023 at 06:00:06PM +0100, Kamil Konieczny wrote:
> Hi Francois,
> On 2023-11-16 at 14:53:39 +0000, Francois Dugast wrote:
> > From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > 
> 
> LGTM, please remove final dot from subject line at merge.

Sure, will do.

Francois

> 
> Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
> 
> > When trying to use xe_for_each_mem_region from a caller
> > that does not include igt_aux.h, the following build issue
> > will occur:
> > 
> > ../lib/xe/xe_query.h:76:38: error: implicit declaration of function ‘igt_fls’ [-Werror=implicit-function-declaration]
> >    76 |         for (uint64_t __i = 0; __i < igt_fls(__memreg); __i++) \
> > 
> > So, to avoid a dependency chain, let's include it from the file
> > that uses the helper.
> > 
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> >  lib/xe/xe_query.h | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> > index 38e9aa440..7b3fc3100 100644
> > --- a/lib/xe/xe_query.h
> > +++ b/lib/xe/xe_query.h
> > @@ -11,6 +11,8 @@
> >  
> >  #include <stdint.h>
> >  #include <xe_drm.h>
> > +
> > +#include "igt_aux.h"
> >  #include "igt_list.h"
> >  #include "igt_sizes.h"
> >  
> > -- 
> > 2.34.1
> > 


* Re: [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible
  2023-11-21 17:40   ` Kamil Konieczny
@ 2023-11-28 19:49     ` Francois Dugast
  0 siblings, 0 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-28 19:49 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Rodrigo Vivi, Juha-Pekka Heikkila,
	Bhanuprakash Modem

On Tue, Nov 21, 2023 at 06:40:10PM +0100, Kamil Konieczny wrote:
> Hi Francois,
> On 2023-11-16 at 14:53:40 +0000, Francois Dugast wrote:
> > From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > 
> > Let the caller set the flag and have xe_bo_query clear it if
> > not needed.
> > 
> > Although the current helper makes the code cleaner, the
> > goal is to split the flags into placement and flags as two
> > different arguments on xe_bo_create. So, the flag decision
> > cannot be hidden under the helper.
> > 
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> 
> Overall looks good, I am thinking about adding igt_debug at
> __xe_bo_create (see below) but that can go in separate,
> follow-up patch.
> 
> Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
> 
> > ---
> >  benchmarks/gem_wsim.c              |  3 ++-
> >  lib/igt_draw.c                     |  3 ++-
> >  lib/igt_fb.c                       |  3 ++-
> >  lib/intel_batchbuffer.c            |  6 ++++--
> >  lib/xe/xe_ioctl.c                  | 19 +++++++++++++++++++
> >  lib/xe/xe_query.c                  | 26 --------------------------
> >  lib/xe/xe_query.h                  |  1 -
> >  lib/xe/xe_spin.c                   |  7 ++++---
> >  tests/intel/kms_ccs.c              |  3 ++-
> >  tests/intel/xe_dma_buf_sync.c      |  3 ++-
> >  tests/intel/xe_exec_balancer.c     |  9 ++++++---
> >  tests/intel/xe_exec_basic.c        |  2 +-
> >  tests/intel/xe_exec_compute_mode.c |  3 ++-
> >  tests/intel/xe_exec_fault_mode.c   |  6 ++++--
> >  tests/intel/xe_exec_reset.c        | 14 +++++++++-----
> >  tests/intel/xe_exec_store.c        |  9 ++++++---
> >  tests/intel/xe_exec_threads.c      |  9 ++++++---
> >  tests/intel/xe_guc_pc.c            |  3 ++-
> >  tests/intel/xe_mmap.c              |  9 ++++++---
> >  tests/intel/xe_pm.c                |  3 ++-
> >  tests/intel/xe_pm_residency.c      |  3 ++-
> >  tests/intel/xe_prime_self_import.c | 27 ++++++++++++++++++---------
> >  tests/intel/xe_vm.c                | 21 ++++++++++++++-------
> >  23 files changed, 115 insertions(+), 77 deletions(-)
> > 
> > diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
> > index d6d3deb5f..966d9b465 100644
> > --- a/benchmarks/gem_wsim.c
> > +++ b/benchmarks/gem_wsim.c
> > @@ -1735,7 +1735,8 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
> >  	int i;
> >  
> >  	w->bb_handle = xe_bo_create(fd, vm->id, PAGE_SIZE,
> > -				visible_vram_if_possible(fd, eq->hwe_list[0].gt_id));
> > +				    vram_if_possible(fd, eq->hwe_list[0].gt_id) |
> > +				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	w->xe.data = xe_bo_map(fd, w->bb_handle, PAGE_SIZE);
> >  	w->xe.exec.address =
> >  		intel_allocator_alloc_with_strategy(vm->ahnd, w->bb_handle, PAGE_SIZE,
> > diff --git a/lib/igt_draw.c b/lib/igt_draw.c
> > index 5935eb058..b16afd799 100644
> > --- a/lib/igt_draw.c
> > +++ b/lib/igt_draw.c
> > @@ -797,7 +797,8 @@ static void draw_rect_render(int fd, struct cmd_data *cmd_data,
> >  	else
> >  		tmp.handle = xe_bo_create(fd, 0,
> >  					  ALIGN(tmp.size, xe_get_default_alignment(fd)),
> > -					  visible_vram_if_possible(fd, 0));
> > +					  vram_if_possible(fd, 0) |
> > +					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	tmp.stride = rect->w * pixel_size;
> >  	tmp.bpp = buf->bpp;
> > diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> > index f96dca7a4..0a6aa27c8 100644
> > --- a/lib/igt_fb.c
> > +++ b/lib/igt_fb.c
> > @@ -1206,7 +1206,8 @@ static int create_bo_for_fb(struct igt_fb *fb, bool prefer_sysmem)
> >  			igt_assert(err == 0 || err == -EOPNOTSUPP);
> >  		} else if (is_xe_device(fd)) {
> >  			fb->gem_handle = xe_bo_create(fd, 0, fb->size,
> > -						      visible_vram_if_possible(fd, 0));
> > +						      vram_if_possible(fd, 0)
> > +						      | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		} else if (is_vc4_device(fd)) {
> >  			fb->gem_handle = igt_vc4_create_bo(fd, fb->size);
> >  
> > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> > index 7fa4e3487..45b1665f7 100644
> > --- a/lib/intel_batchbuffer.c
> > +++ b/lib/intel_batchbuffer.c
> > @@ -945,7 +945,8 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
> >  
> >  		ibb->alignment = xe_get_default_alignment(fd);
> >  		size = ALIGN(size, ibb->alignment);
> > -		ibb->handle = xe_bo_create(fd, 0, size, visible_vram_if_possible(fd, 0));
> > +		ibb->handle = xe_bo_create(fd, 0, size, vram_if_possible(fd, 0) |
> > +					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  		/* Limit to 48-bit due to MI_* address limitation */
> >  		ibb->gtt_size = 1ull << min_t(uint32_t, xe_va_bits(fd), 48);
> > @@ -1404,7 +1405,8 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
> >  		ibb->handle = gem_create(ibb->fd, ibb->size);
> >  	else
> >  		ibb->handle = xe_bo_create(ibb->fd, 0, ibb->size,
> > -					   visible_vram_if_possible(ibb->fd, 0));
> > +					   vram_if_possible(ibb->fd, 0) |
> > +					   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	/* Reacquire offset for RELOC and SIMPLE */
> >  	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> > diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> > index 63fa2ae25..1d63081d6 100644
> > --- a/lib/xe/xe_ioctl.c
> > +++ b/lib/xe/xe_ioctl.c
> > @@ -226,6 +226,18 @@ void xe_vm_destroy(int fd, uint32_t vm)
> >  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_DESTROY, &destroy), 0);
> >  }
> >  
> > +static bool vram_selected(int fd, uint32_t selected_regions)
> > +{
> > +	uint64_t regions = all_memory_regions(fd) & selected_regions;
> > +	uint64_t region;
> > +
> > +	xe_for_each_mem_region(fd, regions, region)
> > +		if (xe_mem_region(fd, region)->mem_class == DRM_XE_MEM_REGION_CLASS_VRAM)
> > +			return true;
> > +
> > +	return false;
> > +}
> > +
> >  uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> >  			uint32_t *handle)
> >  {
> > @@ -236,6 +248,13 @@ uint32_t __xe_bo_create(int fd, uint32_t vm, uint64_t size, uint32_t flags,
> >  	};
> >  	int err;
> >  
> > +	/*
> > +	 * In case vram_if_possible returned system_memory,
> > +	 *  visible VRAM cannot be requested through flags
> -------^
> Remove one space before "visible" word.

Sure, will fix.

> I am not sure, maybe we should add here igt_debug when
> 		flags & DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM
> 
> Adding Bhanu and Juha-Pekka to Cc.

Folks, we will proceed with this as it has been reviewed, but feel free
to add the suggested igt_debug in another patch, thanks.

Francois

> 
> Regards,
> Kamil
> 
> 
> > +	 */
> > +	if (!vram_selected(fd, flags))
> > +		create.flags &= ~DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
> > +
> >  	err = igt_ioctl(fd, DRM_IOCTL_XE_GEM_CREATE, &create);
> >  	if (err)
> >  		return err;
> > diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> > index afd443be3..760a150db 100644
> > --- a/lib/xe/xe_query.c
> > +++ b/lib/xe/xe_query.c
> > @@ -442,32 +442,6 @@ uint64_t vram_if_possible(int fd, int gt)
> >  	return vram_memory(fd, gt) ?: system_memory(fd);
> >  }
> >  
> > -/**
> > - * visible_vram_if_possible:
> > - * @fd: xe device fd
> > - * @gt: gt id
> > - *
> > - * Returns vram memory bitmask for xe device @fd and @gt id or system memory if
> > - * there's no vram memory available for @gt. Also attaches the
> > - * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
> > - * when using vram.
> > - */
> > -uint64_t visible_vram_if_possible(int fd, int gt)
> > -{
> > -	uint64_t regions = all_memory_regions(fd);
> > -	uint64_t system_memory = regions & 0x1;
> > -	uint64_t vram = regions & (0x2 << gt);
> > -
> > -	/*
> > -	 * TODO: Keep it backwards compat for now. Fixup once the kernel side
> > -	 * has landed.
> > -	 */
> > -	if (__xe_visible_vram_size(fd, gt))
> > -		return vram ? vram | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
> > -	else
> > -		return vram ? vram : system_memory; /* older kernel */
> > -}
> > -
> >  /**
> >   * xe_hw_engines:
> >   * @fd: xe device fd
> > diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> > index 7b3fc3100..4dd0ad573 100644
> > --- a/lib/xe/xe_query.h
> > +++ b/lib/xe/xe_query.h
> > @@ -82,7 +82,6 @@ uint64_t system_memory(int fd);
> >  uint64_t vram_memory(int fd, int gt);
> >  uint64_t visible_vram_memory(int fd, int gt);
> >  uint64_t vram_if_possible(int fd, int gt);
> > -uint64_t visible_vram_if_possible(int fd, int gt);
> >  struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
> >  struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
> >  struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
> > diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> > index 828938434..270b58bf5 100644
> > --- a/lib/xe/xe_spin.c
> > +++ b/lib/xe/xe_spin.c
> > @@ -220,7 +220,8 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
> >  	}
> >  
> >  	spin->handle = xe_bo_create(fd, spin->vm, bo_size,
> > -				    visible_vram_if_possible(fd, 0));
> > +				    vram_if_possible(fd, 0) |
> > +				    DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	xe_spin = xe_bo_map(fd, spin->handle, bo_size);
> >  	addr = intel_allocator_alloc_with_strategy(ahnd, spin->handle, bo_size, 0, ALLOC_STRATEGY_LOW_TO_HIGH);
> >  	xe_vm_bind_sync(fd, spin->vm, spin->handle, 0, addr, bo_size);
> > @@ -298,8 +299,8 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
> >  
> >  	vm = xe_vm_create(fd, 0, 0);
> >  
> > -	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, hwe->gt_id));
> > +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, hwe->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	spin = xe_bo_map(fd, bo, 0x1000);
> >  
> >  	xe_vm_bind_sync(fd, vm, bo, 0, addr, bo_size);
> > diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c
> > index 337afc00c..5ae28615f 100644
> > --- a/tests/intel/kms_ccs.c
> > +++ b/tests/intel/kms_ccs.c
> > @@ -435,7 +435,8 @@ static void test_bad_ccs_plane(data_t *data, int width, int height, int ccs_plan
> >  		bad_ccs_bo = is_i915_device(data->drm_fd) ?
> >  				gem_create(data->drm_fd, fb.size) :
> >  				xe_bo_create(data->drm_fd, 0, fb.size,
> > -					     visible_vram_if_possible(data->drm_fd, 0));
> > +					     vram_if_possible(data->drm_fd, 0) |
> > +					     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		f.handles[ccs_plane] = bad_ccs_bo;
> >  	}
> >  
> > diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
> > index ac9d9d767..9318647af 100644
> > --- a/tests/intel/xe_dma_buf_sync.c
> > +++ b/tests/intel/xe_dma_buf_sync.c
> > @@ -120,7 +120,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
> >  			xe_get_default_alignment(fd[0]));
> >  	for (i = 0; i < n_bo; ++i) {
> >  		bo[i] = xe_bo_create(fd[0], 0, bo_size,
> > -				     visible_vram_if_possible(fd[0], hwe0->gt_id));
> > +				     vram_if_possible(fd[0], hwe0->gt_id) |
> > +				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		dma_buf_fd[i] = prime_handle_to_fd(fd[0], bo[i]);
> >  		import_bo[i] = prime_fd_to_handle(fd[1], dma_buf_fd[i]);
> >  
> > diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> > index da34e117d..388bb6185 100644
> > --- a/tests/intel/xe_exec_balancer.c
> > +++ b/tests/intel/xe_exec_balancer.c
> > @@ -70,7 +70,8 @@ static void test_all_active(int fd, int gt, int class)
> >  	bo_size = sizeof(*data) * num_placements;
> >  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> >  
> > -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> > +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	for (i = 0; i < num_placements; i++) {
> > @@ -224,7 +225,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
> >  		}
> >  		memset(data, 0, bo_size);
> >  	} else {
> > -		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> > +		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  
> > @@ -452,7 +454,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> >  			igt_assert(data);
> >  		}
> >  	} else {
> > -		bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> > +		bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(data, 0, bo_size);
> > diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> > index 841696b68..ca287b2e5 100644
> > --- a/tests/intel/xe_exec_basic.c
> > +++ b/tests/intel/xe_exec_basic.c
> > @@ -136,7 +136,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> >  	} else {
> >  		uint32_t bo_flags;
> >  
> > -		bo_flags = visible_vram_if_possible(fd, eci->gt_id);
> > +		bo_flags = vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
> >  		if (flags & DEFER_ALLOC)
> >  			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
> >  
> > diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> > index beb962f79..07a27fd29 100644
> > --- a/tests/intel/xe_exec_compute_mode.c
> > +++ b/tests/intel/xe_exec_compute_mode.c
> > @@ -142,7 +142,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> >  		}
> >  	} else {
> >  		bo = xe_bo_create(fd, flags & VM_FOR_BO ? vm : 0,
> > -				  bo_size, visible_vram_if_possible(fd, eci->gt_id));
> > +				  bo_size, vram_if_possible(fd, eci->gt_id) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(data, 0, bo_size);
> > diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> > index 903ad430d..bfd61c4ea 100644
> > --- a/tests/intel/xe_exec_fault_mode.c
> > +++ b/tests/intel/xe_exec_fault_mode.c
> > @@ -153,10 +153,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> >  		if (flags & PREFETCH)
> >  			bo = xe_bo_create(fd, 0, bo_size,
> >  					  all_memory_regions(fd) |
> > -					  visible_vram_if_possible(fd, 0));
> > +					  vram_if_possible(fd, 0) |
> > +					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		else
> >  			bo = xe_bo_create(fd, 0, bo_size,
> > -					  visible_vram_if_possible(fd, eci->gt_id));
> > +					  vram_if_possible(fd, eci->gt_id) |
> > +					  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(data, 0, bo_size);
> > diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> > index 704690e83..3affb19ae 100644
> > --- a/tests/intel/xe_exec_reset.c
> > +++ b/tests/intel/xe_exec_reset.c
> > @@ -51,7 +51,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
> >  			xe_get_default_alignment(fd));
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, eci->gt_id));
> > +			  vram_if_possible(fd, eci->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	spin = xe_bo_map(fd, bo, bo_size);
> >  
> >  	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> > @@ -181,7 +182,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
> >  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> >  			xe_get_default_alignment(fd));
> >  
> > -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, gt));
> > +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, gt) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	for (i = 0; i < n_exec_queues; i++) {
> > @@ -368,7 +370,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
> >  			xe_get_default_alignment(fd));
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, eci->gt_id));
> > +			  vram_if_possible(fd, eci->gt_id) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	for (i = 0; i < n_exec_queues; i++) {
> > @@ -535,7 +537,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
> >  			xe_get_default_alignment(fd));
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, eci->gt_id));
> > +			  vram_if_possible(fd, eci->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  	memset(data, 0, bo_size);
> >  
> > @@ -661,7 +664,8 @@ static void submit_jobs(struct gt_thread_data *t)
> >  	uint32_t bo;
> >  	uint32_t *data;
> >  
> > -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
> > +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  	data[0] = MI_BATCH_BUFFER_END;
> >  
> > diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> > index bcc4de8d0..884183202 100644
> > --- a/tests/intel/xe_exec_store.c
> > +++ b/tests/intel/xe_exec_store.c
> > @@ -82,7 +82,8 @@ static void store(int fd)
> >  
> >  	hw_engine = xe_hw_engine(fd, 1);
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, hw_engine->gt_id));
> > +			  vram_if_possible(fd, hw_engine->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	xe_vm_bind_async(fd, vm, hw_engine->gt_id, bo, 0, addr, bo_size, &sync, 1);
> >  	data = xe_bo_map(fd, bo, bo_size);
> > @@ -151,7 +152,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
> >  
> >  	for (i = 0; i < count; i++) {
> >  		bo[i] = xe_bo_create(fd, vm, bo_size,
> > -				     visible_vram_if_possible(fd, eci->gt_id));
> > +				     vram_if_possible(fd, eci->gt_id) |
> > +				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		bo_map[i] = xe_bo_map(fd, bo[i], bo_size);
> >  		dst_offset[i] = intel_allocator_alloc_with_strategy(ahnd, bo[i],
> >  								    bo_size, 0,
> > @@ -236,7 +238,8 @@ static void store_all(int fd, int gt, int class)
> >  			xe_get_default_alignment(fd));
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, 0));
> > +			  vram_if_possible(fd, 0) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	xe_for_each_hw_engine(fd, hwe) {
> > diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> > index a9b0c0b09..ebc41dadd 100644
> > --- a/tests/intel/xe_exec_threads.c
> > +++ b/tests/intel/xe_exec_threads.c
> > @@ -107,7 +107,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
> >  		}
> >  	} else {
> >  		bo = xe_bo_create(fd, vm, bo_size,
> > -				  visible_vram_if_possible(fd, gt));
> > +				  vram_if_possible(fd, gt) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(data, 0, bo_size);
> > @@ -308,7 +309,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> >  		}
> >  	} else {
> >  		bo = xe_bo_create(fd, 0, bo_size,
> > -				  visible_vram_if_possible(fd, eci->gt_id));
> > +				  vram_if_possible(fd, eci->gt_id) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(data, 0, bo_size);
> > @@ -511,7 +513,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> >  		}
> >  	} else {
> >  		bo = xe_bo_create(fd, vm, bo_size,
> > -				  visible_vram_if_possible(fd, eci->gt_id));
> > +				  vram_if_possible(fd, eci->gt_id) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(data, 0, bo_size);
> > diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> > index 1e29d8905..4234475e0 100644
> > --- a/tests/intel/xe_guc_pc.c
> > +++ b/tests/intel/xe_guc_pc.c
> > @@ -66,7 +66,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
> >  			xe_get_default_alignment(fd));
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, eci->gt_id));
> > +			  vram_if_possible(fd, eci->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	for (i = 0; i < n_exec_queues; i++) {
> > diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
> > index a805eabda..a4b53ad48 100644
> > --- a/tests/intel/xe_mmap.c
> > +++ b/tests/intel/xe_mmap.c
> > @@ -73,7 +73,8 @@ static void test_bad_flags(int fd)
> >  	uint64_t size = xe_get_default_alignment(fd);
> >  	struct drm_xe_gem_mmap_offset mmo = {
> >  		.handle = xe_bo_create(fd, 0, size,
> > -				       visible_vram_if_possible(fd, 0)),
> > +				       vram_if_possible(fd, 0) |
> > +				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
> >  		.flags = -1u,
> >  	};
> >  
> > @@ -93,7 +94,8 @@ static void test_bad_extensions(int fd)
> >  	struct xe_user_extension ext;
> >  	struct drm_xe_gem_mmap_offset mmo = {
> >  		.handle = xe_bo_create(fd, 0, size,
> > -				       visible_vram_if_possible(fd, 0)),
> > +				       vram_if_possible(fd, 0) |
> > +				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
> >  	};
> >  
> >  	mmo.extensions = to_user_pointer(&ext);
> > @@ -114,7 +116,8 @@ static void test_bad_object(int fd)
> >  	uint64_t size = xe_get_default_alignment(fd);
> >  	struct drm_xe_gem_mmap_offset mmo = {
> >  		.handle = xe_bo_create(fd, 0, size,
> > -				       visible_vram_if_possible(fd, 0)),
> > +				       vram_if_possible(fd, 0) |
> > +				       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM),
> >  	};
> >  
> >  	mmo.handle = 0xdeadbeef;
> > diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> > index 9bfe1acad..9fd3527f7 100644
> > --- a/tests/intel/xe_pm.c
> > +++ b/tests/intel/xe_pm.c
> > @@ -272,7 +272,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
> >  		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
> >  
> >  	bo = xe_bo_create(device.fd_xe, vm, bo_size,
> > -			  visible_vram_if_possible(device.fd_xe, eci->gt_id));
> > +			  vram_if_possible(device.fd_xe, eci->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(device.fd_xe, bo, bo_size);
> >  
> >  	for (i = 0; i < n_exec_queues; i++) {
> > diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
> > index cc133f5fb..40a1693b8 100644
> > --- a/tests/intel/xe_pm_residency.c
> > +++ b/tests/intel/xe_pm_residency.c
> > @@ -101,7 +101,8 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
> >  	bo_size = xe_get_default_alignment(fd);
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, hwe->gt_id));
> > +			  vram_if_possible(fd, hwe->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  	syncobj = syncobj_create(fd, 0);
> >  
> > diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
> > index 378368eaa..2c2f2898c 100644
> > --- a/tests/intel/xe_prime_self_import.c
> > +++ b/tests/intel/xe_prime_self_import.c
> > @@ -105,7 +105,8 @@ static void test_with_fd_dup(void)
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > -	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> > +	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> > +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	dma_buf_fd1 = prime_handle_to_fd(fd1, handle);
> >  	gem_close(fd1, handle);
> > @@ -138,8 +139,10 @@ static void test_with_two_bos(void)
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > -	handle1 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> > -	handle2 = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> > +	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> > +			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> > +	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> > +			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	dma_buf_fd = prime_handle_to_fd(fd1, handle1);
> >  	handle_import = prime_fd_to_handle(fd2, dma_buf_fd);
> > @@ -175,7 +178,8 @@ static void test_with_one_bo_two_files(void)
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> >  	handle_orig = xe_bo_create(fd1, 0, bo_size,
> > -				   visible_vram_if_possible(fd1, 0));
> > +				   vram_if_possible(fd1, 0) |
> > +				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	dma_buf_fd1 = prime_handle_to_fd(fd1, handle_orig);
> >  
> >  	flink_name = gem_flink(fd1, handle_orig);
> > @@ -207,7 +211,8 @@ static void test_with_one_bo(void)
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > -	handle = xe_bo_create(fd1, 0, bo_size, visible_vram_if_possible(fd1, 0));
> > +	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0) |
> > +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	dma_buf_fd = prime_handle_to_fd(fd1, handle);
> >  	handle_import1 = prime_fd_to_handle(fd2, dma_buf_fd);
> > @@ -294,7 +299,8 @@ static void *thread_fn_reimport_vs_close(void *p)
> >  	fds[0] = drm_open_driver(DRIVER_XE);
> >  
> >  	handle = xe_bo_create(fds[0], 0, bo_size,
> > -			      visible_vram_if_possible(fds[0], 0));
> > +			      vram_if_possible(fds[0], 0) |
> > +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> >  	fds[1] = prime_handle_to_fd(fds[0], handle);
> >  	pthread_barrier_init(&g_barrier, NULL, num_threads);
> > @@ -337,7 +343,8 @@ static void *thread_fn_export_vs_close(void *p)
> >  	igt_until_timeout(g_time_out) {
> >  		/* We want to race gem close against prime export on handle one.*/
> >  		handle = xe_bo_create(fd, 0, bo_size,
> > -				      visible_vram_if_possible(fd, 0));
> > +				      vram_if_possible(fd, 0) |
> > +				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		if (handle != 1)
> >  			gem_close(fd, handle);
> >  
> > @@ -434,7 +441,8 @@ static void test_llseek_size(void)
> >  		int bufsz = xe_get_default_alignment(fd) << i;
> >  
> >  		handle = xe_bo_create(fd, 0, bufsz,
> > -				      visible_vram_if_possible(fd, 0));
> > +				      vram_if_possible(fd, 0) |
> > +				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		dma_buf_fd = prime_handle_to_fd(fd, handle);
> >  
> >  		gem_close(fd, handle);
> > @@ -463,7 +471,8 @@ static void test_llseek_bad(void)
> >  	fd = drm_open_driver(DRIVER_XE);
> >  
> >  	handle = xe_bo_create(fd, 0, bo_size,
> > -			      visible_vram_if_possible(fd, 0));
> > +			      vram_if_possible(fd, 0) |
> > +			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	dma_buf_fd = prime_handle_to_fd(fd, handle);
> >  
> >  	gem_close(fd, handle);
> > diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> > index 52195737c..eb2e0078d 100644
> > --- a/tests/intel/xe_vm.c
> > +++ b/tests/intel/xe_vm.c
> > @@ -52,7 +52,8 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
> >  	batch_size = ALIGN(batch_size + xe_cs_prefetch_size(fd),
> >  			   xe_get_default_alignment(fd));
> >  	batch_bo = xe_bo_create(fd, vm, batch_size,
> > -				visible_vram_if_possible(fd, 0));
> > +				vram_if_possible(fd, 0) |
> > +				DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	batch_map = xe_bo_map(fd, batch_bo, batch_size);
> >  
> >  	for (i = 0; i < n_dwords; i++) {
> > @@ -116,7 +117,8 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
> >  		vms = malloc(sizeof(*vms) * n_addrs);
> >  		igt_assert(vms);
> >  	}
> > -	bo = xe_bo_create(fd, vm, bo_size, visible_vram_if_possible(fd, 0));
> > +	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, 0) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	map = xe_bo_map(fd, bo, bo_size);
> >  	memset(map, 0, bo_size);
> >  
> > @@ -422,7 +424,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
> >  
> >  	for (i = 0; i < n_bo; ++i) {
> >  		bo[i] = xe_bo_create(fd, vm, bo_size,
> > -				     visible_vram_if_possible(fd, eci->gt_id));
> > +				     vram_if_possible(fd, eci->gt_id) |
> > +				     DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		data[i] = xe_bo_map(fd, bo[i], bo_size);
> >  	}
> >  
> > @@ -601,7 +604,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
> >  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> >  			xe_get_default_alignment(fd));
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, eci->gt_id));
> > +			  vram_if_possible(fd, eci->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	for (i = 0; i < N_EXEC_QUEUES; i++) {
> > @@ -782,7 +786,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> >  			xe_get_default_alignment(fd));
> >  
> >  	bo = xe_bo_create(fd, vm, bo_size,
> > -			  visible_vram_if_possible(fd, eci->gt_id));
> > +			  vram_if_possible(fd, eci->gt_id) |
> > +			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	data = xe_bo_map(fd, bo, bo_size);
> >  
> >  	if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
> > @@ -980,7 +985,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
> >  			    xe_visible_vram_size(fd, 0));
> >  
> >  		bo = xe_bo_create(fd, vm, bo_size,
> > -				  visible_vram_if_possible(fd, eci->gt_id));
> > +				  vram_if_possible(fd, eci->gt_id) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		map = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  
> > @@ -1272,7 +1278,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
> >  		igt_assert(map != MAP_FAILED);
> >  	} else {
> >  		bo = xe_bo_create(fd, vm, bo_size,
> > -				  visible_vram_if_possible(fd, eci->gt_id));
> > +				  vram_if_possible(fd, eci->gt_id) |
> > +				  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  		map = xe_bo_map(fd, bo, bo_size);
> >  	}
> >  	memset(map, 0, bo_size);
> > -- 
> > 2.34.1
> > 


* Re: [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size
  2023-11-17 18:44   ` Kamil Konieczny
@ 2023-11-28 20:31     ` Francois Dugast
  0 siblings, 0 replies; 30+ messages in thread
From: Francois Dugast @ 2023-11-28 20:31 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev

On Fri, Nov 17, 2023 at 07:44:16PM +0100, Kamil Konieczny wrote:
> Hi Francois,
> On 2023-11-16 at 14:53:44 +0000, Francois Dugast wrote:
> > Align with kernel commit ("drm/xe/uapi: Reject bo creation of unaligned size")
> > 
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> >  include/drm-uapi/xe_drm.h          | 17 +++++++++--------
> >  tests/intel/xe_mmap.c              | 22 ++++++++++++----------
> >  tests/intel/xe_prime_self_import.c | 26 +++++++++++++++++++++++++-
> 
> There are now compilation warnings:
> 
> [1315/1777] Compiling C object tests/xe_prime_self_import.p/intel_xe_prime_self_import.c.o
> ../tests/intel/xe_prime_self_import.c: In function 'check_bo':
> ../tests/intel/xe_prime_self_import.c:73:16: warning: declaration of 'bo_size' shadows a global declaration [-Wshadow]
>    73 |         size_t bo_size = get_min_bo_size(fd1, fd2);
>       |                ^~~~~~~
> 
> so please delete the global variable bo_size and also remove it from the fixup.

It will be removed in the next revision, thanks.

Francois

> 
> Regards,
> Kamil
> 
> >  tests/intel/xe_vm.c                | 13 ++++++-------
> >  4 files changed, 52 insertions(+), 26 deletions(-)
> > 
> > diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> > index 7aff66830..aa66b62e2 100644
> > --- a/include/drm-uapi/xe_drm.h
> > +++ b/include/drm-uapi/xe_drm.h
> > @@ -206,11 +206,13 @@ struct drm_xe_query_mem_region {
> >  	 *
> >  	 * When the kernel allocates memory for this region, the
> >  	 * underlying pages will be at least @min_page_size in size.
> > -	 *
> > -	 * Important note: When userspace allocates a GTT address which
> > -	 * can point to memory allocated from this region, it must also
> > -	 * respect this minimum alignment. This is enforced by the
> > -	 * kernel.
> > +	 * Buffer objects with an allowable placement in this region must be
> > +	 * created with a size aligned to this value.
> > +	 * GPU virtual address mappings of (parts of) buffer objects that
> > +	 * may be placed in this region must also have their GPU virtual
> > +	 * address and range aligned to this value.
> > +	 * Affected IOCTLS will return %-EINVAL if alignment restrictions are
> > +	 * not met.
> >  	 */
> >  	__u32 min_page_size;
> >  	/**
> > @@ -515,9 +517,8 @@ struct drm_xe_gem_create {
> >  	__u64 extensions;
> >  
> >  	/**
> > -	 * @size: Requested size for the object
> > -	 *
> > -	 * The (page-aligned) allocated size for the object will be returned.
> > +	 * @size: Size of the object to be created, must match region
> > +	 * (system or vram) minimum alignment (&min_page_size).
> >  	 */
> >  	__u64 size;
> >  
> > diff --git a/tests/intel/xe_mmap.c b/tests/intel/xe_mmap.c
> > index 965644e22..d6c8d5114 100644
> > --- a/tests/intel/xe_mmap.c
> > +++ b/tests/intel/xe_mmap.c
> > @@ -47,17 +47,18 @@
> >  static void
> >  test_mmap(int fd, uint32_t placement, uint32_t flags)
> >  {
> > +	size_t bo_size = xe_get_default_alignment(fd);
> >  	uint32_t bo;
> >  	void *map;
> >  
> >  	igt_require_f(placement, "Device doesn't support such memory region\n");
> >  
> > -	bo = xe_bo_create(fd, 0, 4096, placement, flags);
> > +	bo = xe_bo_create(fd, 0, bo_size, placement, flags);
> >  
> > -	map = xe_bo_map(fd, bo, 4096);
> > +	map = xe_bo_map(fd, bo, bo_size);
> >  	strcpy(map, "Write some data to the BO!");
> >  
> > -	munmap(map, 4096);
> > +	munmap(map, bo_size);
> >  
> >  	gem_close(fd, bo);
> >  }
> > @@ -156,13 +157,14 @@ static void trap_sigbus(uint32_t *ptr)
> >   */
> >  static void test_small_bar(int fd)
> >  {
> > +	size_t page_size = xe_get_default_alignment(fd);
> >  	uint32_t visible_size = xe_visible_vram_size(fd, 0);
> >  	uint32_t bo;
> >  	uint64_t mmo;
> >  	uint32_t *map;
> >  
> >  	/* 2BIG invalid case */
> > -	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + 4096,
> > +	igt_assert_neq(__xe_bo_create(fd, 0, visible_size + page_size,
> >  				      vram_memory(fd, 0),
> >  				      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM,
> >  				      &bo),
> > @@ -172,12 +174,12 @@ static void test_small_bar(int fd)
> >  	bo = xe_bo_create(fd, 0, visible_size / 4, vram_memory(fd, 0),
> >  			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	mmo = xe_bo_mmap_offset(fd, bo);
> > -	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
> > +	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
> >  	igt_assert(map != MAP_FAILED);
> >  
> >  	map[0] = 0xdeadbeaf;
> >  
> > -	munmap(map, 4096);
> > +	munmap(map, page_size);
> >  	gem_close(fd, bo);
> >  
> >  	/* Normal operation with system memory spilling */
> > @@ -186,18 +188,18 @@ static void test_small_bar(int fd)
> >  			  system_memory(fd),
> >  			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	mmo = xe_bo_mmap_offset(fd, bo);
> > -	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
> > +	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
> >  	igt_assert(map != MAP_FAILED);
> >  
> >  	map[0] = 0xdeadbeaf;
> >  
> > -	munmap(map, 4096);
> > +	munmap(map, page_size);
> >  	gem_close(fd, bo);
> >  
> >  	/* Bogus operation with SIGBUS */
> > -	bo = xe_bo_create(fd, 0, visible_size + 4096, vram_memory(fd, 0), 0);
> > +	bo = xe_bo_create(fd, 0, visible_size + page_size, vram_memory(fd, 0), 0);
> >  	mmo = xe_bo_mmap_offset(fd, bo);
> > -	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo);
> > +	map = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd, mmo);
> >  	igt_assert(map != MAP_FAILED);
> >  
> >  	trap_sigbus(map);
> > diff --git a/tests/intel/xe_prime_self_import.c b/tests/intel/xe_prime_self_import.c
> > index 9a263d326..504e6a13d 100644
> > --- a/tests/intel/xe_prime_self_import.c
> > +++ b/tests/intel/xe_prime_self_import.c
> > @@ -61,13 +61,19 @@ static int g_time_out = 5;
> >  static pthread_barrier_t g_barrier;
> >  static size_t bo_size;
> >  
> > +static size_t get_min_bo_size(int fd1, int fd2)
> > +{
> > +	return 4 * max(xe_get_default_alignment(fd1),
> > +		       xe_get_default_alignment(fd2));
> > +}
> > +
> >  static void
> >  check_bo(int fd1, uint32_t handle1, int fd2, uint32_t handle2)
> >  {
> > +	size_t bo_size = get_min_bo_size(fd1, fd2);
> >  	char *ptr1, *ptr2;
> >  	int i;
> >  
> > -
> >  	ptr1 = xe_bo_map(fd1, handle1, bo_size);
> >  	ptr2 = xe_bo_map(fd2, handle2, bo_size);
> >  
> > @@ -97,6 +103,7 @@ check_bo(int fd1, uint32_t handle1, int fd2, uint32_t handle2)
> >  static void test_with_fd_dup(void)
> >  {
> >  	int fd1, fd2;
> > +	size_t bo_size;
> >  	uint32_t handle, handle_import;
> >  	int dma_buf_fd1, dma_buf_fd2;
> >  
> > @@ -105,6 +112,8 @@ static void test_with_fd_dup(void)
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > +	bo_size = get_min_bo_size(fd1, fd2);
> > +
> >  	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
> >  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> > @@ -131,6 +140,7 @@ static void test_with_fd_dup(void)
> >  static void test_with_two_bos(void)
> >  {
> >  	int fd1, fd2;
> > +	size_t bo_size;
> >  	uint32_t handle1, handle2, handle_import;
> >  	int dma_buf_fd;
> >  
> > @@ -139,6 +149,8 @@ static void test_with_two_bos(void)
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > +	bo_size = get_min_bo_size(fd1, fd2);
> > +
> >  	handle1 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
> >  			       DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  	handle2 = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
> > @@ -171,12 +183,15 @@ static void test_with_two_bos(void)
> >  static void test_with_one_bo_two_files(void)
> >  {
> >  	int fd1, fd2;
> > +	size_t bo_size;
> >  	uint32_t handle_import, handle_open, handle_orig, flink_name;
> >  	int dma_buf_fd1, dma_buf_fd2;
> >  
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > +	bo_size = get_min_bo_size(fd1, fd2);
> > +
> >  	handle_orig = xe_bo_create(fd1, 0, bo_size,
> >  				   vram_if_possible(fd1, 0),
> >  				   DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> > @@ -205,12 +220,15 @@ static void test_with_one_bo_two_files(void)
> >  static void test_with_one_bo(void)
> >  {
> >  	int fd1, fd2;
> > +	size_t bo_size;
> >  	uint32_t handle, handle_import1, handle_import2, handle_selfimport;
> >  	int dma_buf_fd;
> >  
> >  	fd1 = drm_open_driver(DRIVER_XE);
> >  	fd2 = drm_open_driver(DRIVER_XE);
> >  
> > +	bo_size = get_min_bo_size(fd1, fd2);
> > +
> >  	handle = xe_bo_create(fd1, 0, bo_size, vram_if_possible(fd1, 0),
> >  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> >  
> > @@ -279,6 +297,7 @@ static void *thread_fn_reimport_vs_close(void *p)
> >  	pthread_t *threads;
> >  	int r, i, num_threads;
> >  	int fds[2];
> > +	size_t bo_size;
> >  	int obj_count;
> >  	void *status;
> >  	uint32_t handle;
> > @@ -298,6 +317,8 @@ static void *thread_fn_reimport_vs_close(void *p)
> >  
> >  	fds[0] = drm_open_driver(DRIVER_XE);
> >  
> > +	bo_size = xe_get_default_alignment(fds[0]);
> > +
> >  	handle = xe_bo_create(fds[0], 0, bo_size,
> >  			      vram_if_possible(fds[0], 0),
> >  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> > @@ -336,6 +357,7 @@ static void *thread_fn_export_vs_close(void *p)
> >  	struct drm_prime_handle prime_h2f;
> >  	struct drm_gem_close close_bo;
> >  	int fd = (uintptr_t)p;
> > +	size_t bo_size = xe_get_default_alignment(fd);
> >  	uint32_t handle;
> >  
> >  	pthread_barrier_wait(&g_barrier);
> > @@ -463,6 +485,7 @@ static void test_llseek_size(void)
> >  static void test_llseek_bad(void)
> >  {
> >  	int fd;
> > +	size_t bo_size;
> >  	uint32_t handle;
> >  	int dma_buf_fd;
> >  
> > @@ -470,6 +493,7 @@ static void test_llseek_bad(void)
> >  
> >  	fd = drm_open_driver(DRIVER_XE);
> >  
> > +	bo_size = 4 * xe_get_default_alignment(fd);
> >  	handle = xe_bo_create(fd, 0, bo_size,
> >  			      vram_if_possible(fd, 0),
> >  			      DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> > diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> > index ea93d7b2e..2c563c64f 100644
> > --- a/tests/intel/xe_vm.c
> > +++ b/tests/intel/xe_vm.c
> > @@ -1310,11 +1310,10 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
> >  	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
> >  		t.fd = fd;
> >  		t.vm = vm;
> > -#define PAGE_SIZE	4096
> > -		t.addr = addr + PAGE_SIZE / 2;
> > +		t.addr = addr + page_size / 2;
> >  		t.eci = eci;
> >  		t.exit = &exit;
> > -		t.map = map + PAGE_SIZE / 2;
> > +		t.map = map + page_size / 2;
> >  		t.barrier = &barrier;
> >  		pthread_barrier_init(&barrier, NULL, 2);
> >  		pthread_create(&t.thread, 0, hammer_thread, &t);
> > @@ -1367,8 +1366,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
> >  		igt_assert_eq(data->data, 0xc0ffee);
> >  	}
> >  	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
> > -		memset(map, 0, PAGE_SIZE / 2);
> > -		memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
> > +		memset(map, 0, page_size / 2);
> > +		memset(map + page_size, 0, bo_size - page_size);
> >  	} else {
> >  		memset(map, 0, bo_size);
> >  	}
> > @@ -1417,8 +1416,8 @@ try_again_after_invalidate:
> >  		}
> >  	}
> >  	if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
> > -		memset(map, 0, PAGE_SIZE / 2);
> > -		memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
> > +		memset(map, 0, page_size / 2);
> > +		memset(map + page_size, 0, bo_size - page_size);
> >  	} else {
> >  		memset(map, 0, bo_size);
> >  	}
> > -- 
> > 2.34.1
> > 


Thread overview: 30+ messages
2023-11-16 14:53 [igt-dev] [PATCH v1 00/13] uAPI Alignment - Cleanup and future proof Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 01/13] drm-uapi/xe: Extend drm_xe_vm_bind_op Francois Dugast
2023-11-21 17:01   ` Kamil Konieczny
2023-11-16 14:53 ` [igt-dev] [PATCH v1 02/13] xe_ioctl: Converge bo_create to the most used version Francois Dugast
2023-11-21 17:13   ` Kamil Konieczny
2023-11-28 16:11     ` Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 03/13] xe_ioctl: Rename *xe_bo_create_flags to simply xe_bo_create Francois Dugast
2023-11-21 17:24   ` Kamil Konieczny
2023-11-16 14:53 ` [igt-dev] [PATCH v1 04/13] xe_query: Add missing include Francois Dugast
2023-11-21 17:00   ` Kamil Konieczny
2023-11-28 17:48     ` Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 05/13] xe_query: Kill visible_vram_if_possible Francois Dugast
2023-11-21 17:40   ` Kamil Konieczny
2023-11-28 19:49     ` Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 06/13] drm-uapi/xe: Separate bo_create placement from flags Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 07/13] xe: s/hw_engine/engine Francois Dugast
2023-11-21 18:15   ` Kamil Konieczny
2023-11-16 14:53 ` [igt-dev] [PATCH v1 08/13] drm-uapi/xe: Align with drm_xe_query_engine_info Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 09/13] drm-uapi/xe: Reject bo creation of unaligned size Francois Dugast
2023-11-17 18:44   ` Kamil Konieczny
2023-11-28 20:31     ` Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 10/13] drm-uapi/xe: Align on a common way to return arrays (memory regions) Francois Dugast
2023-11-17 18:46   ` Kamil Konieczny
2023-11-16 14:53 ` [igt-dev] [PATCH v1 11/13] drm-uapi/xe: Align on a common way to return arrays (gt) Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 12/13] drm-uapi/xe: Align on a common way to return arrays (engines) Francois Dugast
2023-11-16 14:53 ` [igt-dev] [PATCH v1 13/13] drm-uapi/xe: Add Tile ID information to the GT info query Francois Dugast
2023-11-16 15:20 ` [igt-dev] ✗ Fi.CI.BUILD: failure for uAPI Alignment - Cleanup and future proof Patchwork
2023-11-17 18:12 ` [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - Cleanup and future proof (rev2) Patchwork
2023-11-17 19:53 ` [igt-dev] ✗ CI.xeBAT: failure " Patchwork
2023-11-18 14:55 ` [igt-dev] ✗ Fi.CI.IGT: " Patchwork
