* [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3
@ 2023-09-26 13:00 Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
                   ` (26 more replies)
  0 siblings, 27 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

As a result of the uAPI review efforts started by Thomas[1],
we have identified many changes to our uAPI that would break
compatibility. That is not acceptable once we are merged
upstream. So, let's break it now, before it is too late, and
start upstreaming a good, reliable and clean uAPI.

Most of this work on putting these patches together for a single
shot was led by Francois.

[1] - https://lore.kernel.org/all/863bebd0c624d6fc2b38c0a06b63e468b4185128.camel@linux.intel.com/

Francois Dugast (9):
  drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with
    latest uapi
  drm-uapi/xe_drm: Remove MMIO ioctl and align with latest uapi
  drm-uapi/xe: Align with uAPI to query micro-controller firmware version
  drm-uapi/xe: Align with DRM_XE_DEVICE_QUERY_HWCONFIG documentation
  drm-uapi/xe: Align with uAPI to pad to drm_xe_engine_class_instance
  drm-uapi/xe: Align with uAPI update query HuC micro-controller firmware
    version
  drm-uapi/xe: Align with uAPI update for query config num_params
  drm-uapi/xe: Align with uAPI update to add DRM_ prefix in uAPI
    constants
  drm-uapi/xe: Align with uAPI update to add _FLAG to constants usable
    for flags

Matthew Brost (4):
  xe_exec_balancer: Enable parallel submission and compute mode
  xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a
    compute VM
  xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE
  xe: Update to new VM bind uAPI

Rodrigo Vivi (10):
  drm-uapi/xe_drm: Align with new PMU interface
  drm-uapi/xe: Use common drm_xe_ext_set_property extension
  drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension
  drm-uapi/xe: Replace useless 'instance' per unique gt_id
  drm-uapi/xe: Remove unused field of drm_xe_query_gt
  drm-uapi/xe: Rename gts to gt_list
  drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY
  drm-uapi/xe: Align with documentation updates
  drm-uapi/xe: Align with Crystal Reference Clock updates
  drm-uapi/xe: Align with extension of drm_xe_vm_bind_op

Umesh Nerlige Ramappa (1):
  tests/intel/xe_query: Add a test for querying cs cycles

 include/drm-uapi/xe_drm.h                | 542 +++++++++++++----------
 lib/igt_fb.c                             |   2 +-
 lib/intel_batchbuffer.c                  |  25 +-
 lib/intel_compute.c                      |   6 +-
 lib/intel_ctx.c                          |   4 +-
 lib/xe/xe_ioctl.c                        |  54 +--
 lib/xe/xe_ioctl.h                        |   9 +-
 lib/xe/xe_query.c                        |  93 ++--
 lib/xe/xe_query.h                        |  15 +-
 lib/xe/xe_spin.c                         |  15 +-
 lib/xe/xe_util.c                         |  15 +-
 lib/xe/xe_util.h                         |   4 +-
 tests/intel-ci/xe-fast-feedback.testlist |   4 +-
 tests/intel/xe_ccs.c                     |   8 +-
 tests/intel/xe_create.c                  |   6 +-
 tests/intel/xe_debugfs.c                 |  14 +-
 tests/intel/xe_dma_buf_sync.c            |   4 +-
 tests/intel/xe_drm_fdinfo.c              |  20 +-
 tests/intel/xe_evict.c                   |  51 +--
 tests/intel/xe_exec_balancer.c           |  63 +--
 tests/intel/xe_exec_basic.c              |  28 +-
 tests/intel/xe_exec_compute_mode.c       |  30 +-
 tests/intel/xe_exec_fault_mode.c         |  20 +-
 tests/intel/xe_exec_reset.c              |  80 ++--
 tests/intel/xe_exec_store.c              |  16 +-
 tests/intel/xe_exec_threads.c            | 179 +++-----
 tests/intel/xe_exercise_blt.c            |   6 +-
 tests/intel/xe_guc_pc.c                  |  12 +-
 tests/intel/xe_huc_copy.c                |   4 +-
 tests/intel/xe_intel_bb.c                |   2 +-
 tests/intel/xe_mmio.c                    |  91 ----
 tests/intel/xe_noexec_ping_pong.c        |  12 +-
 tests/intel/xe_pm.c                      |  14 +-
 tests/intel/xe_pm_residency.c            |   2 +-
 tests/intel/xe_query.c                   | 272 ++++++++++--
 tests/intel/xe_spin_batch.c              |   2 +-
 tests/intel/xe_vm.c                      | 316 +++----------
 tests/intel/xe_waitfence.c               |  21 +-
 tests/kms_flip.c                         |   0
 tests/meson.build                        |   1 -
 tools/meson.build                        |   1 -
 tools/xe_reg.c                           | 366 ---------------
 42 files changed, 979 insertions(+), 1450 deletions(-)
 delete mode 100644 tests/intel/xe_mmio.c
 mode change 100755 => 100644 tests/kms_flip.c
 delete mode 100644 tools/xe_reg.c

-- 
2.34.1


* [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 16:50   ` Tvrtko Ursulin
  2023-09-27  4:58   ` Aravind Iddamsetty
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 02/24] tests/intel/xe_query: Add a test for querying cs cycles Francois Dugast
                   ` (25 subsequent siblings)
  26 siblings, 2 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/pmu: Enable PMU interface")

Cc: Francois Dugast <francois.dugast@intel.com>
Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 include/drm-uapi/xe_drm.h | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 804c02270..643eb6e82 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -1053,6 +1053,44 @@ struct drm_xe_vm_madvise {
 	__u64 reserved[2];
 };
 
+/**
+ * DOC: XE PMU event config IDs
+ *
+ * Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
+ * as part of perf_event_open syscall to read a particular event.
+ *
+ * For example to open the XE_PMU_INTERRUPTS(0):
+ *
+ * .. code-block:: C
+ *	struct perf_event_attr attr;
+ *	long long count;
+ *	int cpu = 0;
+ *	int fd;
+ *
+ *	memset(&attr, 0, sizeof(struct perf_event_attr));
+ *	attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
+ *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
+ *	attr.use_clockid = 1;
+ *	attr.clockid = CLOCK_MONOTONIC;
+ *	attr.config = XE_PMU_INTERRUPTS(0);
+ *
+ *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
+ */
+
+/*
+ * Top bits of every counter are GT id.
+ */
+#define __XE_PMU_GT_SHIFT (56)
+
+#define ___XE_PMU_OTHER(gt, x) \
+	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
+
+#define XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)
+#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
+#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
+#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
+#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.34.1


* [igt-dev] [PATCH v3 02/24] tests/intel/xe_query: Add a test for querying cs cycles
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 03/24] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Francois Dugast
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>

The DRM_XE_QUERY_CS_CYCLES query provides a way for the user to obtain
CPU and GPU timestamps as close to each other as possible.

Add a test to query cs cycles and GPU/CPU time correlation, as well as
to validate the query parameters.

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 include/drm-uapi/xe_drm.h |  95 ++++++++++++++----
 tests/intel/xe_query.c    | 200 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 277 insertions(+), 18 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 643eb6e82..437f57972 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -128,6 +128,24 @@ struct xe_user_extension {
 #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
 #define DRM_IOCTL_XE_VM_MADVISE			 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
 
+/** struct drm_xe_engine_class_instance - instance of an engine class */
+struct drm_xe_engine_class_instance {
+#define DRM_XE_ENGINE_CLASS_RENDER		0
+#define DRM_XE_ENGINE_CLASS_COPY		1
+#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE	2
+#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE	3
+#define DRM_XE_ENGINE_CLASS_COMPUTE		4
+	/*
+	 * Kernel only class (not actual hardware engine class). Used for
+	 * creating ordered queues of VM bind operations.
+	 */
+#define DRM_XE_ENGINE_CLASS_VM_BIND		5
+	__u16 engine_class;
+
+	__u16 engine_instance;
+	__u16 gt_id;
+};
+
 /**
  * enum drm_xe_memory_class - Supported memory classes.
  */
@@ -219,6 +237,64 @@ struct drm_xe_query_mem_region {
 	__u64 reserved[6];
 };
 
+/**
+ * struct drm_xe_query_cs_cycles - correlate CPU and GPU timestamps
+ *
+ * If a query is made with a struct drm_xe_device_query where .query
+ * is equal to DRM_XE_QUERY_CS_CYCLES, then the reply uses
+ * struct drm_xe_query_cs_cycles in .data.
+ *
+ * struct drm_xe_query_cs_cycles is allocated by the user and .data points to
+ * this allocated structure. The user must pass .eci and .clockid as inputs to
+ * this query. eci determines the engine and tile info required to fetch the
+ * relevant GPU timestamp. clockid is used to return the specific CPU
+ * timestamp.
+ *
+ * The query returns the command streamer cycles and the frequency that can
+ * be used to calculate the command streamer timestamp. In addition the
+ * query returns a set of cpu timestamps that indicate when the command
+ * streamer cycle count was captured.
+ */
+struct drm_xe_query_cs_cycles {
+	/** Engine for which command streamer cycles is queried. */
+	struct drm_xe_engine_class_instance eci;
+
+	/** MBZ (pad eci to 64 bit) */
+	__u16 rsvd;
+
+	/**
+	 * Command streamer cycles as read from the command streamer
+	 * register at 0x358 offset.
+	 */
+	__u64 cs_cycles;
+
+	/** Frequency of the cs cycles in Hz. */
+	__u64 cs_frequency;
+
+	/**
+	 * CPU timestamp in ns. The timestamp is captured before reading the
+	 * cs_cycles register using the reference clockid set by the user.
+	 */
+	__u64 cpu_timestamp;
+
+	/**
+	 * Time delta in ns captured around reading the lower dword of the
+	 * cs_cycles register.
+	 */
+	__u64 cpu_delta;
+
+	/**
+	 * Reference clock id for CPU timestamp. For definition, see
+	 * clock_gettime(2) and perf_event_open(2). Supported clock ids are
+	 * CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_REALTIME, CLOCK_BOOTTIME,
+	 * CLOCK_TAI.
+	 */
+	__s32 clockid;
+
+	/** Width of the cs cycle counter in bits. */
+	__u32 width;
+};
+
 /**
  * struct drm_xe_query_mem_usage - describe memory regions and usage
  *
@@ -391,6 +467,7 @@ struct drm_xe_device_query {
 #define DRM_XE_DEVICE_QUERY_GTS		3
 #define DRM_XE_DEVICE_QUERY_HWCONFIG	4
 #define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY	5
+#define DRM_XE_QUERY_CS_CYCLES		6
 	/** @query: The type of data to query */
 	__u32 query;
 
@@ -732,24 +809,6 @@ struct drm_xe_exec_queue_set_property {
 	__u64 reserved[2];
 };
 
-/** struct drm_xe_engine_class_instance - instance of an engine class */
-struct drm_xe_engine_class_instance {
-#define DRM_XE_ENGINE_CLASS_RENDER		0
-#define DRM_XE_ENGINE_CLASS_COPY		1
-#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE	2
-#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE	3
-#define DRM_XE_ENGINE_CLASS_COMPUTE		4
-	/*
-	 * Kernel only class (not actual hardware engine class). Used for
-	 * creating ordered queues of VM bind operations.
-	 */
-#define DRM_XE_ENGINE_CLASS_VM_BIND		5
-	__u16 engine_class;
-
-	__u16 engine_instance;
-	__u16 gt_id;
-};
-
 struct drm_xe_exec_queue_create {
 #define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
 	/** @extensions: Pointer to the first extension struct, if any */
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 5966968d3..acf069f46 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -476,6 +476,200 @@ test_query_invalid_extension(int fd)
 	do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
 }
 
+static bool
+query_cs_cycles_supported(int fd)
+{
+	struct drm_xe_device_query query = {
+		.extensions = 0,
+		.query = DRM_XE_QUERY_CS_CYCLES,
+		.size = 0,
+		.data = 0,
+	};
+
+	return igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query) == 0;
+}
+
+static void
+query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp)
+{
+	struct drm_xe_device_query query = {
+		.extensions = 0,
+		.query = DRM_XE_QUERY_CS_CYCLES,
+		.size = sizeof(*resp),
+		.data = to_user_pointer(resp),
+	};
+
+	do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+	igt_assert(query.size);
+}
+
+static void
+__cs_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
+{
+	struct drm_xe_query_cs_cycles ts1 = {};
+	struct drm_xe_query_cs_cycles ts2 = {};
+	uint64_t delta_cpu, delta_cs, delta_delta;
+	unsigned int exec_queue;
+	int i, usable = 0;
+	igt_spin_t *spin;
+	uint64_t ahnd;
+	uint32_t vm;
+	struct {
+		int32_t id;
+		const char *name;
+	} clock[] = {
+		{ CLOCK_MONOTONIC, "CLOCK_MONOTONIC" },
+		{ CLOCK_MONOTONIC_RAW, "CLOCK_MONOTONIC_RAW" },
+		{ CLOCK_REALTIME, "CLOCK_REALTIME" },
+		{ CLOCK_BOOTTIME, "CLOCK_BOOTTIME" },
+		{ CLOCK_TAI, "CLOCK_TAI" },
+	};
+
+	igt_debug("engine[%u:%u]\n",
+		  hwe->engine_class,
+		  hwe->engine_instance);
+
+	vm = xe_vm_create(fd, 0, 0);
+	exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
+	spin = igt_spin_new(fd, .ahnd = ahnd, .engine = exec_queue, .vm = vm);
+
+	/* Try a new clock every 10 iterations. */
+#define NUM_SNAPSHOTS 10
+	for (i = 0; i < NUM_SNAPSHOTS * ARRAY_SIZE(clock); i++) {
+		int index = i / NUM_SNAPSHOTS;
+
+		ts1.eci = *hwe;
+		ts1.clockid = clock[index].id;
+
+		ts2.eci = *hwe;
+		ts2.clockid = clock[index].id;
+
+		query_cs_cycles(fd, &ts1);
+		query_cs_cycles(fd, &ts2);
+
+		igt_debug("[1] cpu_ts before %llu, reg read time %llu\n",
+			  ts1.cpu_timestamp,
+			  ts1.cpu_delta);
+		igt_debug("[1] cs_ts %llu, freq %llu Hz, width %u\n",
+			  ts1.cs_cycles, ts1.cs_frequency, ts1.width);
+
+		igt_debug("[2] cpu_ts before %llu, reg read time %llu\n",
+			  ts2.cpu_timestamp,
+			  ts2.cpu_delta);
+		igt_debug("[2] cs_ts %llu, freq %llu Hz, width %u\n",
+			  ts2.cs_cycles, ts2.cs_frequency, ts2.width);
+
+		delta_cpu = ts2.cpu_timestamp - ts1.cpu_timestamp;
+
+		if (ts2.cs_cycles >= ts1.cs_cycles)
+			delta_cs = (ts2.cs_cycles - ts1.cs_cycles) *
+				   NSEC_PER_SEC / ts1.cs_frequency;
+		else
+			delta_cs = (((1 << ts2.width) - ts2.cs_cycles) + ts1.cs_cycles) *
+				   NSEC_PER_SEC / ts1.cs_frequency;
+
+		igt_debug("delta_cpu[%lu], delta_cs[%lu]\n",
+			  delta_cpu, delta_cs);
+
+		delta_delta = delta_cpu > delta_cs ?
+			       delta_cpu - delta_cs :
+			       delta_cs - delta_cpu;
+		igt_debug("delta_delta %lu\n", delta_delta);
+
+		if (delta_delta < 5000)
+			usable++;
+
+		/*
+		 * User needs few good snapshots of the timestamps to
+		 * synchronize cpu time with cs time. Check if we have enough
+		 * usable values before moving to the next clockid.
+		 */
+		if (!((i + 1) % NUM_SNAPSHOTS)) {
+			igt_debug("clock %s\n", clock[index].name);
+			igt_debug("usable %d\n", usable);
+			igt_assert(usable > 2);
+			usable = 0;
+		}
+	}
+
+	igt_spin_free(fd, spin);
+	xe_exec_queue_destroy(fd, exec_queue);
+	xe_vm_destroy(fd, vm);
+	put_ahnd(ahnd);
+}
+
+/**
+ * SUBTEST: query-cs-cycles
+ * Description: Query CPU-GPU timestamp correlation
+ */
+static void test_query_cs_cycles(int fd)
+{
+	struct drm_xe_engine_class_instance *hwe;
+
+	igt_require(query_cs_cycles_supported(fd));
+
+	xe_for_each_hw_engine(fd, hwe) {
+		igt_assert(hwe);
+		__cs_cycles(fd, hwe);
+	}
+}
+
+/**
+ * SUBTEST: query-invalid-cs-cycles
+ * Description: Check query with invalid arguments returns expected error code.
+ */
+static void test_cs_cycles_invalid(int fd)
+{
+	struct drm_xe_engine_class_instance *hwe;
+	struct drm_xe_query_cs_cycles ts = {};
+	struct drm_xe_device_query query = {
+		.extensions = 0,
+		.query = DRM_XE_QUERY_CS_CYCLES,
+		.size = sizeof(ts),
+		.data = to_user_pointer(&ts),
+	};
+
+	igt_require(query_cs_cycles_supported(fd));
+
+	/* get one engine */
+	xe_for_each_hw_engine(fd, hwe)
+		break;
+
+	/* sanity check engine selection is valid */
+	ts.eci = *hwe;
+	query_cs_cycles(fd, &ts);
+
+	/* bad instance */
+	ts.eci = *hwe;
+	ts.eci.engine_instance = 0xffff;
+	do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+	ts.eci = *hwe;
+
+	/* bad class */
+	ts.eci.engine_class = 0xffff;
+	do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+	ts.eci = *hwe;
+
+	/* bad gt */
+	ts.eci.gt_id = 0xffff;
+	do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+	ts.eci = *hwe;
+
+	/* non zero rsvd field */
+	ts.rsvd = 1;
+	do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+	ts.rsvd = 0;
+
+	/* bad clockid */
+	ts.clockid = -1;
+	do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+	ts.clockid = 0;
+
+	/* sanity check */
+	query_cs_cycles(fd, &ts);
+}
+
 igt_main
 {
 	int xe;
@@ -501,6 +695,12 @@ igt_main
 	igt_subtest("query-topology")
 		test_query_gt_topology(xe);
 
+	igt_subtest("query-cs-cycles")
+		test_query_cs_cycles(xe);
+
+	igt_subtest("query-invalid-cs-cycles")
+		test_cs_cycles_invalid(xe);
+
 	igt_subtest("query-invalid-query")
 		test_query_invalid_query(xe);
 
-- 
2.34.1


* [igt-dev] [PATCH v3 03/24] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 02/24] tests/intel/xe_query: Add a test for querying cs cycles Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 04/24] drm-uapi/xe_drm: Remove MMIO ioctl and " Francois Dugast
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Align with commit ("drm/xe/uapi: Separate VM_BIND's operation and flag")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 include/drm-uapi/xe_drm.h     | 14 ++++++++------
 lib/intel_batchbuffer.c       | 11 +++++++----
 lib/xe/xe_ioctl.c             | 31 ++++++++++++++++---------------
 lib/xe/xe_ioctl.h             |  6 +++---
 lib/xe/xe_util.c              |  9 ++++++---
 tests/intel/xe_exec_basic.c   |  2 +-
 tests/intel/xe_exec_threads.c |  2 +-
 tests/intel/xe_vm.c           | 16 +++++++++-------
 8 files changed, 51 insertions(+), 40 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 437f57972..fce42f62f 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -663,8 +663,10 @@ struct drm_xe_vm_bind_op {
 #define XE_VM_BIND_OP_RESTART		0x3
 #define XE_VM_BIND_OP_UNMAP_ALL		0x4
 #define XE_VM_BIND_OP_PREFETCH		0x5
+	/** @op: Bind operation to perform */
+	__u32 op;
 
-#define XE_VM_BIND_FLAG_READONLY	(0x1 << 16)
+#define XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
 	/*
 	 * A bind ops completions are always async, hence the support for out
 	 * sync. This flag indicates the allocation of the memory for new page
@@ -689,12 +691,12 @@ struct drm_xe_vm_bind_op {
 	 * configured in the VM and must be set if the VM is configured with
 	 * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
 	 */
-#define XE_VM_BIND_FLAG_ASYNC		(0x1 << 17)
+#define XE_VM_BIND_FLAG_ASYNC		(0x1 << 1)
 	/*
 	 * Valid on a faulting VM only, do the MAP operation immediately rather
 	 * than deferring the MAP to the page fault handler.
 	 */
-#define XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 18)
+#define XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
 	/*
 	 * When the NULL flag is set, the page tables are setup with a special
 	 * bit which indicates writes are dropped and all reads return zero.  In
@@ -702,9 +704,9 @@ struct drm_xe_vm_bind_op {
 	 * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
 	 * intended to implement VK sparse bindings.
 	 */
-#define XE_VM_BIND_FLAG_NULL		(0x1 << 19)
-	/** @op: Operation to perform (lower 16 bits) and flags (upper 16 bits) */
-	__u32 op;
+#define XE_VM_BIND_FLAG_NULL		(0x1 << 3)
+	/** @flags: Bind flags */
+	__u32 flags;
 
 	/** @mem_region: Memory region to prefetch VMA to, instance not a mask */
 	__u32 region;
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index e7b1b755f..6e668d28c 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1281,7 +1281,8 @@ void intel_bb_destroy(struct intel_bb *ibb)
 }
 
 static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
-						   uint32_t op, uint32_t region)
+						   uint32_t op, uint32_t flags,
+						   uint32_t region)
 {
 	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
 	struct drm_xe_vm_bind_op *bind_ops, *ops;
@@ -1298,6 +1299,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 			ops->obj = objects[i]->handle;
 
 		ops->op = op;
+		ops->flags = flags;
 		ops->obj_offset = 0;
 		ops->addr = objects[i]->offset;
 		ops->range = objects[i]->rsvd1;
@@ -1323,9 +1325,10 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
 
 	if (ibb->num_objects > 1) {
 		struct drm_xe_vm_bind_op *bind_ops;
-		uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+		uint32_t op = XE_VM_BIND_OP_UNMAP;
+		uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
 
-		bind_ops = xe_alloc_bind_ops(ibb, op, 0);
+		bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
 		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
 				 ibb->num_objects, syncs, 2);
 		free(bind_ops);
@@ -2354,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 
 	syncs[0].handle = syncobj_create(ibb->fd, 0);
 	if (ibb->num_objects > 1) {
-		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
+		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
 		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
 				 ibb->num_objects, syncs, 1);
 		free(bind_ops);
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 730dcfd16..48cd185de 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
 			    uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
-			    XE_VM_BIND_OP_UNMAP_ALL | XE_VM_BIND_FLAG_ASYNC,
+			    XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -91,8 +91,8 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
 
 int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
-		  struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
-		  uint64_t ext)
+		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+		  uint32_t region, uint64_t ext)
 {
 	struct drm_xe_vm_bind bind = {
 		.extensions = ext,
@@ -103,6 +103,7 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		.bind.range = size,
 		.bind.addr = addr,
 		.bind.op = op,
+		.bind.flags = flags,
 		.bind.region = region,
 		.num_syncs = num_syncs,
 		.syncs = (uintptr_t)sync,
@@ -117,11 +118,11 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 
 void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 			  uint64_t offset, uint64_t addr, uint64_t size,
-			  uint32_t op, struct drm_xe_sync *sync,
+			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
 			  uint32_t num_syncs, uint32_t region, uint64_t ext)
 {
 	igt_assert_eq(__xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
-				   op, sync, num_syncs, region, ext), 0);
+				   op, flags, sync, num_syncs, region, ext), 0);
 }
 
 void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
@@ -129,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, sync, num_syncs, 0, 0);
+			    XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
 }
 
 void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
@@ -137,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
 		  struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
-			    XE_VM_BIND_OP_UNMAP, sync, num_syncs, 0, 0);
+			    XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
 }
 
 void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
@@ -146,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
 			  uint32_t region)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
-			    XE_VM_BIND_OP_PREFETCH | XE_VM_BIND_FLAG_ASYNC,
+			    XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, region, 0);
 }
 
@@ -155,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		      struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, sync,
+			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
 			    num_syncs, 0, 0);
 }
 
@@ -165,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
 			    uint32_t flags)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC | flags,
+			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -174,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
 			      struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
-			    XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC,
+			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -184,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
 				    uint32_t num_syncs, uint32_t flags)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
-			    XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC |
+			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
 			    flags, sync, num_syncs, 0, 0);
 }
 
@@ -193,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
 			struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
-			    XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC, sync,
+			    XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
 			    num_syncs, 0, 0);
 }
 
@@ -205,8 +206,8 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		.handle = syncobj_create(fd, 0),
 	};
 
-	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, &sync, 1, 0,
-			    0);
+	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
+			    0, 0);
 
 	igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
 	syncobj_destroy(fd, sync.handle);
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 6c281b3bf..f0e4109dc 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -19,11 +19,11 @@ uint32_t xe_cs_prefetch_size(int fd);
 uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext);
 int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
-		  struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
-		  uint64_t ext);
+		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+		  uint32_t region, uint64_t ext);
 void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 			  uint64_t offset, uint64_t addr, uint64_t size,
-			  uint32_t op, struct drm_xe_sync *sync,
+			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
 			  uint32_t num_syncs, uint32_t region, uint64_t ext);
 void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		uint64_t addr, uint64_t size,
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 2f9ffe2f1..5fa4d4610 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -116,7 +116,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
 {
 	struct drm_xe_vm_bind_op *bind_ops, *ops;
 	struct xe_object *obj;
-	uint32_t num_objects = 0, i = 0, op;
+	uint32_t num_objects = 0, i = 0, op, flags;
 
 	igt_list_for_each_entry(obj, obj_list, link)
 		num_objects++;
@@ -134,13 +134,16 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
 		ops = &bind_ops[i];
 
 		if (obj->bind_op == XE_OBJECT_BIND) {
-			op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
+			op = XE_VM_BIND_OP_MAP;
+			flags = XE_VM_BIND_FLAG_ASYNC;
 			ops->obj = obj->handle;
 		} else {
-			op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+			op = XE_VM_BIND_OP_UNMAP;
+			flags = XE_VM_BIND_FLAG_ASYNC;
 		}
 
 		ops->op = op;
+		ops->flags = flags;
 		ops->obj_offset = 0;
 		ops->addr = obj->offset;
 		ops->range = obj->size;
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index a4414e052..e29398aaa 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -170,7 +170,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & SPARSE)
 			__xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
 					    0, 0, sparse_addr[i], bo_size,
-					    XE_VM_BIND_OP_MAP |
+					    XE_VM_BIND_OP_MAP,
 					    XE_VM_BIND_FLAG_ASYNC |
 					    XE_VM_BIND_FLAG_NULL, sync,
 					    1, 0, 0);
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 12e76874e..1f9af894f 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -609,7 +609,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			if (rebind_error_inject == i)
 				__xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
 						    0, 0, addr, bo_size,
-						    XE_VM_BIND_OP_UNMAP |
+						    XE_VM_BIND_OP_UNMAP,
 						    XE_VM_BIND_FLAG_ASYNC |
 						    INJECT_ERROR, sync_all,
 						    n_exec_queues, 0, 0);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 4952ea786..f96305851 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -316,7 +316,7 @@ static void userptr_invalid(int fd)
 	vm = xe_vm_create(fd, 0, 0);
 	munmap(data, size);
 	ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
-			   size, XE_VM_BIND_OP_MAP_USERPTR, NULL, 0, 0, 0);
+			   size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
 	igt_assert(ret == -EFAULT);
 
 	xe_vm_destroy(fd, vm);
@@ -437,7 +437,7 @@ static void vm_async_ops_err(int fd, bool destroy)
 		if (i == N_BINDS / 8)	/* Inject error on this bind */
 			__xe_vm_bind_assert(fd, vm, 0, bo, 0,
 					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_MAP |
+					    bo_size, XE_VM_BIND_OP_MAP,
 					    XE_VM_BIND_FLAG_ASYNC |
 					    INJECT_ERROR, &sync, 1, 0, 0);
 		else
@@ -451,7 +451,7 @@ static void vm_async_ops_err(int fd, bool destroy)
 		if (i == N_BINDS / 8)
 			__xe_vm_bind_assert(fd, vm, 0, 0, 0,
 					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_UNMAP |
+					    bo_size, XE_VM_BIND_OP_UNMAP,
 					    XE_VM_BIND_FLAG_ASYNC |
 					    INJECT_ERROR, &sync, 1, 0, 0);
 		else
@@ -465,7 +465,7 @@ static void vm_async_ops_err(int fd, bool destroy)
 		if (i == N_BINDS / 8)
 			__xe_vm_bind_assert(fd, vm, 0, bo, 0,
 					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_MAP |
+					    bo_size, XE_VM_BIND_OP_MAP,
 					    XE_VM_BIND_FLAG_ASYNC |
 					    INJECT_ERROR, &sync, 1, 0, 0);
 		else
@@ -479,7 +479,7 @@ static void vm_async_ops_err(int fd, bool destroy)
 		if (i == N_BINDS / 8)
 			__xe_vm_bind_assert(fd, vm, 0, 0, 0,
 					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_UNMAP |
+					    bo_size, XE_VM_BIND_OP_UNMAP,
 					    XE_VM_BIND_FLAG_ASYNC |
 					    INJECT_ERROR, &sync, 1, 0, 0);
 		else
@@ -928,7 +928,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 		bind_ops[i].range = bo_size;
 		bind_ops[i].addr = addr;
 		bind_ops[i].tile_mask = 0x1 << eci->gt_id;
-		bind_ops[i].op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
+		bind_ops[i].op = XE_VM_BIND_OP_MAP;
+		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
 		bind_ops[i].region = 0;
 		bind_ops[i].reserved[0] = 0;
 		bind_ops[i].reserved[1] = 0;
@@ -972,7 +973,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 
 	for (i = 0; i < n_execs; ++i) {
 		bind_ops[i].obj = 0;
-		bind_ops[i].op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+		bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
+		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
 	}
 
 	syncobj_reset(fd, &sync[0].handle, 1);
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 04/24] drm-uapi/xe_drm: Remove MMIO ioctl and align with latest uapi
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (2 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 03/24] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 05/24] xe_exec_balancer: Enable parallel submission and compute mode Francois Dugast
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Align with commit ("drm/xe/uapi: Remove MMIO ioctl")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 include/drm-uapi/xe_drm.h                |  31 +-
 tests/intel-ci/xe-fast-feedback.testlist |   2 -
 tests/intel/xe_mmio.c                    |  91 ------
 tests/meson.build                        |   1 -
 tools/meson.build                        |   1 -
 tools/xe_reg.c                           | 366 -----------------------
 6 files changed, 4 insertions(+), 488 deletions(-)
 delete mode 100644 tests/intel/xe_mmio.c
 delete mode 100644 tools/xe_reg.c

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index fce42f62f..ed33be898 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -106,11 +106,10 @@ struct xe_user_extension {
 #define DRM_XE_EXEC_QUEUE_CREATE		0x06
 #define DRM_XE_EXEC_QUEUE_DESTROY		0x07
 #define DRM_XE_EXEC			0x08
-#define DRM_XE_MMIO			0x09
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY	0x0a
-#define DRM_XE_WAIT_USER_FENCE		0x0b
-#define DRM_XE_VM_MADVISE		0x0c
-#define DRM_XE_EXEC_QUEUE_GET_PROPERTY	0x0d
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY	0x09
+#define DRM_XE_WAIT_USER_FENCE		0x0a
+#define DRM_XE_VM_MADVISE		0x0b
+#define DRM_XE_EXEC_QUEUE_GET_PROPERTY	0x0c
 
 /* Must be kept compact -- no holes */
 #define DRM_IOCTL_XE_DEVICE_QUERY		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEVICE_QUERY, struct drm_xe_device_query)
@@ -123,7 +122,6 @@ struct xe_user_extension {
 #define DRM_IOCTL_XE_EXEC_QUEUE_GET_PROPERTY	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_GET_PROPERTY, struct drm_xe_exec_queue_get_property)
 #define DRM_IOCTL_XE_EXEC_QUEUE_DESTROY		 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_DESTROY, struct drm_xe_exec_queue_destroy)
 #define DRM_IOCTL_XE_EXEC			 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
-#define DRM_IOCTL_XE_MMIO			DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MMIO, struct drm_xe_mmio)
 #define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
 #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
 #define DRM_IOCTL_XE_VM_MADVISE			 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
@@ -939,27 +937,6 @@ struct drm_xe_exec {
 	__u64 reserved[2];
 };
 
-struct drm_xe_mmio {
-	/** @extensions: Pointer to the first extension struct, if any */
-	__u64 extensions;
-
-	__u32 addr;
-
-#define DRM_XE_MMIO_8BIT	0x0
-#define DRM_XE_MMIO_16BIT	0x1
-#define DRM_XE_MMIO_32BIT	0x2
-#define DRM_XE_MMIO_64BIT	0x3
-#define DRM_XE_MMIO_BITS_MASK	0x3
-#define DRM_XE_MMIO_READ	0x4
-#define DRM_XE_MMIO_WRITE	0x8
-	__u32 flags;
-
-	__u64 value;
-
-	/** @reserved: Reserved */
-	__u64 reserved[2];
-};
-
 /**
  * struct drm_xe_wait_user_fence - wait user fence
  *
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index 610cc958c..a9fe43b08 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -141,8 +141,6 @@ igt@xe_mmap@bad-object
 igt@xe_mmap@system
 igt@xe_mmap@vram
 igt@xe_mmap@vram-system
-igt@xe_mmio@mmio-timestamp
-igt@xe_mmio@mmio-invalid
 igt@xe_pm_residency@gt-c6-on-idle
 igt@xe_prime_self_import@basic-with_one_bo
 igt@xe_prime_self_import@basic-with_fd_dup
diff --git a/tests/intel/xe_mmio.c b/tests/intel/xe_mmio.c
deleted file mode 100644
index 9ac544770..000000000
--- a/tests/intel/xe_mmio.c
+++ /dev/null
@@ -1,91 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-/**
- * TEST: Test if mmio feature
- * Category: Software building block
- * Sub-category: mmio
- * Functionality: mmap
- */
-
-#include "igt.h"
-
-#include "xe_drm.h"
-#include "xe/xe_ioctl.h"
-#include "xe/xe_query.h"
-
-#include <string.h>
-
-#define RCS_TIMESTAMP 0x2358
-
-/**
- * SUBTEST: mmio-timestamp
- * Test category: functionality test
- * Description:
- *	Try to run mmio ioctl with 32 and 64 bits and check it a timestamp
- *	matches
- */
-
-static void test_xe_mmio_timestamp(int fd)
-{
-	int ret;
-	struct drm_xe_mmio mmio = {
-		.addr = RCS_TIMESTAMP,
-		.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_64BIT,
-	};
-	ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
-	if (!ret)
-		igt_debug("RCS_TIMESTAMP 64b = 0x%llx\n", mmio.value);
-	igt_assert(!ret);
-	mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_32BIT;
-	mmio.value = 0;
-	ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
-	if (!ret)
-		igt_debug("RCS_TIMESTAMP 32b = 0x%llx\n", mmio.value);
-	igt_assert(!ret);
-}
-
-
-/**
- * SUBTEST: mmio-invalid
- * Test category: negative test
- * Description: Try to run mmio ioctl with 8, 16 and 32 and 64 bits mmio
- */
-
-static void test_xe_mmio_invalid(int fd)
-{
-	int ret;
-	struct drm_xe_mmio mmio = {
-		.addr = RCS_TIMESTAMP,
-		.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_8BIT,
-	};
-	ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
-	igt_assert(ret);
-	mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_16BIT;
-	mmio.value = 0;
-	ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
-	igt_assert(ret);
-	mmio.addr = RCS_TIMESTAMP;
-	mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_64BIT;
-	mmio.value = 0x1;
-	ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
-	igt_assert(ret);
-}
-
-igt_main
-{
-	int fd;
-
-	igt_fixture
-		fd = drm_open_driver(DRIVER_XE);
-
-	igt_subtest("mmio-timestamp")
-		test_xe_mmio_timestamp(fd);
-	igt_subtest("mmio-invalid")
-		test_xe_mmio_invalid(fd);
-
-	igt_fixture
-		drm_close_driver(fd);
-}
diff --git a/tests/meson.build b/tests/meson.build
index 974cb433b..c3de337c8 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -293,7 +293,6 @@ intel_xe_progs = [
 	'xe_live_ktest',
 	'xe_media_fill',
 	'xe_mmap',
-	'xe_mmio',
 	'xe_module_load',
 	'xe_noexec_ping_pong',
 	'xe_pm',
diff --git a/tools/meson.build b/tools/meson.build
index 21e244c24..ac79d8b58 100644
--- a/tools/meson.build
+++ b/tools/meson.build
@@ -42,7 +42,6 @@ tools_progs = [
 	'intel_gvtg_test',
 	'dpcd_reg',
 	'lsgpu',
-	'xe_reg',
 ]
 tool_deps = igt_deps
 tool_deps += zlib
diff --git a/tools/xe_reg.c b/tools/xe_reg.c
deleted file mode 100644
index 1f7b384d3..000000000
--- a/tools/xe_reg.c
+++ /dev/null
@@ -1,366 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2021 Intel Corporation
- */
-
-#include "igt.h"
-#include "igt_device_scan.h"
-
-#include "xe_drm.h"
-
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-
-#define DECL_XE_MMIO_READ_FN(bits) \
-static inline uint##bits##_t \
-xe_mmio_read##bits(int fd, uint32_t reg) \
-{ \
-	struct drm_xe_mmio mmio = { \
-		.addr = reg, \
-		.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_##bits##BIT, \
-	}; \
-\
-	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
-\
-	return mmio.value;\
-}\
-static inline void \
-xe_mmio_write##bits(int fd, uint32_t reg, uint##bits##_t value) \
-{ \
-	struct drm_xe_mmio mmio = { \
-		.addr = reg, \
-		.flags = DRM_XE_MMIO_WRITE | DRM_XE_MMIO_##bits##BIT, \
-		.value = value, \
-	}; \
-\
-	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
-}
-
-DECL_XE_MMIO_READ_FN(8)
-DECL_XE_MMIO_READ_FN(16)
-DECL_XE_MMIO_READ_FN(32)
-DECL_XE_MMIO_READ_FN(64)
-
-static void print_help(FILE *fp)
-{
-	fprintf(fp, "usage: xe_reg read REG1 [REG2]...\n");
-	fprintf(fp, "       xe_reg write REG VALUE\n");
-}
-
-enum ring {
-	RING_UNKNOWN = -1,
-	RING_RCS0,
-	RING_BCS0,
-};
-
-static const struct ring_info {
-	enum ring ring;
-	const char *name;
-	uint32_t mmio_base;
-} ring_info[] = {
-	{RING_RCS0, "rcs0", 0x02000, },
-	{RING_BCS0, "bcs0", 0x22000, },
-};
-
-static const struct ring_info *ring_info_for_name(const char *name)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(ring_info); i++)
-		if (strcmp(name, ring_info[i].name) == 0)
-			return &ring_info[i];
-
-	return NULL;
-}
-
-struct reg_info {
-	const char *name;
-	bool is_ring;
-	uint32_t addr_low;
-	uint32_t addr_high;
-} reg_info[] = {
-#define REG32(name, addr) { #name, false, addr }
-#define REG64(name, low, high) { #name, false, low, high }
-#define RING_REG32(name, addr) { #name, true, addr }
-#define RING_REG64(name, low, high) { #name, true, low, high }
-
-	RING_REG64(ACTHD, 0x74, 0x5c),
-	RING_REG32(BB_ADDR_DIFF, 0x154),
-	RING_REG64(BB_ADDR, 0x140, 0x168),
-	RING_REG32(BB_PER_CTX_PTR, 0x2c0),
-	RING_REG64(EXECLIST_STATUS, 0x234, 0x238),
-	RING_REG64(EXECLIST_SQ0, 0x510, 0x514),
-	RING_REG64(EXECLIST_SQ1, 0x518, 0x51c),
-	RING_REG32(HWS_PGA, 0x80),
-	RING_REG32(INDIRECT_CTX, 0x1C4),
-	RING_REG32(INDIRECT_CTX_OFFSET, 0x1C8),
-	RING_REG32(NOPID, 0x94),
-	RING_REG64(PML4E, 0x270, 0x274),
-	RING_REG32(RING_BUFFER_CTL, 0x3c),
-	RING_REG32(RING_BUFFER_HEAD, 0x34),
-	RING_REG32(RING_BUFFER_START, 0x38),
-	RING_REG32(RING_BUFFER_TAIL, 0x30),
-	RING_REG64(SBB_ADDR, 0x114, 0x11c),
-	RING_REG32(SBB_STATE, 0x118),
-
-#undef REG32
-#undef REG64
-#undef RING_REG32
-#undef RING_REG64
-};
-
-static const struct reg_info *reg_info_for_name(const char *name)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(reg_info); i++)
-		if (strcmp(name, reg_info[i].name) == 0)
-			return &reg_info[i];
-
-	return NULL;
-}
-
-static int print_reg_for_info(int xe, FILE *fp, const struct reg_info *reg,
-			      const struct ring_info *ring)
-{
-	if (reg->is_ring) {
-		if (!ring) {
-			fprintf(stderr, "%s is a ring register but --ring "
-					"not set\n", reg->name);
-			return EXIT_FAILURE;
-		}
-
-		if (reg->addr_high) {
-			uint32_t low = xe_mmio_read32(xe, reg->addr_low +
-							  ring->mmio_base);
-			uint32_t high = xe_mmio_read32(xe, reg->addr_high +
-							   ring->mmio_base);
-
-			fprintf(fp, "%s[%s] = 0x%08x %08x\n", reg->name,
-				ring->name, high, low);
-		} else {
-			uint32_t value = xe_mmio_read32(xe, reg->addr_low +
-							    ring->mmio_base);
-
-			fprintf(fp, "%s[%s] = 0x%08x\n", reg->name,
-				ring->name, value);
-		}
-	} else {
-		if (reg->addr_high) {
-			uint32_t low = xe_mmio_read32(xe, reg->addr_low);
-			uint32_t high = xe_mmio_read32(xe, reg->addr_high);
-
-			fprintf(fp, "%s = 0x%08x %08x\n", reg->name, high, low);
-		} else {
-			uint32_t value = xe_mmio_read32(xe, reg->addr_low);
-
-			fprintf(fp, "%s = 0x%08x\n", reg->name, value);
-		}
-	}
-
-	return 0;
-}
-
-static void print_reg_for_addr(int xe, FILE *fp, uint32_t addr)
-{
-	uint32_t value = xe_mmio_read32(xe, addr);
-
-	fprintf(fp, "MMIO[0x%05x] = 0x%08x\n", addr, value);
-}
-
-enum opt {
-	OPT_UNKNOWN = '?',
-	OPT_END = -1,
-	OPT_DEVICE,
-	OPT_RING,
-	OPT_ALL,
-};
-
-static int read_reg(int argc, char *argv[])
-{
-	int xe, i, err, index;
-	unsigned long reg_addr;
-	char *endp = NULL;
-	const struct ring_info *ring = NULL;
-	enum opt opt;
-	bool dump_all = false;
-
-	static struct option options[] = {
-		{ "device",	required_argument,	NULL,	OPT_DEVICE },
-		{ "ring",	required_argument,	NULL,	OPT_RING },
-		{ "all",	no_argument,		NULL,	OPT_ALL },
-	};
-
-	for (opt = 0; opt != OPT_END; ) {
-		opt = getopt_long(argc, argv, "", options, &index);
-
-		switch (opt) {
-		case OPT_DEVICE:
-			igt_device_filter_add(optarg);
-			break;
-		case OPT_RING:
-			ring = ring_info_for_name(optarg);
-			if (!ring) {
-				fprintf(stderr, "invalid ring: %s\n", optarg);
-				return EXIT_FAILURE;
-			}
-			break;
-		case OPT_ALL:
-			dump_all = true;
-			break;
-		case OPT_END:
-			break;
-		case OPT_UNKNOWN:
-			return EXIT_FAILURE;
-		}
-	}
-
-	argc -= optind;
-	argv += optind;
-
-	xe = drm_open_driver(DRIVER_XE);
-	if (dump_all) {
-		for (i = 0; i < ARRAY_SIZE(reg_info); i++) {
-			if (reg_info[i].is_ring != !!ring)
-				continue;
-
-			print_reg_for_info(xe, stdout, &reg_info[i], ring);
-		}
-	} else {
-		for (i = 0; i < argc; i++) {
-			const struct reg_info *reg = reg_info_for_name(argv[i]);
-			if (reg) {
-				err = print_reg_for_info(xe, stdout, reg, ring);
-				if (err)
-					return err;
-				continue;
-			}
-			reg_addr = strtoul(argv[i], &endp, 16);
-			if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
-				fprintf(stderr, "invalid reg address '%s'\n",
-					argv[i]);
-				return EXIT_FAILURE;
-			}
-			print_reg_for_addr(xe, stdout, reg_addr);
-		}
-	}
-
-	return 0;
-}
-
-static int write_reg_for_info(int xe, const struct reg_info *reg,
-			      const struct ring_info *ring,
-			      uint64_t value)
-{
-	if (reg->is_ring) {
-		if (!ring) {
-			fprintf(stderr, "%s is a ring register but --ring "
-					"not set\n", reg->name);
-			return EXIT_FAILURE;
-		}
-
-		xe_mmio_write32(xe, reg->addr_low + ring->mmio_base, value);
-		if (reg->addr_high) {
-			xe_mmio_write32(xe, reg->addr_high + ring->mmio_base,
-					value >> 32);
-		}
-	} else {
-		xe_mmio_write32(xe, reg->addr_low, value);
-		if (reg->addr_high)
-			xe_mmio_write32(xe, reg->addr_high, value >> 32);
-	}
-
-	return 0;
-}
-
-static void write_reg_for_addr(int xe, uint32_t addr, uint32_t value)
-{
-	xe_mmio_write32(xe, addr, value);
-}
-
-static int write_reg(int argc, char *argv[])
-{
-	int xe, index;
-	unsigned long reg_addr;
-	char *endp = NULL;
-	const struct ring_info *ring = NULL;
-	enum opt opt;
-	const char *reg_name;
-	const struct reg_info *reg;
-	uint64_t value;
-
-	static struct option options[] = {
-		{ "device",	required_argument,	NULL,	OPT_DEVICE },
-		{ "ring",	required_argument,	NULL,	OPT_RING },
-	};
-
-	for (opt = 0; opt != OPT_END; ) {
-		opt = getopt_long(argc, argv, "", options, &index);
-
-		switch (opt) {
-		case OPT_DEVICE:
-			igt_device_filter_add(optarg);
-			break;
-		case OPT_RING:
-			ring = ring_info_for_name(optarg);
-			if (!ring) {
-				fprintf(stderr, "invalid ring: %s\n", optarg);
-				return EXIT_FAILURE;
-			}
-			break;
-		case OPT_END:
-			break;
-		case OPT_UNKNOWN:
-			return EXIT_FAILURE;
-		default:
-			break;
-		}
-	}
-
-	argc -= optind;
-	argv += optind;
-
-	if (argc != 2) {
-		print_help(stderr);
-		return EXIT_FAILURE;
-	}
-
-	reg_name = argv[0];
-	value = strtoull(argv[1], &endp, 0);
-	if (*endp) {
-		fprintf(stderr, "Invalid register value: %s\n", argv[1]);
-		return EXIT_FAILURE;
-	}
-
-	xe = drm_open_driver(DRIVER_XE);
-
-	reg = reg_info_for_name(reg_name);
-	if (reg)
-		return write_reg_for_info(xe, reg, ring, value);
-
-	reg_addr = strtoul(reg_name, &endp, 16);
-	if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
-		fprintf(stderr, "invalid reg address '%s'\n", reg_name);
-		return EXIT_FAILURE;
-	}
-	write_reg_for_addr(xe, reg_addr, value);
-
-	return 0;
-}
-
-int main(int argc, char *argv[])
-{
-	if (argc < 2) {
-		print_help(stderr);
-		return EXIT_FAILURE;
-	}
-
-	if (strcmp(argv[1], "read") == 0)
-		return read_reg(argc - 1, argv + 1);
-	else if (strcmp(argv[1], "write") == 0)
-		return write_reg(argc - 1, argv + 1);
-
-	fprintf(stderr, "invalid sub-command: %s", argv[1]);
-	return EXIT_FAILURE;
-}
-- 
2.34.1


* [igt-dev] [PATCH v3 05/24] xe_exec_balancer: Enable parallel submission and compute mode
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (3 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 04/24] drm-uapi/xe_drm: Remove MMIO ioctl and " Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 06/24] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Francois Dugast
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Matthew Brost <matthew.brost@intel.com>

This is now supported. Test it.
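For reference, the width/num_placements split the diff below applies can be sketched as follows. This is an illustrative stand-in struct, not the real drm_xe_exec_queue_create from xe_drm.h: parallel submission puts one batch on each of the N instances (width = N, one placement), while a virtual engine submits one batch that is load-balanced across the N instances (width = 1, N placements).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch only -- a simplified stand-in for the exec queue
 * create parameters, mirroring how the test below picks width vs.
 * num_placements for parallel vs. virtual submission.
 */
struct queue_layout {
	uint16_t width;          /* batch buffers submitted per exec */
	uint16_t num_placements; /* engine instances to balance across */
};

static struct queue_layout pick_layout(bool parallel, uint16_t num_instances)
{
	struct queue_layout l;

	if (parallel) {
		/* Parallel: one batch per instance, a single placement. */
		l.width = num_instances;
		l.num_placements = 1;
	} else {
		/* Virtual: one batch, balanced over all instances. */
		l.width = 1;
		l.num_placements = num_instances;
	}
	return l;
}
```

The same ternaries appear in the diff below, along with num_batch_buffer and the batches[] address array that parallel submission requires.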

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 tests/intel/xe_exec_balancer.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 0314b4cd2..a4a438db7 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -383,6 +383,12 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
  * @virtual-userptr-rebind:		virtual userptr rebind
  * @virtual-userptr-invalidate:		virtual userptr invalidate
  * @virtual-userptr-invalidate-race:	virtual userptr invalidate racy
+ * @parallel-basic:			parallel basic
+ * @parallel-userptr:			parallel userptr
+ * @parallel-rebind:			parallel rebind
+ * @parallel-userptr-rebind:		parallel userptr rebind
+ * @parallel-userptr-invalidate:	parallel userptr invalidate
+ * @parallel-userptr-invalidate-race:	parallel userptr invalidate racy
  */
 
 static void
@@ -460,8 +466,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		};
 		struct drm_xe_exec_queue_create create = {
 			.vm_id = vm,
-			.width = 1,
-			.num_placements = num_placements,
+			.width = flags & PARALLEL ? num_placements : 1,
+			.num_placements = flags & PARALLEL ? 1 : num_placements,
 			.instances = to_user_pointer(eci),
 			.extensions = to_user_pointer(&ext),
 		};
@@ -470,6 +476,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 					&create), 0);
 		exec_queues[i] = create.exec_queue_id;
 	}
+	exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
 
 	sync[0].addr = to_user_pointer(&data[0].vm_sync);
 	if (bo)
@@ -487,8 +494,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		uint64_t batch_addr = addr + batch_offset;
 		uint64_t sdi_offset = (char *)&data[i].data - (char *)data;
 		uint64_t sdi_addr = addr + sdi_offset;
+		uint64_t batches[MAX_INSTANCE];
 		int e = i % n_exec_queues;
 
+		for (j = 0; j < num_placements && flags & PARALLEL; ++j)
+			batches[j] = batch_addr;
+
 		b = 0;
 		data[i].batch[b++] = MI_STORE_DWORD_IMM_GEN4;
 		data[i].batch[b++] = sdi_addr;
@@ -500,7 +511,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		sync[0].addr = addr + (char *)&data[i].exec_sync - (char *)data;
 
 		exec.exec_queue_id = exec_queues[e];
-		exec.address = batch_addr;
+		exec.address = flags & PARALLEL ?
+			to_user_pointer(batches) : batch_addr;
 		xe_exec(fd, &exec);
 
 		if (flags & REBIND && i + 1 != n_execs) {
@@ -661,9 +673,6 @@ igt_main
 					test_exec(fd, gt, class, 1, 0,
 						  s->flags);
 
-		if (s->flags & PARALLEL)
-			continue;
-
 		igt_subtest_f("once-cm-%s", s->name)
 			xe_for_each_gt(fd, gt)
 				xe_for_each_hw_engine_class(class)
-- 
2.34.1


* [igt-dev] [PATCH v3 06/24] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (4 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 05/24] xe_exec_balancer: Enable parallel submission and compute mode Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 07/24] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Francois Dugast
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Matthew Brost <matthew.brost@intel.com>

XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE was used when creating a compute
VM. This just happened to work because it has the same value as
DRM_XE_VM_CREATE_COMPUTE_MODE. Fix this and use the correct flag,
DRM_XE_VM_CREATE_COMPUTE_MODE.
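The failure mode can be sketched with purely illustrative constants (the values here are assumed for the example, not taken from xe_drm.h): the exec-queue property index and the VM-create flag live in different namespaces but happen to share a numeric value, so OR-ing the wrong one into the VM-create flags compiles and produces an identical flags word.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative values only, not the real xe_drm.h definitions.
 * Two unrelated constants that happen to collide numerically.
 */
#define FAKE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE 3 /* a property index */
#define FAKE_VM_CREATE_COMPUTE_MODE               3 /* a VM-create flag */
#define FAKE_VM_CREATE_ASYNC_BIND_OPS             1

/* Build the flags argument for a hypothetical VM-create call. */
static uint32_t vm_create_flags(uint32_t base, uint32_t compute_flag)
{
	return base | compute_flag;
}
```

Because the two constants alias, `vm_create_flags(FAKE_VM_CREATE_ASYNC_BIND_OPS, FAKE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE)` yields the same word as the correct call, which is exactly why the bug went unnoticed until the constants were audited.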

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 tests/intel/xe_exec_threads.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 1f9af894f..d19708f80 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -286,7 +286,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 
 	if (!vm) {
 		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
-				  XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE, 0);
+				  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 		owns_vm = true;
 	}
 
@@ -1076,7 +1076,7 @@ static void threads(int fd, int flags)
 					      to_user_pointer(&ext));
 		vm_compute_mode = xe_vm_create(fd,
 					       DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
-					       XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
+					       DRM_XE_VM_CREATE_COMPUTE_MODE,
 					       0);
 
 		vm_err_thread.capture = &capture;
-- 
2.34.1


* [igt-dev] [PATCH v3 07/24] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (5 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 06/24] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 08/24] drm-uapi/xe: Use common drm_xe_ext_set_property extension Francois Dugast
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Matthew Brost <matthew.brost@intel.com>

XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE has been removed from the uAPI,
so remove all references to it in the Xe tests.

Align with commits
 ("drm/xe: Remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE from uAPI") and
("drm/xe: Deprecate XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE implementation")

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[Rodrigo updated header with built version from make header_install]
[Rodrigo added the commit subjects of the kernel uapi changes]
---
 include/drm-uapi/xe_drm.h          | 19 ++++++-------------
 tests/intel/xe_evict.c             | 14 +++-----------
 tests/intel/xe_exec_balancer.c     |  8 +-------
 tests/intel/xe_exec_compute_mode.c | 20 ++------------------
 tests/intel/xe_exec_reset.c        | 10 ++--------
 tests/intel/xe_exec_threads.c      | 13 ++-----------
 tests/intel/xe_noexec_ping_pong.c  | 10 +---------
 7 files changed, 17 insertions(+), 77 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index ed33be898..a9060bcf8 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -784,21 +784,14 @@ struct drm_xe_exec_queue_set_property {
 	/** @exec_queue_id: Exec queue ID */
 	__u32 exec_queue_id;
 
-#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY			0
+#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
 #define XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
 #define XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
-	/*
-	 * Long running or ULLS engine mode. DMA fences not allowed in this
-	 * mode. Must match the value of DRM_XE_VM_CREATE_COMPUTE_MODE, serves
-	 * as a sanity check the UMD knows what it is doing. Can only be set at
-	 * engine create time.
-	 */
-#define XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE		3
-#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		4
-#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		5
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		6
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		7
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY		8
+#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
+#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY	7
 	/** @property: property to set */
 	__u32 property;
 
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 5b64e56b4..5d8981f8d 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -252,19 +252,11 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 	}
 
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
-
 		if (flags & MULTI_VM)
-			exec_queues[i] = xe_exec_queue_create(fd, i & 1 ? vm2 : vm, eci,
-						      to_user_pointer(&ext));
+			exec_queues[i] = xe_exec_queue_create(fd, i & 1 ? vm2 :
+							      vm, eci, 0);
 		else
-			exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
-						      to_user_pointer(&ext));
+			exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 	}
 
 	for (i = 0; i < n_execs; i++) {
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index a4a438db7..f4f5440f4 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -458,18 +458,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	memset(data, 0, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
 		struct drm_xe_exec_queue_create create = {
 			.vm_id = vm,
 			.width = flags & PARALLEL ? num_placements : 1,
 			.num_placements = flags & PARALLEL ? 1 : num_placements,
 			.instances = to_user_pointer(eci),
-			.extensions = to_user_pointer(&ext),
+			.extensions = 0,
 		};
 
 		igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE,
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 6d1084727..02e7ef201 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -120,15 +120,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 			xe_get_default_alignment(fd));
 
 	for (i = 0; (flags & EXEC_QUEUE_EARLY) && i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
-
-		exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
-					      to_user_pointer(&ext));
+		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		if (flags & BIND_EXECQUEUE)
 			bind_exec_queues[i] =
 				xe_bind_exec_queue_create(fd, vm, 0);
@@ -156,15 +148,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	memset(data, 0, bo_size);
 
 	for (i = 0; !(flags & EXEC_QUEUE_EARLY) && i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
-
-		exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
-					      to_user_pointer(&ext));
+		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		if (flags & BIND_EXECQUEUE)
 			bind_exec_queues[i] =
 				xe_bind_exec_queue_create(fd, vm, 0);
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 6e3f0aa4b..68e17cc98 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -540,14 +540,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	memset(data, 0, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property compute = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
 		struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
-			.base.next_extension = to_user_pointer(&compute),
+			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
@@ -557,7 +551,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & EXEC_QUEUE_RESET)
 			ext = to_user_pointer(&preempt_timeout);
 		else
-			ext = to_user_pointer(&compute);
+			ext = 0;
 
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, ext);
 	};
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index d19708f80..306d8113d 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -313,17 +313,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 	memset(data, 0, bo_size);
 
-	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
-
-		exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
-					      to_user_pointer(&ext));
-	};
+	for (i = 0; i < n_exec_queues; i++)
+		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 
 	pthread_barrier_wait(&barrier);
 
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 3f486adf9..88b22ed11 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -64,13 +64,6 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 	 * stats.
 	 */
 	for (i = 0; i < NUM_VMS; ++i) {
-		struct drm_xe_ext_exec_queue_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
-			.value = 1,
-		};
-
 		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 		for (j = 0; j < NUM_BOS; ++j) {
 			igt_debug("Creating bo size %lu for vm %u\n",
@@ -82,8 +75,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 			xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
 				   bo_size, NULL, 0);
 		}
-		exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci,
-					      to_user_pointer(&ext));
+		exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci, 0);
 	}
 
 	igt_info("Now sleeping for %ds.\n", SECONDS_TO_WAIT);
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 08/24] drm-uapi/xe: Use common drm_xe_ext_set_property extension
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (6 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 07/24] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 09/24] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Francois Dugast
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/uapi: Use common drm_xe_ext_set_property extension")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h     | 21 +++------------------
 tests/intel/xe_exec_reset.c   | 10 +++++-----
 tests/intel/xe_exec_threads.c |  4 ++--
 tests/intel/xe_vm.c           |  2 +-
 4 files changed, 11 insertions(+), 26 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index a9060bcf8..66acf49c4 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -572,12 +572,11 @@ struct drm_xe_vm_bind_op_error_capture {
 	__u64 size;
 };
 
-/** struct drm_xe_ext_vm_set_property - VM set property extension */
-struct drm_xe_ext_vm_set_property {
+/** struct drm_xe_ext_set_property - XE set property extension */
+struct drm_xe_ext_set_property {
 	/** @base: base user extension */
 	struct xe_user_extension base;
 
-#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS		0
 	/** @property: property to set */
 	__u32 property;
 
@@ -593,6 +592,7 @@ struct drm_xe_ext_vm_set_property {
 
 struct drm_xe_vm_create {
 #define XE_VM_EXTENSION_SET_PROPERTY	0
+#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS		0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -757,21 +757,6 @@ struct drm_xe_vm_bind {
 	__u64 reserved[2];
 };
 
-/** struct drm_xe_ext_exec_queue_set_property - exec queue set property extension */
-struct drm_xe_ext_exec_queue_set_property {
-	/** @base: base user extension */
-	struct xe_user_extension base;
-
-	/** @property: property to set */
-	__u32 property;
-
-	/** @pad: MBZ */
-	__u32 pad;
-
-	/** @value: property value */
-	__u64 value;
-};
-
 /**
  * struct drm_xe_exec_queue_set_property - exec queue set property
  *
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 68e17cc98..ca8d7cc13 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -185,13 +185,13 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property job_timeout = {
+		struct drm_xe_ext_set_property job_timeout = {
 			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
 			.value = 50,
 		};
-		struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -372,13 +372,13 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	data = xe_bo_map(fd, bo, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property job_timeout = {
+		struct drm_xe_ext_set_property job_timeout = {
 			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
 			.value = 50,
 		};
-		struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -540,7 +540,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	memset(data, 0, bo_size);
 
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 306d8113d..b22c9c052 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -518,7 +518,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 
 	memset(sync_all, 0, sizeof(sync_all));
 	for (i = 0; i < n_exec_queues; i++) {
-		struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
 			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -1054,7 +1054,7 @@ static void threads(int fd, int flags)
 	pthread_cond_init(&cond, 0);
 
 	if (flags & SHARED_VM) {
-		struct drm_xe_ext_vm_set_property ext = {
+		struct drm_xe_ext_set_property ext = {
 			.base.next_extension = 0,
 			.base.name = XE_VM_EXTENSION_SET_PROPERTY,
 			.property =
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index f96305851..75e7a384b 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -404,7 +404,7 @@ static void vm_async_ops_err(int fd, bool destroy)
 	};
 #define N_BINDS		32
 	struct drm_xe_vm_bind_op_error_capture capture = {};
-	struct drm_xe_ext_vm_set_property ext = {
+	struct drm_xe_ext_set_property ext = {
 		.base.next_extension = 0,
 		.base.name = XE_VM_EXTENSION_SET_PROPERTY,
 		.property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 09/24] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (7 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 08/24] drm-uapi/xe: Use common drm_xe_ext_set_property extension Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 10/24] xe: Update to new VM bind uAPI Francois Dugast
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h     | 23 +----------------------
 tests/intel/xe_exec_threads.c | 14 +-------------
 tests/intel/xe_vm.c           | 13 +------------
 3 files changed, 3 insertions(+), 47 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 66acf49c4..336b77074 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -555,23 +555,6 @@ struct drm_xe_gem_mmap_offset {
 	__u64 reserved[2];
 };
 
-/**
- * struct drm_xe_vm_bind_op_error_capture - format of VM bind op error capture
- */
-struct drm_xe_vm_bind_op_error_capture {
-	/** @error: errno that occurred */
-	__s32 error;
-
-	/** @op: operation that encounter an error */
-	__u32 op;
-
-	/** @addr: address of bind op */
-	__u64 addr;
-
-	/** @size: size of bind */
-	__u64 size;
-};
-
 /** struct drm_xe_ext_set_property - XE set property extension */
 struct drm_xe_ext_set_property {
 	/** @base: base user extension */
@@ -592,7 +575,6 @@ struct drm_xe_ext_set_property {
 
 struct drm_xe_vm_create {
 #define XE_VM_EXTENSION_SET_PROPERTY	0
-#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS		0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -677,10 +659,7 @@ struct drm_xe_vm_bind_op {
 	 * practice the bind op is good and will complete.
 	 *
 	 * If this flag is set and doesn't return an error, the bind op can
-	 * still fail and recovery is needed. If configured, the bind op that
-	 * caused the error will be captured in drm_xe_vm_bind_op_error_capture.
-	 * Once the user sees the error (via a ufence +
-	 * XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS), it should free memory
+	 * still fail and recovery is needed. It should free memory
 	 * via non-async unbinds, and then restart all queued async binds op via
 	 * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
 	 * VM.
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index b22c9c052..c9a51fc00 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -740,7 +740,6 @@ static void *thread(void *data)
 
 struct vm_thread_data {
 	pthread_t thread;
-	struct drm_xe_vm_bind_op_error_capture *capture;
 	int fd;
 	int vm;
 };
@@ -772,7 +771,6 @@ static void *vm_async_ops_err_thread(void *data)
 		/* Restart and wait for next error */
 		igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
 					&bind), 0);
-		args->capture->error = 0;
 		ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
 	}
 
@@ -1021,7 +1019,6 @@ static void threads(int fd, int flags)
 	int n_hw_engines = 0, class;
 	uint64_t i = 0;
 	uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
-	struct drm_xe_vm_bind_op_error_capture capture = {};
 	struct vm_thread_data vm_err_thread = {};
 	bool go = false;
 	int n_threads = 0;
@@ -1054,23 +1051,14 @@ static void threads(int fd, int flags)
 	pthread_cond_init(&cond, 0);
 
 	if (flags & SHARED_VM) {
-		struct drm_xe_ext_set_property ext = {
-			.base.next_extension = 0,
-			.base.name = XE_VM_EXTENSION_SET_PROPERTY,
-			.property =
-				XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
-			.value = to_user_pointer(&capture),
-		};
-
 		vm_legacy_mode = xe_vm_create(fd,
 					      DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
-					      to_user_pointer(&ext));
+					      0);
 		vm_compute_mode = xe_vm_create(fd,
 					       DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
 					       DRM_XE_VM_CREATE_COMPUTE_MODE,
 					       0);
 
-		vm_err_thread.capture = &capture;
 		vm_err_thread.fd = fd;
 		vm_err_thread.vm = vm_legacy_mode;
 		pthread_create(&vm_err_thread.thread, 0,
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 75e7a384b..89df6149a 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -324,7 +324,6 @@ static void userptr_invalid(int fd)
 
 struct vm_thread_data {
 	pthread_t thread;
-	struct drm_xe_vm_bind_op_error_capture *capture;
 	int fd;
 	int vm;
 	uint32_t bo;
@@ -388,7 +387,6 @@ static void *vm_async_ops_err_thread(void *data)
 		/* Restart and wait for next error */
 		igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
 					&bind), 0);
-		args->capture->error = 0;
 		ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
 	}
 
@@ -403,24 +401,15 @@ static void vm_async_ops_err(int fd, bool destroy)
 		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
 	};
 #define N_BINDS		32
-	struct drm_xe_vm_bind_op_error_capture capture = {};
-	struct drm_xe_ext_set_property ext = {
-		.base.next_extension = 0,
-		.base.name = XE_VM_EXTENSION_SET_PROPERTY,
-		.property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
-		.value = to_user_pointer(&capture),
-	};
 	struct vm_thread_data thread = {};
 	uint32_t syncobjs[N_BINDS];
 	size_t bo_size = 0x1000 * 32;
 	uint32_t bo;
 	int i, j;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
-			  to_user_pointer(&ext));
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
 	bo = xe_bo_create(fd, 0, vm, bo_size);
 
-	thread.capture = &capture;
 	thread.fd = fd;
 	thread.vm = vm;
 	thread.bo = bo;
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 10/24] xe: Update to new VM bind uAPI
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (8 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 09/24] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Matthew Brost <matthew.brost@intel.com>

Update for the sync vs. async VM bind uAPI changes and the new error handling.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[Rodrigo rebased and fixed conflicts]
---
 include/drm-uapi/xe_drm.h          |  50 ++------
 lib/igt_fb.c                       |   2 +-
 lib/intel_batchbuffer.c            |   2 +-
 lib/intel_compute.c                |   2 +-
 lib/xe/xe_ioctl.c                  |  15 +--
 lib/xe/xe_ioctl.h                  |   3 +-
 lib/xe/xe_query.c                  |   2 +-
 tests/intel/xe_ccs.c               |   4 +-
 tests/intel/xe_create.c            |   6 +-
 tests/intel/xe_drm_fdinfo.c        |   4 +-
 tests/intel/xe_evict.c             |  23 ++--
 tests/intel/xe_exec_balancer.c     |   6 +-
 tests/intel/xe_exec_basic.c        |   6 +-
 tests/intel/xe_exec_compute_mode.c |   6 +-
 tests/intel/xe_exec_fault_mode.c   |   6 +-
 tests/intel/xe_exec_reset.c        |   8 +-
 tests/intel/xe_exec_store.c        |   4 +-
 tests/intel/xe_exec_threads.c      | 112 +++++------------
 tests/intel/xe_exercise_blt.c      |   2 +-
 tests/intel/xe_guc_pc.c            |   2 +-
 tests/intel/xe_huc_copy.c          |   2 +-
 tests/intel/xe_intel_bb.c          |   2 +-
 tests/intel/xe_pm.c                |   2 +-
 tests/intel/xe_vm.c                | 189 ++---------------------------
 tests/intel/xe_waitfence.c         |  19 +--
 25 files changed, 102 insertions(+), 377 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 336b77074..13c693393 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -134,10 +134,11 @@ struct drm_xe_engine_class_instance {
 #define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE	3
 #define DRM_XE_ENGINE_CLASS_COMPUTE		4
 	/*
-	 * Kernel only class (not actual hardware engine class). Used for
+	 * Kernel only classes (not actual hardware engine class). Used for
 	 * creating ordered queues of VM bind operations.
 	 */
-#define DRM_XE_ENGINE_CLASS_VM_BIND		5
+#define DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC	5
+#define DRM_XE_ENGINE_CLASS_VM_BIND_SYNC	6
 	__u16 engine_class;
 
 	__u16 engine_instance;
@@ -580,7 +581,7 @@ struct drm_xe_vm_create {
 
 #define DRM_XE_VM_CREATE_SCRATCH_PAGE	(0x1 << 0)
 #define DRM_XE_VM_CREATE_COMPUTE_MODE	(0x1 << 1)
-#define DRM_XE_VM_CREATE_ASYNC_BIND_OPS	(0x1 << 2)
+#define DRM_XE_VM_CREATE_ASYNC_DEFAULT	(0x1 << 2)
 #define DRM_XE_VM_CREATE_FAULT_MODE	(0x1 << 3)
 	/** @flags: Flags */
 	__u32 flags;
@@ -640,34 +641,12 @@ struct drm_xe_vm_bind_op {
 #define XE_VM_BIND_OP_MAP		0x0
 #define XE_VM_BIND_OP_UNMAP		0x1
 #define XE_VM_BIND_OP_MAP_USERPTR	0x2
-#define XE_VM_BIND_OP_RESTART		0x3
-#define XE_VM_BIND_OP_UNMAP_ALL		0x4
-#define XE_VM_BIND_OP_PREFETCH		0x5
+#define XE_VM_BIND_OP_UNMAP_ALL		0x3
+#define XE_VM_BIND_OP_PREFETCH		0x4
 	/** @op: Bind operation to perform */
 	__u32 op;
 
 #define XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
-	/*
-	 * A bind ops completions are always async, hence the support for out
-	 * sync. This flag indicates the allocation of the memory for new page
-	 * tables and the job to program the pages tables is asynchronous
-	 * relative to the IOCTL. That part of a bind operation can fail under
-	 * memory pressure, the job in practice can't fail unless the system is
-	 * totally shot.
-	 *
-	 * If this flag is clear and the IOCTL doesn't return an error, in
-	 * practice the bind op is good and will complete.
-	 *
-	 * If this flag is set and doesn't return an error, the bind op can
-	 * still fail and recovery is needed. It should free memory
-	 * via non-async unbinds, and then restart all queued async binds op via
-	 * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
-	 * VM.
-	 *
-	 * This flag is only allowed when DRM_XE_VM_CREATE_ASYNC_BIND_OPS is
-	 * configured in the VM and must be set if the VM is configured with
-	 * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
-	 */
 #define XE_VM_BIND_FLAG_ASYNC		(0x1 << 1)
 	/*
 	 * Valid on a faulting VM only, do the MAP operation immediately rather
@@ -908,18 +887,10 @@ struct drm_xe_wait_user_fence {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-	union {
-		/**
-		 * @addr: user pointer address to wait on, must qword aligned
-		 */
-		__u64 addr;
-
-		/**
-		 * @vm_id: The ID of the VM which encounter an error used with
-		 * DRM_XE_UFENCE_WAIT_VM_ERROR. Upper 32 bits must be clear.
-		 */
-		__u64 vm_id;
-	};
+	/**
+	 * @addr: user pointer address to wait on, must qword aligned
+	 */
+	__u64 addr;
 
 #define DRM_XE_UFENCE_WAIT_EQ	0
 #define DRM_XE_UFENCE_WAIT_NEQ	1
@@ -932,7 +903,6 @@ struct drm_xe_wait_user_fence {
 
 #define DRM_XE_UFENCE_WAIT_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
 #define DRM_XE_UFENCE_WAIT_ABSTIME	(1 << 1)
-#define DRM_XE_UFENCE_WAIT_VM_ERROR	(1 << 2)
 	/** @flags: wait flags */
 	__u16 flags;
 
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index f0c0681ab..34934855a 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
 							  &bb_size,
 							  mem_region) == 0);
 	} else if (is_xe) {
-		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
 		xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
 		mem_region = vram_if_possible(dst_fb->fd, 0);
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 6e668d28c..df82ef5f5 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 
 		if (!vm) {
 			igt_assert_f(!ctx, "No vm provided for engine");
-			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		}
 
 		ibb->uses_full_ppgtt = true;
diff --git a/lib/intel_compute.c b/lib/intel_compute.c
index 0c30f39c1..1ae33cdfc 100644
--- a/lib/intel_compute.c
+++ b/lib/intel_compute.c
@@ -79,7 +79,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
 		else
 			engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
 
-		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
 								 engine_class);
 	}
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 48cd185de..895e3bd4e 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -201,16 +201,8 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
 static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 			      uint64_t addr, uint64_t size, uint32_t op)
 {
-	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
-		.handle = syncobj_create(fd, 0),
-	};
-
-	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
+	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, NULL, 0,
 			    0, 0);
-
-	igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
-	syncobj_destroy(fd, sync.handle);
 }
 
 void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
@@ -276,10 +268,11 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
 	return create.handle;
 }
 
-uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext)
+uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
 {
 	struct drm_xe_engine_class_instance instance = {
-		.engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
+		.engine_class = async ? DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC :
+			DRM_XE_ENGINE_CLASS_VM_BIND_SYNC,
 	};
 	struct drm_xe_exec_queue_create create = {
 		.extensions = ext,
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index f0e4109dc..a8dbcf376 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -71,7 +71,8 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
 uint32_t xe_exec_queue_create(int fd, uint32_t vm,
 			  struct drm_xe_engine_class_instance *instance,
 			  uint64_t ext);
-uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext);
+uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext,
+				   bool async);
 uint32_t xe_exec_queue_create_class(int fd, uint32_t vm, uint16_t class);
 void xe_exec_queue_destroy(int fd, uint32_t exec_queue);
 uint64_t xe_bo_mmap_offset(int fd, uint32_t bo);
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index c356abe1e..ab7b31188 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -316,7 +316,7 @@ bool xe_supports_faults(int fd)
 	bool supports_faults;
 
 	struct drm_xe_vm_create create = {
-		.flags = DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+		.flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			 DRM_XE_VM_CREATE_FAULT_MODE,
 	};
 
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index 20bbc4448..300b734c8 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -343,7 +343,7 @@ static void block_copy(int xe,
 		uint32_t vm, exec_queue;
 
 		if (config->new_ctx) {
-			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 			surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
 			surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
@@ -550,7 +550,7 @@ static void block_copy_test(int xe,
 				      copyfns[copy_function].suffix) {
 				uint32_t sync_bind, sync_out;
 
-				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 				exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 				sync_bind = syncobj_create(xe, 0);
 				sync_out = syncobj_create(xe, 0);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index 8d845e5c8..d99bd51cf 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
 	uint32_t handle;
 	int ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 
 	xe_for_each_mem_region(fd, memreg, region) {
 		memregion = xe_mem_region(fd, region);
@@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 
 	fd = drm_reopen_driver(fd);
 	num_engines = xe_number_hw_engines(fd);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 
 	exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
 	igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
@@ -199,7 +199,7 @@ static void create_massive_size(int fd)
 	uint32_t handle;
 	int ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 
 	xe_for_each_mem_region(fd, memreg, region) {
 		ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 22e410e14..64168ed19 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 	struct xe_spin_opts spin_opts = { .preempt = true };
 	int i, b, ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -90,7 +90,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 
 		for (i = 0; i < N_EXEC_QUEUES; i++) {
 			exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
-			bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
+			bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
 			syncobjs[i] = syncobj_create(fd, 0);
 		}
 		syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 5d8981f8d..eec001218 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -63,15 +63,17 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	if (flags & BIND_EXEC_QUEUE)
-		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
+		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
 	if (flags & MULTI_VM) {
-		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
-		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		if (flags & BIND_EXEC_QUEUE) {
-			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
-			bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3, 0);
+			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
+									0, true);
+			bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3,
+									0, true);
 		}
 	}
 
@@ -240,15 +242,16 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 	if (flags & BIND_EXEC_QUEUE)
-		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
+		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
 	if (flags & MULTI_VM) {
-		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 				   DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 		if (flags & BIND_EXEC_QUEUE)
-			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
+			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
+									0, true);
 	}
 
 	for (i = 0; i < n_exec_queues; i++) {
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index f4f5440f4..3ca3de881 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -433,7 +433,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index e29398aaa..8dbce524d 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
 
 	for (i = 0; i < n_vm; ++i)
-		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -151,7 +151,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 		exec_queues[i] = xe_exec_queue_create(fd, __vm, eci, 0);
 		if (flags & BIND_EXEC_QUEUE)
-			bind_exec_queues[i] = xe_bind_exec_queue_create(fd, __vm, 0);
+			bind_exec_queues[i] = xe_bind_exec_queue_create(fd,
+									__vm, 0,
+									true);
 		else
 			bind_exec_queues[i] = 0;
 		syncobjs[i] = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 02e7ef201..b0a677dca 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -113,7 +113,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
@@ -123,7 +123,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		if (flags & BIND_EXECQUEUE)
 			bind_exec_queues[i] =
-				xe_bind_exec_queue_create(fd, vm, 0);
+				xe_bind_exec_queue_create(fd, vm, 0, true);
 		else
 			bind_exec_queues[i] = 0;
 	};
@@ -151,7 +151,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		if (flags & BIND_EXECQUEUE)
 			bind_exec_queues[i] =
-				xe_bind_exec_queue_create(fd, vm, 0);
+				xe_bind_exec_queue_create(fd, vm, 0, true);
 		else
 			bind_exec_queues[i] = 0;
 	};
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index c5d6bdcd5..92d8690a1 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -131,7 +131,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			  DRM_XE_VM_CREATE_FAULT_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
@@ -165,7 +165,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		if (flags & BIND_EXEC_QUEUE)
 			bind_exec_queues[i] =
-				xe_bind_exec_queue_create(fd, vm, 0);
+				xe_bind_exec_queue_create(fd, vm, 0, true);
 		else
 			bind_exec_queues[i] = 0;
 	};
@@ -375,7 +375,7 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t *ptr;
 	int i, b, wait_idx = 0;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			  DRM_XE_VM_CREATE_FAULT_MODE, 0);
 	bo_size = sizeof(*data) * n_atomic;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index ca8d7cc13..44248776b 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	struct xe_spin *spin;
 	struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*spin);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -528,7 +528,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 14f7c9bec..90684b8cb 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -75,7 +75,7 @@ static void store(int fd)
 	syncobj = syncobj_create(fd, 0);
 	sync.handle = syncobj;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -132,7 +132,7 @@ static void store_all(int fd, int gt, int class)
 	struct drm_xe_engine_class_instance *hwe;
 	int i, num_placements = 0;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index c9a51fc00..bb16bdd88 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		owns_vm = true;
 	}
 
@@ -285,7 +285,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 				  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
 		owns_vm = true;
 	}
@@ -454,7 +454,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 static void
 test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		 struct drm_xe_engine_class_instance *eci, int n_exec_queues,
-		 int n_execs, int rebind_error_inject, unsigned int flags)
+		 int n_execs, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
 		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
@@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		owns_vm = true;
 	}
 
@@ -531,7 +531,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		else
 			exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		if (flags & BIND_EXEC_QUEUE)
-			bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
+			bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm,
+									0, true);
 		else
 			bind_exec_queues[i] = 0;
 		syncobjs[i] = syncobj_create(fd, 0);
@@ -583,8 +584,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		exec.address = exec_addr;
 		if (e != i && !(flags & HANG))
 			 syncobj_reset(fd, &syncobjs[e], 1);
-		if ((flags & HANG && e == hang_exec_queue) ||
-		    rebind_error_inject > 0) {
+		if ((flags & HANG && e == hang_exec_queue)) {
 			int err;
 
 			do {
@@ -594,20 +594,10 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			xe_exec(fd, &exec);
 		}
 
-		if (flags & REBIND && i &&
-		    (!(i & 0x1f) || rebind_error_inject == i)) {
-#define INJECT_ERROR	(0x1 << 31)
-			if (rebind_error_inject == i)
-				__xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
-						    0, 0, addr, bo_size,
-						    XE_VM_BIND_OP_UNMAP,
-						    XE_VM_BIND_FLAG_ASYNC |
-						    INJECT_ERROR, sync_all,
-						    n_exec_queues, 0, 0);
-			else
-				xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
-						   0, addr, bo_size,
-						   sync_all, n_exec_queues);
+		if (flags & REBIND && i && !(i & 0x1f)) {
+			xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
+					   0, addr, bo_size,
+					   sync_all, n_exec_queues);
 
 			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
 			addr += bo_size;
@@ -709,7 +699,6 @@ struct thread_data {
 	int n_exec_queue;
 	int n_exec;
 	int flags;
-	int rebind_error_inject;
 	bool *go;
 };
 
@@ -733,46 +722,7 @@ static void *thread(void *data)
 	else
 		test_legacy_mode(t->fd, t->vm_legacy_mode, t->addr, t->userptr,
 				 t->eci, t->n_exec_queue, t->n_exec,
-				 t->rebind_error_inject, t->flags);
-
-	return NULL;
-}
-
-struct vm_thread_data {
-	pthread_t thread;
-	int fd;
-	int vm;
-};
-
-static void *vm_async_ops_err_thread(void *data)
-{
-	struct vm_thread_data *args = data;
-	int fd = args->fd;
-	int ret;
-
-	struct drm_xe_wait_user_fence wait = {
-		.vm_id = args->vm,
-		.op = DRM_XE_UFENCE_WAIT_NEQ,
-		.flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
-		.mask = DRM_XE_UFENCE_WAIT_U32,
-#define BASICALLY_FOREVER	0xffffffffffff
-		.timeout = BASICALLY_FOREVER,
-	};
-
-	ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
-
-	while (!ret) {
-		struct drm_xe_vm_bind bind = {
-			.vm_id = args->vm,
-			.num_binds = 1,
-			.bind.op = XE_VM_BIND_OP_RESTART,
-		};
-
-		/* Restart and wait for next error */
-		igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
-					&bind), 0);
-		ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
-	}
+				 t->flags);
 
 	return NULL;
 }
@@ -826,6 +776,10 @@ static void *vm_async_ops_err_thread(void *data)
  *	shared vm rebind err
  * @shared-vm-userptr-rebind-err:
  *	shared vm userptr rebind err
+ * @rebind-err:
+ *	rebind err
+ * @userptr-rebind-err:
+ *	userptr rebind err
  * @shared-vm-userptr-invalidate:
  *	shared vm userptr invalidate
  * @shared-vm-userptr-invalidate-race:
@@ -842,7 +796,7 @@ static void *vm_async_ops_err_thread(void *data)
  *	fd userptr invalidate race
  * @hang-basic:
  *	hang basic
-  * @hang-userptr:
+ * @hang-userptr:
  *	hang userptr
  * @hang-rebind:
  *	hang rebind
@@ -864,6 +818,10 @@ static void *vm_async_ops_err_thread(void *data)
  *	hang shared vm rebind err
  * @hang-shared-vm-userptr-rebind-err:
  *	hang shared vm userptr rebind err
+ * @hang-rebind-err:
+ *	hang rebind err
+ * @hang-userptr-rebind-err:
+ *	hang userptr rebind err
  * @hang-shared-vm-userptr-invalidate:
  *	hang shared vm userptr invalidate
  * @hang-shared-vm-userptr-invalidate-race:
@@ -1019,7 +977,6 @@ static void threads(int fd, int flags)
 	int n_hw_engines = 0, class;
 	uint64_t i = 0;
 	uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
-	struct vm_thread_data vm_err_thread = {};
 	bool go = false;
 	int n_threads = 0;
 	int gt;
@@ -1052,18 +1009,12 @@ static void threads(int fd, int flags)
 
 	if (flags & SHARED_VM) {
 		vm_legacy_mode = xe_vm_create(fd,
-					      DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
+					      DRM_XE_VM_CREATE_ASYNC_DEFAULT,
 					      0);
 		vm_compute_mode = xe_vm_create(fd,
-					       DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+					       DRM_XE_VM_CREATE_ASYNC_DEFAULT |
 					       DRM_XE_VM_CREATE_COMPUTE_MODE,
 					       0);
-
-		vm_err_thread.fd = fd;
-		vm_err_thread.vm = vm_legacy_mode;
-		pthread_create(&vm_err_thread.thread, 0,
-			       vm_async_ops_err_thread, &vm_err_thread);
-
 	}
 
 	xe_for_each_hw_engine(fd, hwe) {
@@ -1083,11 +1034,6 @@ static void threads(int fd, int flags)
 		threads_data[i].n_exec_queue = N_EXEC_QUEUE;
 #define N_EXEC		1024
 		threads_data[i].n_exec = N_EXEC;
-		if (flags & REBIND_ERROR)
-			threads_data[i].rebind_error_inject =
-				(N_EXEC / (n_hw_engines + 1)) * (i + 1);
-		else
-			threads_data[i].rebind_error_inject = -1;
 		threads_data[i].flags = flags;
 		if (flags & MIXED_MODE) {
 			threads_data[i].flags &= ~MIXED_MODE;
@@ -1190,8 +1136,6 @@ static void threads(int fd, int flags)
 	if (vm_compute_mode)
 		xe_vm_destroy(fd, vm_compute_mode);
 	free(threads_data);
-	if (flags & SHARED_VM)
-		pthread_join(vm_err_thread.thread, NULL);
 	pthread_barrier_destroy(&barrier);
 }
 
@@ -1214,9 +1158,8 @@ igt_main
 		{ "shared-vm-rebind-bindexecqueue", SHARED_VM | REBIND |
 			BIND_EXEC_QUEUE },
 		{ "shared-vm-userptr-rebind", SHARED_VM | USERPTR | REBIND },
-		{ "shared-vm-rebind-err", SHARED_VM | REBIND | REBIND_ERROR },
-		{ "shared-vm-userptr-rebind-err", SHARED_VM | USERPTR |
-			REBIND | REBIND_ERROR},
+		{ "rebind-err", REBIND | REBIND_ERROR },
+		{ "userptr-rebind-err", USERPTR | REBIND | REBIND_ERROR},
 		{ "shared-vm-userptr-invalidate", SHARED_VM | USERPTR |
 			INVALIDATE },
 		{ "shared-vm-userptr-invalidate-race", SHARED_VM | USERPTR |
@@ -1240,10 +1183,9 @@ igt_main
 		{ "hang-shared-vm-rebind", HANG | SHARED_VM | REBIND },
 		{ "hang-shared-vm-userptr-rebind", HANG | SHARED_VM | USERPTR |
 			REBIND },
-		{ "hang-shared-vm-rebind-err", HANG | SHARED_VM | REBIND |
+		{ "hang-rebind-err", HANG | REBIND | REBIND_ERROR },
+		{ "hang-userptr-rebind-err", HANG | USERPTR | REBIND |
 			REBIND_ERROR },
-		{ "hang-shared-vm-userptr-rebind-err", HANG | SHARED_VM |
-			USERPTR | REBIND | REBIND_ERROR },
 		{ "hang-shared-vm-userptr-invalidate", HANG | SHARED_VM |
 			USERPTR | INVALIDATE },
 		{ "hang-shared-vm-userptr-invalidate-race", HANG | SHARED_VM |
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index ca85f5f18..2f349b16d 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
 			region1 = igt_collection_get_value(regions, 0);
 			region2 = igt_collection_get_value(regions, 1);
 
-			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 			ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
 
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 0327d8e0e..3f2c4ae23 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 	igt_assert(n_execs > 0);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
index c9891a729..c71ff74a1 100644
--- a/tests/intel/xe_huc_copy.c
+++ b/tests/intel/xe_huc_copy.c
@@ -117,7 +117,7 @@ test_huc_copy(int fd)
 		{ .addr = ADDR_BATCH, .size = SIZE_BATCH }, // batch
 	};
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_VIDEO_DECODE);
 	sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
 	sync.handle = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index 0159a3164..26e4dcc85 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
 	intel_bb_reset(ibb, true);
 
 	if (new_context) {
-		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 		ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index fd28d5630..b2976ec84 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	if (check_rpm)
 		igt_assert(in_d3(device, d_state));
 
-	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 
 	if (check_rpm)
 		igt_assert(out_of_d3(device, d_state));
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 89df6149a..dd3302337 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -275,7 +275,7 @@ static void unbind_all(int fd, int n_vmas)
 		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
 	};
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo = xe_bo_create(fd, 0, vm, bo_size);
 
 	for (i = 0; i < n_vmas; ++i)
@@ -322,171 +322,6 @@ static void userptr_invalid(int fd)
 	xe_vm_destroy(fd, vm);
 }
 
-struct vm_thread_data {
-	pthread_t thread;
-	int fd;
-	int vm;
-	uint32_t bo;
-	size_t bo_size;
-	bool destroy;
-};
-
-/**
- * SUBTEST: vm-async-ops-err
- * Description: Test VM async ops error
- * Functionality: VM
- * Test category: negative test
- *
- * SUBTEST: vm-async-ops-err-destroy
- * Description: Test VM async ops error destroy
- * Functionality: VM
- * Test category: negative test
- */
-
-static void *vm_async_ops_err_thread(void *data)
-{
-	struct vm_thread_data *args = data;
-	int fd = args->fd;
-	uint64_t addr = 0x201a0000;
-	int num_binds = 0;
-	int ret;
-
-	struct drm_xe_wait_user_fence wait = {
-		.vm_id = args->vm,
-		.op = DRM_XE_UFENCE_WAIT_NEQ,
-		.flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
-		.mask = DRM_XE_UFENCE_WAIT_U32,
-		.timeout = MS_TO_NS(1000),
-	};
-
-	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE,
-				&wait), 0);
-	if (args->destroy) {
-		usleep(5000);	/* Wait other binds to queue up */
-		xe_vm_destroy(fd, args->vm);
-		return NULL;
-	}
-
-	while (!ret) {
-		struct drm_xe_vm_bind bind = {
-			.vm_id = args->vm,
-			.num_binds = 1,
-			.bind.op = XE_VM_BIND_OP_RESTART,
-		};
-
-		/* VM sync ops should work */
-		if (!(num_binds++ % 2)) {
-			xe_vm_bind_sync(fd, args->vm, args->bo, 0, addr,
-					args->bo_size);
-		} else {
-			xe_vm_unbind_sync(fd, args->vm, 0, addr,
-					  args->bo_size);
-			addr += args->bo_size * 2;
-		}
-
-		/* Restart and wait for next error */
-		igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
-					&bind), 0);
-		ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
-	}
-
-	return NULL;
-}
-
-static void vm_async_ops_err(int fd, bool destroy)
-{
-	uint32_t vm;
-	uint64_t addr = 0x1a0000;
-	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
-	};
-#define N_BINDS		32
-	struct vm_thread_data thread = {};
-	uint32_t syncobjs[N_BINDS];
-	size_t bo_size = 0x1000 * 32;
-	uint32_t bo;
-	int i, j;
-
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
-	bo = xe_bo_create(fd, 0, vm, bo_size);
-
-	thread.fd = fd;
-	thread.vm = vm;
-	thread.bo = bo;
-	thread.bo_size = bo_size;
-	thread.destroy = destroy;
-	pthread_create(&thread.thread, 0, vm_async_ops_err_thread, &thread);
-
-	for (i = 0; i < N_BINDS; i++)
-		syncobjs[i] = syncobj_create(fd, 0);
-
-	for (j = 0, i = 0; i < N_BINDS / 4; i++, j++) {
-		sync.handle = syncobjs[j];
-#define INJECT_ERROR	(0x1 << 31)
-		if (i == N_BINDS / 8)	/* Inject error on this bind */
-			__xe_vm_bind_assert(fd, vm, 0, bo, 0,
-					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_MAP,
-					    XE_VM_BIND_FLAG_ASYNC |
-					    INJECT_ERROR, &sync, 1, 0, 0);
-		else
-			xe_vm_bind_async(fd, vm, 0, bo, 0,
-					 addr + i * bo_size * 2,
-					 bo_size, &sync, 1);
-	}
-
-	for (i = 0; i < N_BINDS / 4; i++, j++) {
-		sync.handle = syncobjs[j];
-		if (i == N_BINDS / 8)
-			__xe_vm_bind_assert(fd, vm, 0, 0, 0,
-					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_UNMAP,
-					    XE_VM_BIND_FLAG_ASYNC |
-					    INJECT_ERROR, &sync, 1, 0, 0);
-		else
-			xe_vm_unbind_async(fd, vm, 0, 0,
-					   addr + i * bo_size * 2,
-					   bo_size, &sync, 1);
-	}
-
-	for (i = 0; i < N_BINDS / 4; i++, j++) {
-		sync.handle = syncobjs[j];
-		if (i == N_BINDS / 8)
-			__xe_vm_bind_assert(fd, vm, 0, bo, 0,
-					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_MAP,
-					    XE_VM_BIND_FLAG_ASYNC |
-					    INJECT_ERROR, &sync, 1, 0, 0);
-		else
-			xe_vm_bind_async(fd, vm, 0, bo, 0,
-					 addr + i * bo_size * 2,
-					 bo_size, &sync, 1);
-	}
-
-	for (i = 0; i < N_BINDS / 4; i++, j++) {
-		sync.handle = syncobjs[j];
-		if (i == N_BINDS / 8)
-			__xe_vm_bind_assert(fd, vm, 0, 0, 0,
-					    addr + i * bo_size * 2,
-					    bo_size, XE_VM_BIND_OP_UNMAP,
-					    XE_VM_BIND_FLAG_ASYNC |
-					    INJECT_ERROR, &sync, 1, 0, 0);
-		else
-			xe_vm_unbind_async(fd, vm, 0, 0,
-					   addr + i * bo_size * 2,
-					   bo_size, &sync, 1);
-	}
-
-	for (i = 0; i < N_BINDS; i++)
-		igt_assert(syncobj_wait(fd, &syncobjs[i], 1, INT64_MAX, 0,
-					NULL));
-
-	if (!destroy)
-		xe_vm_destroy(fd, vm);
-
-	pthread_join(thread.thread, NULL);
-}
-
 /**
  * SUBTEST: shared-%s-page
  * Description: Test shared arg[1] page
@@ -537,7 +372,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	data = malloc(sizeof(*data) * n_bo);
 	igt_assert(data);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(struct shared_pte_page_data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -718,7 +553,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	struct xe_spin_opts spin_opts = { .preempt = true };
 	int i, b;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -728,7 +563,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 
 	for (i = 0; i < N_EXEC_QUEUES; i++) {
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
-		bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
+		bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
 		syncobjs[i] = syncobj_create(fd, 0);
 	}
 	syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
@@ -898,7 +733,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 
 	igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -908,7 +743,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 	data = xe_bo_map(fd, bo, bo_size);
 
 	if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
-		bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0);
+		bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0, true);
 	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
 
 	for (i = 0; i < n_execs; ++i) {
@@ -1092,7 +927,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 	}
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 
 	if (flags & LARGE_BIND_FLAG_USERPTR) {
 		map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
@@ -1384,7 +1219,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 			unbind_n_page_offset *= n_page_per_2mb;
 	}
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = page_size * bo_n_pages;
 
 	if (flags & MAP_FLAG_USERPTR) {
@@ -1684,7 +1519,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 			unbind_n_page_offset *= n_page_per_2mb;
 	}
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_size = page_size * bo_n_pages;
 
 	if (flags & MAP_FLAG_USERPTR) {
@@ -2001,12 +1836,6 @@ igt_main
 	igt_subtest("userptr-invalid")
 		userptr_invalid(fd);
 
-	igt_subtest("vm-async-ops-err")
-		vm_async_ops_err(fd, false);
-
-	igt_subtest("vm-async-ops-err-destroy")
-		vm_async_ops_err(fd, true);
-
 	igt_subtest("shared-pte-page")
 		xe_for_each_hw_engine(fd, hwe)
 			shared_pte_page(fd, hwe, 4,
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index 34005fbeb..e0116f181 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -34,7 +34,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 
 	sync[0].addr = to_user_pointer(&wait_fence);
 	sync[0].timeline_value = val;
-	xe_vm_bind(fd, vm, bo, offset, addr, size, sync, 1);
+	xe_vm_bind_async(fd, vm, 0, bo, offset, addr, size, sync, 1);
 }
 
 enum waittype {
@@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
 	uint32_t bo_7;
 	int64_t timeout;
 
-	uint32_t vm = xe_vm_create(fd, 0, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
 	bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
 	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
 	bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
@@ -96,21 +96,6 @@ waitfence(int fd, enum waittype wt)
 			  ", elapsed: %" PRId64 "\n",
 			  timeout, signalled, signalled - current);
 	}
-
-	xe_vm_unbind_sync(fd, vm, 0, 0x200000, 0x40000);
-	xe_vm_unbind_sync(fd, vm, 0, 0xc0000000, 0x40000);
-	xe_vm_unbind_sync(fd, vm, 0, 0x180000000, 0x40000);
-	xe_vm_unbind_sync(fd, vm, 0, 0x140000000, 0x10000);
-	xe_vm_unbind_sync(fd, vm, 0, 0x100000000, 0x100000);
-	xe_vm_unbind_sync(fd, vm, 0, 0xc0040000, 0x1c0000);
-	xe_vm_unbind_sync(fd, vm, 0, 0xeffff0000, 0x10000);
-	gem_close(fd, bo_7);
-	gem_close(fd, bo_6);
-	gem_close(fd, bo_5);
-	gem_close(fd, bo_4);
-	gem_close(fd, bo_3);
-	gem_close(fd, bo_2);
-	gem_close(fd, bo_1);
 }
 
 igt_main
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (9 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 10/24] xe: Update to new VM bind uAPI Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 16:47   ` Tvrtko Ursulin
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 12/24] drm-uapi/xe: Remove unused field of drm_xe_query_gt Francois Dugast
                   ` (15 subsequent siblings)
  26 siblings, 1 reply; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
 tests/intel/xe_query.c    |  2 +-
 2 files changed, 44 insertions(+), 23 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 13c693393..68cc5e051 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -337,6 +337,47 @@ struct drm_xe_query_config {
 	__u64 info[];
 };
 
+/**
+ * struct drm_xe_query_gt - describe an individual GT.
+ *
+ * To be used with drm_xe_query_gts, which will return a list with all the
+ * existing GT individual descriptions.
+ * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
+ * implementing graphics and/or media operations.
+ */
+struct drm_xe_query_gt {
+#define XE_QUERY_GT_TYPE_MAIN		0
+#define XE_QUERY_GT_TYPE_REMOTE		1
+#define XE_QUERY_GT_TYPE_MEDIA		2
+	/** @type: GT type: Main, Remote, or Media */
+	__u16 type;
+	/** @gt_id: Unique ID of this GT within the PCI Device */
+	__u16 gt_id;
+	/** @clock_freq: A clock frequency for timestamp */
+	__u32 clock_freq;
+	/** @features: Reserved for future information about GT features */
+	__u64 features;
+	/**
+	 * @native_mem_regions: Bit mask of instances from
+	 * drm_xe_query_mem_usage that live on the same GPU/Tile and have
+	 * direct access.
+	 */
+	__u64 native_mem_regions;
+	/**
+	 * @slow_mem_regions: Bit mask of instances from
+	 * drm_xe_query_mem_usage that this GT can indirectly access, although
+	 * they live on a different GPU/Tile.
+	 */
+	__u64 slow_mem_regions;
+	/**
+	 * @inaccessible_mem_regions: Bit mask of instances from
+	 * drm_xe_query_mem_usage that are not accessible by this GT at all.
+	 */
+	__u64 inaccessible_mem_regions;
+	/** @reserved: Reserved */
+	__u64 reserved[8];
+};
+
 /**
  * struct drm_xe_query_gts - describe GTs
  *
@@ -347,30 +388,10 @@ struct drm_xe_query_config {
 struct drm_xe_query_gts {
 	/** @num_gt: number of GTs returned in gts */
 	__u32 num_gt;
-
 	/** @pad: MBZ */
 	__u32 pad;
-
-	/**
-	 * @gts: The GTs returned for this device
-	 *
-	 * TODO: convert drm_xe_query_gt to proper kernel-doc.
-	 * TODO: Perhaps info about every mem region relative to this GT? e.g.
-	 * bandwidth between this GT and remote region?
-	 */
-	struct drm_xe_query_gt {
-#define XE_QUERY_GT_TYPE_MAIN		0
-#define XE_QUERY_GT_TYPE_REMOTE		1
-#define XE_QUERY_GT_TYPE_MEDIA		2
-		__u16 type;
-		__u16 instance;
-		__u32 clock_freq;
-		__u64 features;
-		__u64 native_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
-		__u64 slow_mem_regions;		/* bit mask of instances from drm_xe_query_mem_usage */
-		__u64 inaccessible_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
-		__u64 reserved[8];
-	} gts[];
+	/** @gts: The GT list returned for this device */
+	struct drm_xe_query_gt gts[];
 };
 
 /**
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index acf069f46..eb8d52897 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -279,7 +279,7 @@ test_query_gts(int fd)
 
 	for (i = 0; i < gts->num_gt; i++) {
 		igt_info("type: %d\n", gts->gts[i].type);
-		igt_info("instance: %d\n", gts->gts[i].instance);
+		igt_info("gt_id: %d\n", gts->gts[i].gt_id);
 		igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
 		igt_info("features: 0x%016llx\n", gts->gts[i].features);
 		igt_info("native_mem_regions: 0x%016llx\n",
-- 
2.34.1


* [igt-dev] [PATCH v3 12/24] drm-uapi/xe: Remove unused field of drm_xe_query_gt
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (10 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 13/24] drm-uapi/xe: Rename gts to gt_list Francois Dugast
                   ` (14 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/uapi: Remove unused field of drm_xe_query_gt")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 2 --
 tests/intel/xe_query.c    | 1 -
 2 files changed, 3 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 68cc5e051..a6773c244 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -355,8 +355,6 @@ struct drm_xe_query_gt {
 	__u16 gt_id;
 	/** @clock_freq: A clock frequency for timestamp */
 	__u32 clock_freq;
-	/** @features: Reserved for future information about GT features */
-	__u64 features;
 	/**
 	 * @native_mem_regions: Bit mask of instances from
 	 * drm_xe_query_mem_usage that live on the same GPU/Tile and have
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index eb8d52897..3aa2918f0 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -281,7 +281,6 @@ test_query_gts(int fd)
 		igt_info("type: %d\n", gts->gts[i].type);
 		igt_info("gt_id: %d\n", gts->gts[i].gt_id);
 		igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
-		igt_info("features: 0x%016llx\n", gts->gts[i].features);
 		igt_info("native_mem_regions: 0x%016llx\n",
 		       gts->gts[i].native_mem_regions);
 		igt_info("slow_mem_regions: 0x%016llx\n",
-- 
2.34.1


* [igt-dev] [PATCH v3 13/24] drm-uapi/xe: Rename gts to gt_list
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (11 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 12/24] drm-uapi/xe: Remove unused field of drm_xe_query_gt Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 14/24] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Francois Dugast
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/uapi: Rename gts to gt_list")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h                | 18 ++++----
 lib/xe/xe_query.c                        | 52 ++++++++++++------------
 lib/xe/xe_query.h                        | 10 ++---
 lib/xe/xe_spin.c                         |  6 +--
 tests/intel-ci/xe-fast-feedback.testlist |  2 +-
 tests/intel/xe_query.c                   | 36 ++++++++--------
 6 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index a6773c244..a0310dbe0 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -340,7 +340,7 @@ struct drm_xe_query_config {
 /**
  * struct drm_xe_query_gt - describe an individual GT.
  *
- * To be used with drm_xe_query_gts, which will return a list with all the
+ * To be used with drm_xe_query_gt_list, which will return a list with all the
  * existing GT individual descriptions.
  * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
  * implementing graphics and/or media operations.
@@ -377,19 +377,19 @@ struct drm_xe_query_gt {
 };
 
 /**
- * struct drm_xe_query_gts - describe GTs
+ * struct drm_xe_query_gt_list - A list with GT description items.
  *
  * If a query is made with a struct drm_xe_device_query where .query
- * is equal to DRM_XE_DEVICE_QUERY_GTS, then the reply uses struct
- * drm_xe_query_gts in .data.
+ * is equal to DRM_XE_DEVICE_QUERY_GT_LIST, then the reply uses struct
+ * drm_xe_query_gt_list in .data.
  */
-struct drm_xe_query_gts {
-	/** @num_gt: number of GTs returned in gts */
+struct drm_xe_query_gt_list {
+	/** @num_gt: number of GT items returned in gt_list */
 	__u32 num_gt;
 	/** @pad: MBZ */
 	__u32 pad;
-	/** @gts: The GT list returned for this device */
-	struct drm_xe_query_gt gts[];
+	/** @gt_list: The GT list returned for this device */
+	struct drm_xe_query_gt gt_list[];
 };
 
 /**
@@ -482,7 +482,7 @@ struct drm_xe_device_query {
 #define DRM_XE_DEVICE_QUERY_ENGINES	0
 #define DRM_XE_DEVICE_QUERY_MEM_USAGE	1
 #define DRM_XE_DEVICE_QUERY_CONFIG	2
-#define DRM_XE_DEVICE_QUERY_GTS		3
+#define DRM_XE_DEVICE_QUERY_GT_LIST	3
 #define DRM_XE_DEVICE_QUERY_HWCONFIG	4
 #define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY	5
 #define DRM_XE_QUERY_CS_CYCLES		6
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index ab7b31188..986a3a0c1 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -39,35 +39,35 @@ static struct drm_xe_query_config *xe_query_config_new(int fd)
 	return config;
 }
 
-static struct drm_xe_query_gts *xe_query_gts_new(int fd)
+static struct drm_xe_query_gt_list *xe_query_gt_list_new(int fd)
 {
-	struct drm_xe_query_gts *gts;
+	struct drm_xe_query_gt_list *gt_list;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_GTS,
+		.query = DRM_XE_DEVICE_QUERY_GT_LIST,
 		.size = 0,
 		.data = 0,
 	};
 
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	gts = malloc(query.size);
-	igt_assert(gts);
+	gt_list = malloc(query.size);
+	igt_assert(gt_list);
 
-	query.data = to_user_pointer(gts);
+	query.data = to_user_pointer(gt_list);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	return gts;
+	return gt_list;
 }
 
-static uint64_t __memory_regions(const struct drm_xe_query_gts *gts)
+static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
 {
 	uint64_t regions = 0;
 	int i;
 
-	for (i = 0; i < gts->num_gt; i++)
-		regions |= gts->gts[i].native_mem_regions |
-			   gts->gts[i].slow_mem_regions;
+	for (i = 0; i < gt_list->num_gt; i++)
+		regions |= gt_list->gt_list[i].native_mem_regions |
+			   gt_list->gt_list[i].slow_mem_regions;
 
 	return regions;
 }
@@ -118,21 +118,21 @@ static struct drm_xe_query_mem_usage *xe_query_mem_usage_new(int fd)
 	return mem_usage;
 }
 
-static uint64_t native_region_for_gt(const struct drm_xe_query_gts *gts, int gt)
+static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
 {
 	uint64_t region;
 
-	igt_assert(gts->num_gt > gt);
-	region = gts->gts[gt].native_mem_regions;
+	igt_assert(gt_list->num_gt > gt);
+	region = gt_list->gt_list[gt].native_mem_regions;
 	igt_assert(region);
 
 	return region;
 }
 
 static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
-			     const struct drm_xe_query_gts *gts, int gt)
+			     const struct drm_xe_query_gt_list *gt_list, int gt)
 {
-	int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
+	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
 	if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
 		return mem_usage->regions[region_idx].total_size;
@@ -141,9 +141,9 @@ static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
 }
 
 static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
-				     const struct drm_xe_query_gts *gts, int gt)
+				     const struct drm_xe_query_gt_list *gt_list, int gt)
 {
-	int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
+	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
 	if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
 		return mem_usage->regions[region_idx].cpu_visible_size;
@@ -220,7 +220,7 @@ static struct xe_device *find_in_cache(int fd)
 static void xe_device_free(struct xe_device *xe_dev)
 {
 	free(xe_dev->config);
-	free(xe_dev->gts);
+	free(xe_dev->gt_list);
 	free(xe_dev->hw_engines);
 	free(xe_dev->mem_usage);
 	free(xe_dev->vram_size);
@@ -252,18 +252,18 @@ struct xe_device *xe_device_get(int fd)
 	xe_dev->number_gt = xe_dev->config->info[XE_QUERY_CONFIG_GT_COUNT];
 	xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
 	xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
-	xe_dev->gts = xe_query_gts_new(fd);
-	xe_dev->memory_regions = __memory_regions(xe_dev->gts);
+	xe_dev->gt_list = xe_query_gt_list_new(fd);
+	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
 	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
 	xe_dev->mem_usage = xe_query_mem_usage_new(fd);
 	xe_dev->vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->vram_size));
 	xe_dev->visible_vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->visible_vram_size));
 	for (int gt = 0; gt < xe_dev->number_gt; gt++) {
 		xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_usage,
-						     xe_dev->gts, gt);
+						     xe_dev->gt_list, gt);
 		xe_dev->visible_vram_size[gt] =
 			gt_visible_vram_size(xe_dev->mem_usage,
-					     xe_dev->gts, gt);
+					     xe_dev->gt_list, gt);
 	}
 	xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_usage);
 	xe_dev->has_vram = __mem_has_vram(xe_dev->mem_usage);
@@ -356,7 +356,7 @@ _TYPE _NAME(int fd)			\
  * xe_number_gt:
  * @fd: xe device fd
  *
- * Return number of gts for xe device fd.
+ * Return number of gt_list for xe device fd.
  */
 xe_dev_FN(xe_number_gt, number_gt, unsigned int);
 
@@ -396,7 +396,7 @@ uint64_t vram_memory(int fd, int gt)
 	igt_assert(xe_dev);
 	igt_assert(gt >= 0 && gt < xe_dev->number_gt);
 
-	return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gts, gt) : 0;
+	return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gt_list, gt) : 0;
 }
 
 static uint64_t __xe_visible_vram_size(int fd, int gt)
@@ -647,7 +647,7 @@ uint64_t xe_vram_available(int fd, int gt)
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
 
-	region_idx = ffs(native_region_for_gt(xe_dev->gts, gt)) - 1;
+	region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
 	mem_region = &xe_dev->mem_usage->regions[region_idx];
 
 	if (XE_IS_CLASS_VRAM(mem_region)) {
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 20dbfa12c..da7deaf4c 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -26,13 +26,13 @@ struct xe_device {
 	/** @config: xe configuration */
 	struct drm_xe_query_config *config;
 
-	/** @gts: gt info */
-	struct drm_xe_query_gts *gts;
+	/** @gt_list: gt info */
+	struct drm_xe_query_gt_list *gt_list;
 
 	/** @number_gt: number of gt */
 	unsigned int number_gt;
 
-	/** @gts: bitmask of all memory regions */
+	/** @gt_list: bitmask of all memory regions */
 	uint64_t memory_regions;
 
 	/** @hw_engines: array of hardware engines */
@@ -44,10 +44,10 @@ struct xe_device {
 	/** @mem_usage: regions memory information and usage */
 	struct drm_xe_query_mem_usage *mem_usage;
 
-	/** @vram_size: array of vram sizes for all gts */
+	/** @vram_size: array of vram sizes for all gt_list */
 	uint64_t *vram_size;
 
-	/** @visible_vram_size: array of visible vram sizes for all gts */
+	/** @visible_vram_size: array of visible vram sizes for all gt_list */
 	uint64_t *visible_vram_size;
 
 	/** @default_alignment: safe alignment regardless region location */
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index f0d77aed3..b05b38829 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -20,10 +20,10 @@ static uint32_t read_timestamp_frequency(int fd, int gt_id)
 {
 	struct xe_device *dev = xe_device_get(fd);
 
-	igt_assert(dev && dev->gts && dev->gts->num_gt);
-	igt_assert(gt_id >= 0 && gt_id <= dev->gts->num_gt);
+	igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
+	igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
 
-	return dev->gts->gts[gt_id].clock_freq;
+	return dev->gt_list->gt_list[gt_id].clock_freq;
 }
 
 static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index a9fe43b08..0cf28baf9 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -147,7 +147,7 @@ igt@xe_prime_self_import@basic-with_fd_dup
 #igt@xe_prime_self_import@basic-llseek-size
 igt@xe_query@query-engines
 igt@xe_query@query-mem-usage
-igt@xe_query@query-gts
+igt@xe_query@query-gt-list
 igt@xe_query@query-config
 igt@xe_query@query-hwconfig
 igt@xe_query@query-topology
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 3aa2918f0..e0d14966b 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -252,17 +252,17 @@ test_query_mem_usage(int fd)
 }
 
 /**
- * SUBTEST: query-gts
+ * SUBTEST: query-gt-list
  * Test category: functionality test
- * Description: Display information about available GTs for xe device.
+ * Description: Display information about available GT components for xe device.
  */
 static void
-test_query_gts(int fd)
+test_query_gt_list(int fd)
 {
-	struct drm_xe_query_gts *gts;
+	struct drm_xe_query_gt_list *gt_list;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_GTS,
+		.query = DRM_XE_DEVICE_QUERY_GT_LIST,
 		.size = 0,
 		.data = 0,
 	};
@@ -271,29 +271,29 @@ test_query_gts(int fd)
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 	igt_assert_neq(query.size, 0);
 
-	gts = malloc(query.size);
-	igt_assert(gts);
+	gt_list = malloc(query.size);
+	igt_assert(gt_list);
 
-	query.data = to_user_pointer(gts);
+	query.data = to_user_pointer(gt_list);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	for (i = 0; i < gts->num_gt; i++) {
-		igt_info("type: %d\n", gts->gts[i].type);
-		igt_info("gt_id: %d\n", gts->gts[i].gt_id);
-		igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
+	for (i = 0; i < gt_list->num_gt; i++) {
+		igt_info("type: %d\n", gt_list->gt_list[i].type);
+		igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
+		igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
 		igt_info("native_mem_regions: 0x%016llx\n",
-		       gts->gts[i].native_mem_regions);
+		       gt_list->gt_list[i].native_mem_regions);
 		igt_info("slow_mem_regions: 0x%016llx\n",
-		       gts->gts[i].slow_mem_regions);
+		       gt_list->gt_list[i].slow_mem_regions);
 		igt_info("inaccessible_mem_regions: 0x%016llx\n",
-		       gts->gts[i].inaccessible_mem_regions);
+		       gt_list->gt_list[i].inaccessible_mem_regions);
 	}
 }
 
 /**
  * SUBTEST: query-topology
  * Test category: functionality test
- * Description: Display topology information of GTs.
+ * Description: Display topology information of GT.
  */
 static void
 test_query_gt_topology(int fd)
@@ -682,8 +682,8 @@ igt_main
 	igt_subtest("query-mem-usage")
 		test_query_mem_usage(xe);
 
-	igt_subtest("query-gts")
-		test_query_gts(xe);
+	igt_subtest("query-gt-list")
+		test_query_gt_list(xe);
 
 	igt_subtest("query-config")
 		test_query_config(xe);
-- 
2.34.1

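For out-of-tree consumers, the rename above changes only names: the query keeps the same flexible-array layout, and helpers such as __memory_regions() in lib/xe/xe_query.c still just OR the per-GT region masks. A minimal, self-contained sketch of that aggregation, using simplified stand-in structs rather than the real uapi header:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the renamed uapi structs (illustrative only). */
struct gt_desc {
	uint64_t native_mem_regions;
	uint64_t slow_mem_regions;
};

struct gt_list_desc {
	uint32_t num_gt;
	uint32_t pad;
	struct gt_desc gt_list[]; /* flexible array, as in drm_xe_query_gt_list */
};

/* Mirror of __memory_regions(): OR the region masks of every GT. */
static uint64_t memory_regions(const struct gt_list_desc *gt_list)
{
	uint64_t regions = 0;

	for (uint32_t i = 0; i < gt_list->num_gt; i++)
		regions |= gt_list->gt_list[i].native_mem_regions |
			   gt_list->gt_list[i].slow_mem_regions;

	return regions;
}

uint64_t example(void)
{
	struct gt_list_desc *l = calloc(1, sizeof(*l) + 2 * sizeof(struct gt_desc));
	uint64_t r;

	l->num_gt = 2;
	l->gt_list[0] = (struct gt_desc){ .native_mem_regions = 0x1, .slow_mem_regions = 0x6 };
	l->gt_list[1] = (struct gt_desc){ .native_mem_regions = 0x2, .slow_mem_regions = 0x4 };
	r = memory_regions(l);
	free(l);
	return r;
}
```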

* [igt-dev] [PATCH v3 14/24] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (12 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 13/24] drm-uapi/xe: Rename gts to gt_list Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 15/24] drm-uapi/xe: Align with documentation updates Francois Dugast
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit
("drm/xe/uapi: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 4 ++--
 tests/intel/xe_query.c    | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index a0310dbe0..02279b791 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -331,8 +331,8 @@ struct drm_xe_query_config {
 #define XE_QUERY_CONFIG_VA_BITS			3
 #define XE_QUERY_CONFIG_GT_COUNT		4
 #define XE_QUERY_CONFIG_MEM_REGION_COUNT	5
-#define XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY	6
-#define XE_QUERY_CONFIG_NUM_PARAM		(XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY + 1)
+#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	6
+#define XE_QUERY_CONFIG_NUM_PARAM		(XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
 	/** @info: array of elements containing the config info */
 	__u64 info[];
 };
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index e0d14966b..17215fd72 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -380,8 +380,8 @@ test_query_config(int fd)
 		config->info[XE_QUERY_CONFIG_GT_COUNT]);
 	igt_info("XE_QUERY_CONFIG_MEM_REGION_COUNT\t%llu\n",
 		config->info[XE_QUERY_CONFIG_MEM_REGION_COUNT]);
-	igt_info("XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY\t%llu\n",
-		config->info[XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY]);
+	igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
+		config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
 	dump_hex_debug(config, query.size);
 
 	free(config);
-- 
2.34.1


* [igt-dev] [PATCH v3 15/24] drm-uapi/xe: Align with documentation updates
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (13 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 14/24] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 16/24] drm-uapi/xe: Align with Crystal Reference Clock updates Francois Dugast
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/uapi: Add documentation for query")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 41 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 38 insertions(+), 3 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 02279b791..df0455450 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -324,14 +324,43 @@ struct drm_xe_query_config {
 	/** @pad: MBZ */
 	__u32 pad;
 
+	/*
+	 * Device ID (lower 16 bits) and the device revision (next
+	 * 8 bits)
+	 */
 #define XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
+	/*
+	 * Flags describing the device configuration, see list below
+	 */
 #define XE_QUERY_CONFIG_FLAGS			1
+	/*
+	 * Flag is set if the device has usable VRAM
+	 */
 	#define XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
+	/*
+	 * Minimal memory alignment required by this device,
+	 * typically SZ_4K or SZ_64K
+	 */
 #define XE_QUERY_CONFIG_MIN_ALIGNMENT		2
+	/*
+	 * Maximum bits of a virtual address
+	 */
 #define XE_QUERY_CONFIG_VA_BITS			3
+	/*
+	 * Total number of GTs for the entire device
+	 */
 #define XE_QUERY_CONFIG_GT_COUNT		4
+	/*
+	 * Total number of accessible memory regions
+	 */
 #define XE_QUERY_CONFIG_MEM_REGION_COUNT	5
+	/*
+	 * Value of the highest available exec queue priority
+	 */
 #define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	6
+	/*
+	 * Number of elements in the info array
+	 */
 #define XE_QUERY_CONFIG_NUM_PARAM		(XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
 	/** @info: array of elements containing the config info */
 	__u64 info[];
@@ -443,9 +472,15 @@ struct drm_xe_query_topology_mask {
 /**
  * struct drm_xe_device_query - main structure to query device information
  *
- * If size is set to 0, the driver fills it with the required size for the
- * requested type of data to query. If size is equal to the required size,
- * the queried information is copied into data.
+ * The user selects the type of data to query among DRM_XE_DEVICE_QUERY_*
+ * and sets the value in the query member. This determines the type of
+ * the structure provided by the driver in data, among struct drm_xe_query_*.
+ *
+ * If size is set to 0, the driver fills it with the required size for
+ * the requested type of data to query. If size is equal to the required
+ * size, the queried information is copied into data. If size is set to
+ * a value different from 0 and different from the required size, the
+ * IOCTL call returns -EINVAL.
  *
  * For example the following code snippet allows retrieving and printing
  * information about the device engines with DRM_XE_DEVICE_QUERY_ENGINES:
-- 
2.34.1

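The new documentation for XE_QUERY_CONFIG_REV_AND_DEVICE_ID spells out the packing that lib/xe/xe_query.c already relies on (dev_id = info & 0xffff). A short sketch of the decode implied by that documentation (helper names are illustrative, not part of the uapi):

```c
#include <stdint.h>

/* Decode XE_QUERY_CONFIG_REV_AND_DEVICE_ID as documented:
 * device ID in the lower 16 bits, device revision in the next 8 bits. */
static uint16_t config_device_id(uint64_t info)
{
	return info & 0xffff;
}

static uint8_t config_revision(uint64_t info)
{
	return (info >> 16) & 0xff;
}
```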

* [igt-dev] [PATCH v3 16/24] drm-uapi/xe: Align with Crystal Reference Clock updates
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (14 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 15/24] drm-uapi/xe: Align with documentation updates Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 17/24] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op Francois Dugast
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

This patch aims to be the simplest possible update to get rid
of ref_clock in favor of cs_reference_clock, aligning
with the uAPI changes in commit
b53c288afe30 ("drm/xe/uapi: Crystal Reference Clock updates")

This is a non-functional change since the values are exactly
the same. Any issues with current tests would still be present.
Any further updates to xe_spin should be done in follow-up patches.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 10 ++++------
 lib/xe/xe_query.c         | 21 +++++++++++++++++++++
 lib/xe/xe_query.h         |  1 +
 lib/xe/xe_spin.c          | 11 +++++------
 tests/intel/xe_query.c    | 31 ++++++++-----------------------
 5 files changed, 39 insertions(+), 35 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index df0455450..8e59b98e5 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -249,8 +249,8 @@ struct drm_xe_query_mem_region {
  * relevant GPU timestamp. clockid is used to return the specific CPU
  * timestamp.
  *
- * The query returns the command streamer cycles and the frequency that can
- * be used to calculate the command streamer timestamp. In addition the
+ * The query returns the command streamer cycles and the reference clock that
+ * can be used to calculate the command streamer timestamp. In addition the
  * query returns a set of cpu timestamps that indicate when the command
  * streamer cycle count was captured.
  */
@@ -267,8 +267,8 @@ struct drm_xe_query_cs_cycles {
 	 */
 	__u64 cs_cycles;
 
-	/** Frequency of the cs cycles in Hz. */
-	__u64 cs_frequency;
+	/** Reference Clock of the cs cycles in Hz. */
+	__u64 cs_reference_clock;
 
 	/**
 	 * CPU timestamp in ns. The timestamp is captured before reading the
@@ -382,8 +382,6 @@ struct drm_xe_query_gt {
 	__u16 type;
 	/** @gt_id: Unique ID of this GT within the PCI Device */
 	__u16 gt_id;
-	/** @clock_freq: A clock frequency for timestamp */
-	__u32 clock_freq;
 	/**
 	 * @native_mem_regions: Bit mask of instances from
 	 * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 986a3a0c1..61d71ef26 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -328,6 +328,27 @@ bool xe_supports_faults(int fd)
 	return supports_faults;
 }
 
+/**
+ * xe_query_cs_cycles:
+ * @fd: xe device fd
+ * @resp: A pointer to a drm_xe_query_cs_cycles to get the output of the query
+ *
+ * Full DRM_XE_QUERY_CS_CYCLES returning the response on the
+ * struct drm_xe_query_cs_cycles pointer argument.
+ */
+void xe_query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp)
+{
+	struct drm_xe_device_query query = {
+		.extensions = 0,
+		.query = DRM_XE_QUERY_CS_CYCLES,
+		.size = sizeof(*resp),
+		.data = to_user_pointer(resp),
+	};
+
+	do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+	igt_assert(query.size);
+}
+
 static void xe_device_destroy_cache(void)
 {
 	pthread_mutex_lock(&cache.cache_mutex);
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index da7deaf4c..da4461306 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -102,6 +102,7 @@ uint32_t xe_get_default_alignment(int fd);
 uint32_t xe_va_bits(int fd);
 uint16_t xe_dev_id(int fd);
 bool xe_supports_faults(int fd);
+void xe_query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp);
 const char *xe_engine_class_string(uint32_t engine_class);
 bool xe_has_engine_class(int fd, uint16_t engine_class);
 
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index b05b38829..986d63cb4 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -16,14 +16,13 @@
 #include "xe_ioctl.h"
 #include "xe_spin.h"
 
-static uint32_t read_timestamp_frequency(int fd, int gt_id)
+static uint32_t read_timestamp_frequency(int fd)
 {
-	struct xe_device *dev = xe_device_get(fd);
+	struct drm_xe_query_cs_cycles ts = {};
 
-	igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
-	igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
+	xe_query_cs_cycles(fd, &ts);
 
-	return dev->gt_list->gt_list[gt_id].clock_freq;
+	return ts.cs_reference_clock;
 }
 
 static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
@@ -43,7 +42,7 @@ static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
  */
 uint32_t duration_to_ctx_ticks(int fd, int gt_id, uint64_t duration_ns)
 {
-	uint32_t f = read_timestamp_frequency(fd, gt_id);
+	uint32_t f = read_timestamp_frequency(fd);
 	uint64_t ctx_ticks = div64_u64_round_up(duration_ns * f, NSEC_PER_SEC);
 
 	igt_assert_lt_u64(ctx_ticks, XE_SPIN_MAX_CTX_TICKS);
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 17215fd72..872b889f9 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -280,7 +280,6 @@ test_query_gt_list(int fd)
 	for (i = 0; i < gt_list->num_gt; i++) {
 		igt_info("type: %d\n", gt_list->gt_list[i].type);
 		igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
-		igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
 		igt_info("native_mem_regions: 0x%016llx\n",
 		       gt_list->gt_list[i].native_mem_regions);
 		igt_info("slow_mem_regions: 0x%016llx\n",
@@ -488,20 +487,6 @@ query_cs_cycles_supported(int fd)
 	return igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query) == 0;
 }
 
-static void
-query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp)
-{
-	struct drm_xe_device_query query = {
-		.extensions = 0,
-		.query = DRM_XE_QUERY_CS_CYCLES,
-		.size = sizeof(*resp),
-		.data = to_user_pointer(resp),
-	};
-
-	do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
-	igt_assert(query.size);
-}
-
 static void
 __cs_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
 {
@@ -544,29 +529,29 @@ __cs_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
 		ts2.eci = *hwe;
 		ts2.clockid = clock[index].id;
 
-		query_cs_cycles(fd, &ts1);
-		query_cs_cycles(fd, &ts2);
+		xe_query_cs_cycles(fd, &ts1);
+		xe_query_cs_cycles(fd, &ts2);
 
 		igt_debug("[1] cpu_ts before %llu, reg read time %llu\n",
 			  ts1.cpu_timestamp,
 			  ts1.cpu_delta);
 		igt_debug("[1] cs_ts %llu, freq %llu Hz, width %u\n",
-			  ts1.cs_cycles, ts1.cs_frequency, ts1.width);
+			  ts1.cs_cycles, ts1.cs_reference_clock, ts1.width);
 
 		igt_debug("[2] cpu_ts before %llu, reg read time %llu\n",
 			  ts2.cpu_timestamp,
 			  ts2.cpu_delta);
 		igt_debug("[2] cs_ts %llu, freq %llu Hz, width %u\n",
-			  ts2.cs_cycles, ts2.cs_frequency, ts2.width);
+			  ts2.cs_cycles, ts2.cs_reference_clock, ts2.width);
 
 		delta_cpu = ts2.cpu_timestamp - ts1.cpu_timestamp;
 
 		if (ts2.cs_cycles >= ts1.cs_cycles)
 			delta_cs = (ts2.cs_cycles - ts1.cs_cycles) *
-				   NSEC_PER_SEC / ts1.cs_frequency;
+				   NSEC_PER_SEC / ts1.cs_reference_clock;
 		else
 			delta_cs = (((1 << ts2.width) - ts2.cs_cycles) + ts1.cs_cycles) *
-				   NSEC_PER_SEC / ts1.cs_frequency;
+				   NSEC_PER_SEC / ts1.cs_reference_clock;
 
 		igt_debug("delta_cpu[%lu], delta_cs[%lu]\n",
 			  delta_cpu, delta_cs);
@@ -637,7 +622,7 @@ static void test_cs_cycles_invalid(int fd)
 
 	/* sanity check engine selection is valid */
 	ts.eci = *hwe;
-	query_cs_cycles(fd, &ts);
+	xe_query_cs_cycles(fd, &ts);
 
 	/* bad instance */
 	ts.eci = *hwe;
@@ -666,7 +651,7 @@ static void test_cs_cycles_invalid(int fd)
 	ts.clockid = 0;
 
 	/* sanity check */
-	query_cs_cycles(fd, &ts);
+	xe_query_cs_cycles(fd, &ts);
 }
 
 igt_main
-- 
2.34.1

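The delta computation in __cs_cycles() above must account for the cs counter wrapping within its reported bit width. One way to express that arithmetic as a standalone helper (names are illustrative; this is a sketch of the idea, not the test's exact code):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ull

/* Convert two samples of a width-bit cycle counter into a nanosecond
 * delta, using the reference clock in Hz. If the second sample is
 * smaller than the first, the counter wrapped once in between. */
static uint64_t cs_delta_ns(uint64_t cs1, uint64_t cs2, uint32_t width,
			    uint64_t cs_reference_clock)
{
	uint64_t ticks;

	if (cs2 >= cs1)
		ticks = cs2 - cs1;
	else
		ticks = ((1ull << width) - cs1) + cs2;

	return ticks * NSEC_PER_SEC / cs_reference_clock;
}
```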

* [igt-dev] [PATCH v3 17/24] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (15 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 16/24] drm-uapi/xe: Align with Crystal Reference Clock updates Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 18/24] drm-uapi/xe: Align with uAPI to query micro-controler firmware version Francois Dugast
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe: Extend drm_xe_vm_bind_op")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 include/drm-uapi/xe_drm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 8e59b98e5..f07694111 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -657,6 +657,9 @@ struct drm_xe_vm_destroy {
 };
 
 struct drm_xe_vm_bind_op {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
 	/**
 	 * @obj: GEM object to operate on, MBZ for MAP_USERPTR, MBZ for UNMAP
 	 */
-- 
2.34.1

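The new extensions field follows the usual DRM extension-chaining pattern: a u64 user pointer to the first extension struct, each of which points to the next, with 0 terminating the chain. A sketch of how user space would walk such a chain, with a stand-in extension header (the real layouts live in the uapi header):

```c
#include <stdint.h>

/* Stand-in for a common extension header: next pointer plus a name id. */
struct user_ext {
	uint64_t next_extension; /* user pointer to next ext, 0 ends the chain */
	uint32_t name;
	uint32_t pad;
};

/* Count the extensions in a chain starting at a u64 user pointer. */
static int count_extensions(uint64_t first)
{
	int n = 0;
	uint64_t p;

	for (p = first; p; p = ((struct user_ext *)(uintptr_t)p)->next_extension)
		n++;

	return n;
}
```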

* [igt-dev] [PATCH v3 18/24] drm-uapi/xe: Align with uAPI to query micro-controler firmware version
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (16 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 17/24] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 19/24] drm-uapi/xe: Align with DRM_XE_DEVICE_QUERY_HWCONFIG documentation Francois Dugast
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe: Add uAPI to query micro-controler firmware version")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 45 +++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index f07694111..e0e202e4a 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -467,6 +467,36 @@ struct drm_xe_query_topology_mask {
 	__u8 mask[];
 };
 
+/**
+ * struct drm_xe_query_uc_fw_version - query a micro-controller firmware version
+ *
+ * Given a uc_type this will return the major, minor, patch and branch version
+ * of the micro-controller firmware.
+ */
+struct drm_xe_query_uc_fw_version {
+	/** @uc: The micro-controller type to query firmware version */
+#define XE_QUERY_UC_TYPE_GUC 0
+	__u16 uc_type;
+
+	/** @pad: MBZ */
+	__u16 pad;
+
+	/* @major_ver: major uc fw version */
+	__u32 major_ver;
+	/* @minor_ver: minor uc fw version */
+	__u32 minor_ver;
+	/* @patch_ver: patch uc fw version */
+	__u32 patch_ver;
+	/* @branch_ver: branch uc fw version */
+	__u32 branch_ver;
+
+	/** @pad2: MBZ */
+	__u32 pad2;
+
+	/** @reserved: Reserved */
+	__u64 reserved;
+};
+
 /**
  * struct drm_xe_device_query - main structure to query device information
  *
@@ -512,13 +542,14 @@ struct drm_xe_device_query {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_DEVICE_QUERY_ENGINES	0
-#define DRM_XE_DEVICE_QUERY_MEM_USAGE	1
-#define DRM_XE_DEVICE_QUERY_CONFIG	2
-#define DRM_XE_DEVICE_QUERY_GT_LIST	3
-#define DRM_XE_DEVICE_QUERY_HWCONFIG	4
-#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY	5
-#define DRM_XE_QUERY_CS_CYCLES		6
+#define DRM_XE_DEVICE_QUERY_ENGINES		0
+#define DRM_XE_DEVICE_QUERY_MEM_USAGE		1
+#define DRM_XE_DEVICE_QUERY_CONFIG		2
+#define DRM_XE_DEVICE_QUERY_GT_LIST		3
+#define DRM_XE_DEVICE_QUERY_HWCONFIG		4
+#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY		5
+#define DRM_XE_QUERY_CS_CYCLES			6
+#define DRM_XE_DEVICE_QUERY_UC_FW_VERSION	7
 	/** @query: The type of data to query */
 	__u32 query;
 
-- 
2.34.1

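A consumer of the new query typically only needs to format the four version fields for display. A hedged sketch with a mirror struct (the field set matches the header above; the helper and the chosen ordering are illustrative, not mandated by the uapi):

```c
#include <stdint.h>
#include <stdio.h>

/* Mirrors the version fields of drm_xe_query_uc_fw_version. */
struct uc_fw_version {
	uint32_t branch_ver;
	uint32_t major_ver;
	uint32_t minor_ver;
	uint32_t patch_ver;
};

/* Render as "branch.major.minor.patch" (one plausible presentation). */
static int format_uc_fw_version(char *buf, size_t len,
				const struct uc_fw_version *v)
{
	return snprintf(buf, len, "%u.%u.%u.%u",
			v->branch_ver, v->major_ver, v->minor_ver, v->patch_ver);
}
```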

* [igt-dev] [PATCH v3 19/24] drm-uapi/xe: Align with DRM_XE_DEVICE_QUERY_HWCONFIG documentation
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (17 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 18/24] drm-uapi/xe: Align with uAPI to query micro-controler firmware version Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 20/24] drm-uapi/xe: Align with uAPI to pad to drm_xe_engine_class_instance Francois Dugast
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Document DRM_XE_DEVICE_QUERY_HWCONFIG")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index e0e202e4a..d70afb8ff 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -546,6 +546,11 @@ struct drm_xe_device_query {
 #define DRM_XE_DEVICE_QUERY_MEM_USAGE		1
 #define DRM_XE_DEVICE_QUERY_CONFIG		2
 #define DRM_XE_DEVICE_QUERY_GT_LIST		3
+	/*
+	 * Query type to retrieve the hardware configuration of the device
+	 * such as information on slices, memory, caches, and so on. It is
+	 * provided as a table of attributes (key / value).
+	 */
 #define DRM_XE_DEVICE_QUERY_HWCONFIG		4
 #define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY		5
 #define DRM_XE_QUERY_CS_CYCLES			6
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 20/24] drm-uapi/xe: Align with uAPI to pad to drm_xe_engine_class_instance
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (18 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 19/24] drm-uapi/xe: Align with DRM_XE_DEVICE_QUERY_HWCONFIG documentation Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 21/24] drm-uapi/xe: Align with uAPI update query HuC micro-controller firmware version Francois Dugast
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Add pad to drm_xe_engine_class_instance")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index d70afb8ff..9931232f2 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -143,6 +143,8 @@ struct drm_xe_engine_class_instance {
 
 	__u16 engine_instance;
 	__u16 gt_id;
+	/** @pad: MBZ */
+	__u32 pad;
 };
 
 /**
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 21/24] drm-uapi/xe: Align with uAPI update query HuC micro-controller firmware version
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (19 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 20/24] drm-uapi/xe: Align with uAPI to pad to drm_xe_engine_class_instance Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 22/24] drm-uapi/xe: Align with uAPI update for query config num_params Francois Dugast
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe: Extend uAPI to query HuC micro-controller firmware version")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 9931232f2..69b5189ab 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -478,6 +478,7 @@ struct drm_xe_query_topology_mask {
 struct drm_xe_query_uc_fw_version {
 	/** @uc_type: The micro-controller type to query firmware version */
 #define XE_QUERY_UC_TYPE_GUC 0
+#define XE_QUERY_UC_TYPE_HUC 1
 	__u16 uc_type;
 
 	/** @pad: MBZ */
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 22/24] drm-uapi/xe: Align with uAPI update for query config num_params
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (20 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 21/24] drm-uapi/xe: Align with uAPI update query HuC micro-controller firmware version Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 23/24] drm-uapi/xe: Align with uAPI update to add DRM_ prefix in uAPI constants Francois Dugast
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe: Remove useless query config num_params")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 7 -------
 lib/xe/xe_query.c         | 2 --
 tests/intel/xe_query.c    | 2 --
 3 files changed, 11 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 69b5189ab..94a22b6c8 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -320,9 +320,6 @@ struct drm_xe_query_mem_usage {
  * struct drm_xe_query_config in .data.
  */
 struct drm_xe_query_config {
-	/** @num_params: number of parameters returned in info */
-	__u32 num_params;
-
 	/** @pad: MBZ */
 	__u32 pad;
 
@@ -360,10 +357,6 @@ struct drm_xe_query_config {
 	 * Value of the highest available exec queue priority
 	 */
 #define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	6
-	/*
-	 * Number of elements in the info array
-	 */
-#define XE_QUERY_CONFIG_NUM_PARAM		(XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
 	/** @info: array of elements containing the config info */
 	__u64 info[];
 };
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 61d71ef26..6feb82ddf 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -34,8 +34,6 @@ static struct drm_xe_query_config *xe_query_config_new(int fd)
 	query.data = to_user_pointer(config);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	igt_assert(config->num_params > 0);
-
 	return config;
 }
 
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 872b889f9..0dabe8d06 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -358,8 +358,6 @@ test_query_config(int fd)
 	query.data = to_user_pointer(config);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	igt_assert(config->num_params > 0);
-
 	igt_info("XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
 		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
 	igt_info("  REV_ID\t\t\t\t%#llx\n",
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] [PATCH v3 23/24] drm-uapi/xe: Align with uAPI update to add DRM_ prefix in uAPI constants
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (21 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 22/24] drm-uapi/xe: Align with uAPI update for query config num_params Francois Dugast
@ 2023-09-26 13:00 ` Francois Dugast
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 24/24] drm-uapi/xe: Align with uAPI update to add _FLAG to constants usable for flags Francois Dugast
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Add missing DRM_ prefix in uAPI constants")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h        | 116 ++++++++++++++++---------------
 lib/intel_batchbuffer.c          |   8 +--
 lib/xe/xe_ioctl.c                |  22 +++---
 lib/xe/xe_query.c                |  14 ++--
 lib/xe/xe_query.h                |   4 +-
 lib/xe/xe_util.c                 |  10 +--
 lib/xe/xe_util.h                 |   4 +-
 tests/intel/xe_ccs.c             |   4 +-
 tests/intel/xe_debugfs.c         |  14 ++--
 tests/intel/xe_exec_basic.c      |   8 +--
 tests/intel/xe_exec_fault_mode.c |   4 +-
 tests/intel/xe_exec_reset.c      |  20 +++---
 tests/intel/xe_exec_threads.c    |   4 +-
 tests/intel/xe_exercise_blt.c    |   4 +-
 tests/intel/xe_pm.c              |   2 +-
 tests/intel/xe_query.c           |  48 ++++++-------
 tests/intel/xe_vm.c              |  10 +--
 tests/kms_flip.c                 |   0
 18 files changed, 149 insertions(+), 147 deletions(-)
 mode change 100755 => 100644 tests/kms_flip.c

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 94a22b6c8..c4cf9d56f 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -19,12 +19,12 @@ extern "C" {
 /**
  * DOC: uevent generated by xe on its pci node.
  *
- * XE_RESET_FAILED_UEVENT - Event is generated when attempt to reset gt
+ * DRM_XE_RESET_FAILED_UEVENT - Event is generated when attempt to reset gt
  * fails. The value supplied with the event is always "NEEDS_RESET".
  * Additional information supplied is tile id and gt id of the gt unit for
  * which reset has failed.
  */
-#define XE_RESET_FAILED_UEVENT "DEVICE_STATUS"
+#define DRM_XE_RESET_FAILED_UEVENT "DEVICE_STATUS"
 
 /**
  * struct xe_user_extension - Base class for defining a chain of extensions
@@ -151,14 +151,14 @@ struct drm_xe_engine_class_instance {
  * enum drm_xe_memory_class - Supported memory classes.
  */
 enum drm_xe_memory_class {
-	/** @XE_MEM_REGION_CLASS_SYSMEM: Represents system memory. */
-	XE_MEM_REGION_CLASS_SYSMEM = 0,
+	/** @DRM_XE_MEM_REGION_CLASS_SYSMEM: Represents system memory. */
+	DRM_XE_MEM_REGION_CLASS_SYSMEM = 0,
 	/**
-	 * @XE_MEM_REGION_CLASS_VRAM: On discrete platforms, this
+	 * @DRM_XE_MEM_REGION_CLASS_VRAM: On discrete platforms, this
 	 * represents the memory that is local to the device, which we
 	 * call VRAM. Not valid on integrated platforms.
 	 */
-	XE_MEM_REGION_CLASS_VRAM
+	DRM_XE_MEM_REGION_CLASS_VRAM
 };
 
 /**
@@ -218,7 +218,7 @@ struct drm_xe_query_mem_region {
 	 * always equal the @total_size, since all of it will be CPU
 	 * accessible.
 	 *
-	 * Note this is only tracked for XE_MEM_REGION_CLASS_VRAM
+	 * Note this is only tracked for DRM_XE_MEM_REGION_CLASS_VRAM
 	 * regions (for other types the value here will always equal
 	 * zero).
 	 */
@@ -230,7 +230,7 @@ struct drm_xe_query_mem_region {
 	 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable
 	 * accounting. Without this the value here will always equal
 	 * zero.  Note this is only currently tracked for
-	 * XE_MEM_REGION_CLASS_VRAM regions (for other types the value
+	 * DRM_XE_MEM_REGION_CLASS_VRAM regions (for other types the value
 	 * here will always be zero).
 	 */
 	__u64 cpu_visible_used;
@@ -327,36 +327,36 @@ struct drm_xe_query_config {
 	 * Device ID (lower 16 bits) and the device revision (next
 	 * 8 bits)
 	 */
-#define XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
+#define DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
 	/*
 	 * Flags describing the device configuration, see list below
 	 */
-#define XE_QUERY_CONFIG_FLAGS			1
+#define DRM_XE_QUERY_CONFIG_FLAGS			1
 	/*
 	 * Flag is set if the device has usable VRAM
 	 */
-	#define XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
+	#define DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
 	/*
 	 * Minimal memory alignment required by this device,
 	 * typically SZ_4K or SZ_64K
 	 */
-#define XE_QUERY_CONFIG_MIN_ALIGNMENT		2
+#define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT		2
 	/*
 	 * Maximum bits of a virtual address
 	 */
-#define XE_QUERY_CONFIG_VA_BITS			3
+#define DRM_XE_QUERY_CONFIG_VA_BITS			3
 	/*
 	 * Total number of GTs for the entire device
 	 */
-#define XE_QUERY_CONFIG_GT_COUNT		4
+#define DRM_XE_QUERY_CONFIG_GT_COUNT		4
 	/*
 	 * Total number of accessible memory regions
 	 */
-#define XE_QUERY_CONFIG_MEM_REGION_COUNT	5
+#define DRM_XE_QUERY_CONFIG_MEM_REGION_COUNT	5
 	/*
 	 * Value of the highest available exec queue priority
 	 */
-#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	6
+#define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	6
 	/** @info: array of elements containing the config info */
 	__u64 info[];
 };
@@ -370,9 +370,9 @@ struct drm_xe_query_config {
  * implementing graphics and/or media operations.
  */
 struct drm_xe_query_gt {
-#define XE_QUERY_GT_TYPE_MAIN		0
-#define XE_QUERY_GT_TYPE_REMOTE		1
-#define XE_QUERY_GT_TYPE_MEDIA		2
+#define DRM_XE_QUERY_GT_TYPE_MAIN		0
+#define DRM_XE_QUERY_GT_TYPE_REMOTE		1
+#define DRM_XE_QUERY_GT_TYPE_MEDIA		2
 	/** @type: GT type: Main, Remote, or Media */
 	__u16 type;
 	/** @gt_id: Unique ID of this GT within the PCI Device */
@@ -435,7 +435,7 @@ struct drm_xe_query_topology_mask {
 	 *   DSS_GEOMETRY    ff ff ff ff 00 00 00 00
 	 * means 32 DSS are available for geometry.
 	 */
-#define XE_TOPO_DSS_GEOMETRY	(1 << 0)
+#define DRM_XE_TOPO_DSS_GEOMETRY	(1 << 0)
 	/*
 	 * To query the mask of Dual Sub Slices (DSS) available for compute
 	 * operations. For example a query response containing the following
@@ -443,7 +443,7 @@ struct drm_xe_query_topology_mask {
 	 *   DSS_COMPUTE    ff ff ff ff 00 00 00 00
 	 * means 32 DSS are available for compute.
 	 */
-#define XE_TOPO_DSS_COMPUTE	(1 << 1)
+#define DRM_XE_TOPO_DSS_COMPUTE	(1 << 1)
 	/*
 	 * To query the mask of Execution Units (EU) available per Dual Sub
 	 * Slices (DSS). For example a query response containing the following
@@ -451,7 +451,7 @@ struct drm_xe_query_topology_mask {
 	 *   EU_PER_DSS    ff ff 00 00 00 00 00 00
 	 * means each DSS has 16 EU.
 	 */
-#define XE_TOPO_EU_PER_DSS	(1 << 2)
+#define DRM_XE_TOPO_EU_PER_DSS	(1 << 2)
 	/** @type: type of mask */
 	__u16 type;
 
@@ -470,8 +470,8 @@ struct drm_xe_query_topology_mask {
  */
 struct drm_xe_query_uc_fw_version {
 	/** @uc_type: The micro-controller type to query firmware version */
-#define XE_QUERY_UC_TYPE_GUC 0
-#define XE_QUERY_UC_TYPE_HUC 1
+#define DRM_XE_QUERY_UC_TYPE_GUC 0
+#define DRM_XE_QUERY_UC_TYPE_HUC 1
 	__u16 uc_type;
 
 	/** @pad: MBZ */
@@ -575,8 +575,8 @@ struct drm_xe_gem_create {
 	 */
 	__u64 size;
 
-#define XE_GEM_CREATE_FLAG_DEFER_BACKING	(0x1 << 24)
-#define XE_GEM_CREATE_FLAG_SCANOUT		(0x1 << 25)
+#define DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING	(0x1 << 24)
+#define DRM_XE_GEM_CREATE_FLAG_SCANOUT		(0x1 << 25)
 /*
  * When using VRAM as a possible placement, ensure that the corresponding VRAM
  * allocation will always use the CPU accessible part of VRAM. This is important
@@ -592,7 +592,7 @@ struct drm_xe_gem_create {
  * display surfaces, therefore the kernel requires setting this flag for such
  * objects, otherwise an error is thrown on small-bar systems.
  */
-#define XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
+#define DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
 	/**
 	 * @flags: Flags, currently a mask of memory instances of where BO can
 	 * be placed
@@ -659,7 +659,7 @@ struct drm_xe_ext_set_property {
 };
 
 struct drm_xe_vm_create {
-#define XE_VM_EXTENSION_SET_PROPERTY	0
+#define DRM_XE_VM_EXTENSION_SET_PROPERTY	0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -725,29 +725,29 @@ struct drm_xe_vm_bind_op {
 	 */
 	__u64 tile_mask;
 
-#define XE_VM_BIND_OP_MAP		0x0
-#define XE_VM_BIND_OP_UNMAP		0x1
-#define XE_VM_BIND_OP_MAP_USERPTR	0x2
-#define XE_VM_BIND_OP_UNMAP_ALL		0x3
-#define XE_VM_BIND_OP_PREFETCH		0x4
+#define DRM_XE_VM_BIND_OP_MAP		0x0
+#define DRM_XE_VM_BIND_OP_UNMAP		0x1
+#define DRM_XE_VM_BIND_OP_MAP_USERPTR	0x2
+#define DRM_XE_VM_BIND_OP_UNMAP_ALL		0x3
+#define DRM_XE_VM_BIND_OP_PREFETCH		0x4
 	/** @op: Bind operation to perform */
 	__u32 op;
 
-#define XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
-#define XE_VM_BIND_FLAG_ASYNC		(0x1 << 1)
+#define DRM_XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
+#define DRM_XE_VM_BIND_FLAG_ASYNC		(0x1 << 1)
 	/*
 	 * Valid on a faulting VM only, do the MAP operation immediately rather
 	 * than deferring the MAP to the page fault handler.
 	 */
-#define XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
+#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
 	/*
 	 * When the NULL flag is set, the page tables are setup with a special
 	 * bit which indicates writes are dropped and all reads return zero.  In
-	 * the future, the NULL flags will only be valid for XE_VM_BIND_OP_MAP
+	 * the future, the NULL flags will only be valid for DRM_XE_VM_BIND_OP_MAP
 	 * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
 	 * intended to implement VK sparse bindings.
 	 */
-#define XE_VM_BIND_FLAG_NULL		(0x1 << 3)
+#define DRM_XE_VM_BIND_FLAG_NULL		(0x1 << 3)
 	/** @flags: Bind flags */
 	__u32 flags;
 
@@ -814,14 +814,14 @@ struct drm_xe_exec_queue_set_property {
 	/** @exec_queue_id: Exec queue ID */
 	__u32 exec_queue_id;
 
-#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
-#define XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
-#define XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
-#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
-#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY	7
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY	7
 	/** @property: property to set */
 	__u32 property;
 
@@ -833,7 +833,7 @@ struct drm_xe_exec_queue_set_property {
 };
 
 struct drm_xe_exec_queue_create {
-#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
+#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -872,7 +872,7 @@ struct drm_xe_exec_queue_get_property {
 	/** @exec_queue_id: Exec queue ID */
 	__u32 exec_queue_id;
 
-#define XE_EXEC_QUEUE_GET_PROPERTY_BAN			0
+#define DRM_XE_EXEC_QUEUE_GET_PROPERTY_BAN			0
 	/** @property: property to get */
 	__u32 property;
 
@@ -1108,12 +1108,14 @@ struct drm_xe_vm_madvise {
 /**
  * DOC: XE PMU event config IDs
  *
- * Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
- * as part of perf_event_open syscall to read a particular event.
+ * Check 'man perf_event_open' to use the IDs DRM_XE_PMU_XXXX listed in xe_drm.h
+ * in 'struct perf_event_attr' as part of perf_event_open syscall to read a
+ * particular event.
  *
- * For example to open the XE_PMU_INTERRUPTS(0):
+ * For example to open the DRM_XE_PMU_INTERRUPTS(0):
  *
  * .. code-block:: C
+ *
  *	struct perf_event_attr attr;
  *	long long count;
  *	int cpu = 0;
@@ -1124,7 +1126,7 @@ struct drm_xe_vm_madvise {
  *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
  *	attr.use_clockid = 1;
  *	attr.clockid = CLOCK_MONOTONIC;
- *	attr.config = XE_PMU_INTERRUPTS(0);
+ *	attr.config = DRM_XE_PMU_INTERRUPTS(0);
  *
  *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
  */
@@ -1137,11 +1139,11 @@ struct drm_xe_vm_madvise {
 #define ___XE_PMU_OTHER(gt, x) \
 	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
 
-#define XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)
-#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
-#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
-#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
-#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
+#define DRM_XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)
+#define DRM_XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
+#define DRM_XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
+#define DRM_XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
+#define DRM_XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
 
 #if defined(__cplusplus)
 }
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index df82ef5f5..bea03ff39 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1286,7 +1286,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 {
 	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
 	struct drm_xe_vm_bind_op *bind_ops, *ops;
-	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
+	bool set_obj = (op & 0xffff) == DRM_XE_VM_BIND_OP_MAP;
 
 	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
 	igt_assert(bind_ops);
@@ -1325,8 +1325,8 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
 
 	if (ibb->num_objects > 1) {
 		struct drm_xe_vm_bind_op *bind_ops;
-		uint32_t op = XE_VM_BIND_OP_UNMAP;
-		uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
+		uint32_t op = DRM_XE_VM_BIND_OP_UNMAP;
+		uint32_t flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 
 		bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
 		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
@@ -2357,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 
 	syncs[0].handle = syncobj_create(ibb->fd, 0);
 	if (ibb->num_objects > 1) {
-		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
+		bind_ops = xe_alloc_bind_ops(ibb, DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC, 0);
 		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
 				 ibb->num_objects, syncs, 1);
 		free(bind_ops);
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 895e3bd4e..da387f5fb 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
 			    uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
-			    XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
+			    DRM_XE_VM_BIND_OP_UNMAP_ALL, DRM_XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -130,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
+			    DRM_XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
 }
 
 void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
@@ -138,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
 		  struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
-			    XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
+			    DRM_XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
 }
 
 void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
@@ -147,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
 			  uint32_t region)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
-			    XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
+			    DRM_XE_VM_BIND_OP_PREFETCH, DRM_XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, region, 0);
 }
 
@@ -156,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		      struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
+			    DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC, sync,
 			    num_syncs, 0, 0);
 }
 
@@ -166,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
 			    uint32_t flags)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
+			    DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC | flags,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -175,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
 			      struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
-			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
+			    DRM_XE_VM_BIND_OP_MAP_USERPTR, DRM_XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -185,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
 				    uint32_t num_syncs, uint32_t flags)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
-			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
+			    DRM_XE_VM_BIND_OP_MAP_USERPTR, DRM_XE_VM_BIND_FLAG_ASYNC |
 			    flags, sync, num_syncs, 0, 0);
 }
 
@@ -194,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
 			struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
-			    XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
+			    DRM_XE_VM_BIND_OP_UNMAP, DRM_XE_VM_BIND_FLAG_ASYNC, sync,
 			    num_syncs, 0, 0);
 }
 
@@ -208,13 +208,13 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		     uint64_t addr, uint64_t size)
 {
-	__xe_vm_bind_sync(fd, vm, bo, offset, addr, size, XE_VM_BIND_OP_MAP);
+	__xe_vm_bind_sync(fd, vm, bo, offset, addr, size, DRM_XE_VM_BIND_OP_MAP);
 }
 
 void xe_vm_unbind_sync(int fd, uint32_t vm, uint64_t offset,
 		       uint64_t addr, uint64_t size)
 {
-	__xe_vm_bind_sync(fd, vm, 0, offset, addr, size, XE_VM_BIND_OP_UNMAP);
+	__xe_vm_bind_sync(fd, vm, 0, offset, addr, size, DRM_XE_VM_BIND_OP_UNMAP);
 }
 
 void xe_vm_destroy(int fd, uint32_t vm)
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 6feb82ddf..75f015c2e 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -247,9 +247,9 @@ struct xe_device *xe_device_get(int fd)
 
 	xe_dev->fd = fd;
 	xe_dev->config = xe_query_config_new(fd);
-	xe_dev->number_gt = xe_dev->config->info[XE_QUERY_CONFIG_GT_COUNT];
-	xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
-	xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
+	xe_dev->number_gt = xe_dev->config->info[DRM_XE_QUERY_CONFIG_GT_COUNT];
+	xe_dev->va_bits = xe_dev->config->info[DRM_XE_QUERY_CONFIG_VA_BITS];
+	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
 	xe_dev->gt_list = xe_query_gt_list_new(fd);
 	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
 	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
@@ -434,7 +434,7 @@ static uint64_t __xe_visible_vram_size(int fd, int gt)
  * @gt: gt id
  *
  * Returns vram memory bitmask for xe device @fd and @gt id, with
- * XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
+ * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
  * possible.
  */
 uint64_t visible_vram_memory(int fd, int gt)
@@ -444,7 +444,7 @@ uint64_t visible_vram_memory(int fd, int gt)
 	 * has landed.
 	 */
 	if (__xe_visible_vram_size(fd, gt))
-		return vram_memory(fd, gt) | XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+		return vram_memory(fd, gt) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 	else
 		return vram_memory(fd, gt); /* older kernel */
 }
@@ -469,7 +469,7 @@ uint64_t vram_if_possible(int fd, int gt)
  *
  * Returns vram memory bitmask for xe device @fd and @gt id or system memory if
  * there's no vram memory available for @gt. Also attaches the
- * XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
+ * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
  * when using vram.
  */
 uint64_t visible_vram_if_possible(int fd, int gt)
@@ -483,7 +483,7 @@ uint64_t visible_vram_if_possible(int fd, int gt)
 	 * has landed.
 	 */
 	if (__xe_visible_vram_size(fd, gt))
-		return vram ? vram | XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
+		return vram ? vram | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
 	else
 		return vram ? vram : system_memory; /* older kernel */
 }
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index da4461306..0e2f40380 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -76,8 +76,8 @@ struct xe_device {
 	for (uint64_t __i = 0; __i < igt_fls(__memreg); __i++) \
 		for_if(__r = (__memreg & (1ull << __i)))
 
-#define XE_IS_CLASS_SYSMEM(__region) ((__region)->mem_class == XE_MEM_REGION_CLASS_SYSMEM)
-#define XE_IS_CLASS_VRAM(__region) ((__region)->mem_class == XE_MEM_REGION_CLASS_VRAM)
+#define XE_IS_CLASS_SYSMEM(__region) ((__region)->mem_class == DRM_XE_MEM_REGION_CLASS_SYSMEM)
+#define XE_IS_CLASS_VRAM(__region) ((__region)->mem_class == DRM_XE_MEM_REGION_CLASS_VRAM)
 
 unsigned int xe_number_gt(int fd);
 uint64_t all_memory_regions(int fd);
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 5fa4d4610..780125f92 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -134,12 +134,12 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
 		ops = &bind_ops[i];
 
 		if (obj->bind_op == XE_OBJECT_BIND) {
-			op = XE_VM_BIND_OP_MAP;
-			flags = XE_VM_BIND_FLAG_ASYNC;
+			op = DRM_XE_VM_BIND_OP_MAP;
+			flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 			ops->obj = obj->handle;
 		} else {
-			op = XE_VM_BIND_OP_UNMAP;
-			flags = XE_VM_BIND_FLAG_ASYNC;
+			op = DRM_XE_VM_BIND_OP_UNMAP;
+			flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 		}
 
 		ops->op = op;
@@ -211,7 +211,7 @@ void xe_bind_unbind_async(int xe, uint32_t vm, uint32_t bind_engine,
 		  tabsyncs[0].handle, tabsyncs[1].handle);
 
 	if (num_binds == 1) {
-		if ((bind_ops[0].op & 0xffff) == XE_VM_BIND_OP_MAP)
+		if ((bind_ops[0].op & 0xffff) == DRM_XE_VM_BIND_OP_MAP)
 			xe_vm_bind_async(xe, vm, bind_engine, bind_ops[0].obj, 0,
 					 bind_ops[0].addr, bind_ops[0].range,
 					 syncs, num_syncs);
diff --git a/lib/xe/xe_util.h b/lib/xe/xe_util.h
index e97d236b8..21b312071 100644
--- a/lib/xe/xe_util.h
+++ b/lib/xe/xe_util.h
@@ -13,9 +13,9 @@
 #include <xe_drm.h>
 
 #define XE_IS_SYSMEM_MEMORY_REGION(fd, region) \
-	(xe_region_class(fd, region) == XE_MEM_REGION_CLASS_SYSMEM)
+	(xe_region_class(fd, region) == DRM_XE_MEM_REGION_CLASS_SYSMEM)
 #define XE_IS_VRAM_MEMORY_REGION(fd, region) \
-	(xe_region_class(fd, region) == XE_MEM_REGION_CLASS_VRAM)
+	(xe_region_class(fd, region) == DRM_XE_MEM_REGION_CLASS_VRAM)
 
 struct igt_collection *
 __xe_get_memory_region_set(int xe, uint32_t *mem_regions_type, int num_regions);
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index 300b734c8..fa53c0279 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -634,8 +634,8 @@ igt_main_args("bf:pst:W:H:", NULL, help_str, opt_handler, NULL)
 		xe_device_get(xe);
 
 		set = xe_get_memory_region_set(xe,
-					       XE_MEM_REGION_CLASS_SYSMEM,
-					       XE_MEM_REGION_CLASS_VRAM);
+					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
+					       DRM_XE_MEM_REGION_CLASS_VRAM);
 	}
 
 	igt_describe("Check block-copy uncompressed blit");
diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c
index e5bbb364c..e7c8f9585 100644
--- a/tests/intel/xe_debugfs.c
+++ b/tests/intel/xe_debugfs.c
@@ -91,20 +91,20 @@ test_base(int fd, struct drm_xe_query_config *config)
 
 	igt_assert(config);
 	sprintf(reference, "devid 0x%llx",
-			config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
+			config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
 	sprintf(reference, "revid %lld",
-			config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
+			config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
-	sprintf(reference, "is_dgfx %s", config->info[XE_QUERY_CONFIG_FLAGS] &
-		XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
+	sprintf(reference, "is_dgfx %s", config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
+		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
 
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
 	if (!AT_LEAST_GEN(devid, 20)) {
-		switch (config->info[XE_QUERY_CONFIG_VA_BITS]) {
+		switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) {
 		case 48:
 			val = 3;
 			break;
@@ -121,13 +121,13 @@ test_base(int fd, struct drm_xe_query_config *config)
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
 	igt_assert(igt_debugfs_exists(fd, "gt0", O_RDONLY));
-	if (config->info[XE_QUERY_CONFIG_GT_COUNT] > 1)
+	if (config->info[DRM_XE_QUERY_CONFIG_GT_COUNT] > 1)
 		igt_assert(igt_debugfs_exists(fd, "gt1", O_RDONLY));
 
 	igt_assert(igt_debugfs_exists(fd, "gtt_mm", O_RDONLY));
 	igt_debugfs_dump(fd, "gtt_mm");
 
-	if (config->info[XE_QUERY_CONFIG_FLAGS] & XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
+	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
 		igt_assert(igt_debugfs_exists(fd, "vram0_mm", O_RDONLY));
 		igt_debugfs_dump(fd, "vram0_mm");
 	}
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index 8dbce524d..232ddde8e 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -138,7 +138,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 		bo_flags = visible_vram_if_possible(fd, eci->gt_id);
 		if (flags & DEFER_ALLOC)
-			bo_flags |= XE_GEM_CREATE_FLAG_DEFER_BACKING;
+			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
 
 		bo = xe_bo_create_flags(fd, n_vm == 1 ? vm[0] : 0,
 					bo_size, bo_flags);
@@ -172,9 +172,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & SPARSE)
 			__xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
 					    0, 0, sparse_addr[i], bo_size,
-					    XE_VM_BIND_OP_MAP,
-					    XE_VM_BIND_FLAG_ASYNC |
-					    XE_VM_BIND_FLAG_NULL, sync,
+					    DRM_XE_VM_BIND_OP_MAP,
+					    DRM_XE_VM_BIND_FLAG_ASYNC |
+					    DRM_XE_VM_BIND_FLAG_NULL, sync,
 					    1, 0, 0);
 	}
 
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 92d8690a1..92359d1a7 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -175,12 +175,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (bo)
 			xe_vm_bind_async_flags(fd, vm, bind_exec_queues[0], bo, 0,
 					       addr, bo_size, sync, 1,
-					       XE_VM_BIND_FLAG_IMMEDIATE);
+					       DRM_XE_VM_BIND_FLAG_IMMEDIATE);
 		else
 			xe_vm_bind_userptr_async_flags(fd, vm, bind_exec_queues[0],
 						       to_user_pointer(data),
 						       addr, bo_size, sync, 1,
-						       XE_VM_BIND_FLAG_IMMEDIATE);
+						       DRM_XE_VM_BIND_FLAG_IMMEDIATE);
 	} else {
 		if (bo)
 			xe_vm_bind_async(fd, vm, bind_exec_queues[0], bo, 0, addr,
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 44248776b..39647b736 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -187,14 +187,14 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property job_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
 			.value = 50,
 		};
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		struct drm_xe_exec_queue_create create = {
@@ -374,14 +374,14 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property job_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
 			.value = 50,
 		};
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		uint64_t ext = 0;
@@ -542,8 +542,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		uint64_t ext = 0;
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index bb16bdd88..ccbfc4723 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -520,8 +520,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		uint64_t ext = to_user_pointer(&preempt_timeout);
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index 2f349b16d..df774130f 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -358,8 +358,8 @@ igt_main_args("b:pst:W:H:", NULL, help_str, opt_handler, NULL)
 		xe_device_get(xe);
 
 		set = xe_get_memory_region_set(xe,
-					       XE_MEM_REGION_CLASS_SYSMEM,
-					       XE_MEM_REGION_CLASS_VRAM);
+					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
+					       DRM_XE_MEM_REGION_CLASS_VRAM);
 	}
 
 	igt_describe("Check fast-copy blit");
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index b2976ec84..d07ed4535 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -400,7 +400,7 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
 	for (i = 0; i < mem_usage->num_regions; i++) {
-		if (mem_usage->regions[i].mem_class == XE_MEM_REGION_CLASS_VRAM) {
+		if (mem_usage->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
 			vram_used_mb +=  (mem_usage->regions[i].used / (1024 * 1024));
 			vram_total_mb += (mem_usage->regions[i].total_size / (1024 * 1024));
 		}
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 0dabe8d06..d667823e8 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -163,9 +163,9 @@ void process_hwconfig(void *data, uint32_t len)
 const char *get_topo_name(int value)
 {
 	switch(value) {
-	case XE_TOPO_DSS_GEOMETRY: return "DSS_GEOMETRY";
-	case XE_TOPO_DSS_COMPUTE: return "DSS_COMPUTE";
-	case XE_TOPO_EU_PER_DSS: return "EU_PER_DSS";
+	case DRM_XE_TOPO_DSS_GEOMETRY: return "DSS_GEOMETRY";
+	case DRM_XE_TOPO_DSS_COMPUTE: return "DSS_COMPUTE";
+	case DRM_XE_TOPO_EU_PER_DSS: return "EU_PER_DSS";
 	}
 	return "??";
 }
@@ -221,9 +221,9 @@ test_query_mem_usage(int fd)
 	for (i = 0; i < mem_usage->num_regions; i++) {
 		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
 			mem_usage->regions[i].mem_class ==
-			XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
+			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
 			:mem_usage->regions[i].mem_class ==
-			XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
+			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
 			mem_usage->regions[i].used,
 			mem_usage->regions[i].total_size
 		);
@@ -358,27 +358,27 @@ test_query_config(int fd)
 	query.data = to_user_pointer(config);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	igt_info("XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
+	igt_info("DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
+		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
 	igt_info("  REV_ID\t\t\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
+		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
 	igt_info("  DEVICE_ID\t\t\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
-	igt_info("XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_FLAGS]);
-	igt_info("  XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
-		config->info[XE_QUERY_CONFIG_FLAGS] &
-		XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
-	igt_info("XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_MIN_ALIGNMENT]);
-	igt_info("XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
-		config->info[XE_QUERY_CONFIG_VA_BITS]);
-	igt_info("XE_QUERY_CONFIG_GT_COUNT\t\t%llu\n",
-		config->info[XE_QUERY_CONFIG_GT_COUNT]);
-	igt_info("XE_QUERY_CONFIG_MEM_REGION_COUNT\t%llu\n",
-		config->info[XE_QUERY_CONFIG_MEM_REGION_COUNT]);
-	igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
-		config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
+		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
+	igt_info("DRM_XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
+		config->info[DRM_XE_QUERY_CONFIG_FLAGS]);
+	igt_info("  DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
+		config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
+		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
+	igt_info("DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
+		config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT]);
+	igt_info("DRM_XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
+		config->info[DRM_XE_QUERY_CONFIG_VA_BITS]);
+	igt_info("DRM_XE_QUERY_CONFIG_GT_COUNT\t\t%llu\n",
+		config->info[DRM_XE_QUERY_CONFIG_GT_COUNT]);
+	igt_info("DRM_XE_QUERY_CONFIG_MEM_REGION_COUNT\t%llu\n",
+		config->info[DRM_XE_QUERY_CONFIG_MEM_REGION_COUNT]);
+	igt_info("DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
+		config->info[DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
 	dump_hex_debug(config, query.size);
 
 	free(config);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index dd3302337..2b62e7260 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -316,7 +316,7 @@ static void userptr_invalid(int fd)
 	vm = xe_vm_create(fd, 0, 0);
 	munmap(data, size);
 	ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
-			   size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
+			   size, DRM_XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
 	igt_assert(ret == -EFAULT);
 
 	xe_vm_destroy(fd, vm);
@@ -752,8 +752,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 		bind_ops[i].range = bo_size;
 		bind_ops[i].addr = addr;
 		bind_ops[i].tile_mask = 0x1 << eci->gt_id;
-		bind_ops[i].op = XE_VM_BIND_OP_MAP;
-		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
+		bind_ops[i].op = DRM_XE_VM_BIND_OP_MAP;
+		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 		bind_ops[i].region = 0;
 		bind_ops[i].reserved[0] = 0;
 		bind_ops[i].reserved[1] = 0;
@@ -797,8 +797,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 
 	for (i = 0; i < n_execs; ++i) {
 		bind_ops[i].obj = 0;
-		bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
-		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
+		bind_ops[i].op = DRM_XE_VM_BIND_OP_UNMAP;
+		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 	}
 
 	syncobj_reset(fd, &sync[0].handle, 1);
diff --git a/tests/kms_flip.c b/tests/kms_flip.c
old mode 100755
new mode 100644
-- 
2.34.1


* [igt-dev] [PATCH v3 24/24] drm-uapi/xe: Align with uAPI update to add _FLAG to constants usable for flags
From: Francois Dugast @ 2023-09-26 13:00 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Add _FLAG to uAPI constants usable for flags")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h          |  30 +++----
 lib/igt_fb.c                       |   2 +-
 lib/intel_batchbuffer.c            |  12 +--
 lib/intel_compute.c                |   6 +-
 lib/intel_ctx.c                    |   4 +-
 lib/xe/xe_ioctl.c                  |   6 +-
 lib/xe/xe_query.c                  |   4 +-
 lib/xe/xe_spin.c                   |   4 +-
 lib/xe/xe_util.c                   |   4 +-
 tests/intel/xe_ccs.c               |   4 +-
 tests/intel/xe_create.c            |   6 +-
 tests/intel/xe_dma_buf_sync.c      |   4 +-
 tests/intel/xe_drm_fdinfo.c        |  18 ++---
 tests/intel/xe_evict.c             |  24 +++---
 tests/intel/xe_exec_balancer.c     |  34 ++++----
 tests/intel/xe_exec_basic.c        |  16 ++--
 tests/intel/xe_exec_compute_mode.c |   6 +-
 tests/intel/xe_exec_fault_mode.c   |  14 ++--
 tests/intel/xe_exec_reset.c        |  42 +++++-----
 tests/intel/xe_exec_store.c        |  16 ++--
 tests/intel/xe_exec_threads.c      |  44 +++++------
 tests/intel/xe_exercise_blt.c      |   2 +-
 tests/intel/xe_guc_pc.c            |  12 +--
 tests/intel/xe_huc_copy.c          |   4 +-
 tests/intel/xe_intel_bb.c          |   2 +-
 tests/intel/xe_noexec_ping_pong.c  |   2 +-
 tests/intel/xe_pm.c                |  12 +--
 tests/intel/xe_pm_residency.c      |   2 +-
 tests/intel/xe_spin_batch.c        |   2 +-
 tests/intel/xe_vm.c                | 122 ++++++++++++++---------------
 tests/intel/xe_waitfence.c         |   4 +-
 31 files changed, 232 insertions(+), 232 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index c4cf9d56f..11cea21fc 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -663,10 +663,10 @@ struct drm_xe_vm_create {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_VM_CREATE_SCRATCH_PAGE	(0x1 << 0)
-#define DRM_XE_VM_CREATE_COMPUTE_MODE	(0x1 << 1)
-#define DRM_XE_VM_CREATE_ASYNC_DEFAULT	(0x1 << 2)
-#define DRM_XE_VM_CREATE_FAULT_MODE	(0x1 << 3)
+#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(0x1 << 0)
+#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(0x1 << 1)
+#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(0x1 << 2)
+#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(0x1 << 3)
 	/** @flags: Flags */
 	__u32 flags;
 
@@ -898,11 +898,11 @@ struct drm_xe_sync {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_SYNC_SYNCOBJ		0x0
-#define DRM_XE_SYNC_TIMELINE_SYNCOBJ	0x1
-#define DRM_XE_SYNC_DMA_BUF		0x2
-#define DRM_XE_SYNC_USER_FENCE		0x3
-#define DRM_XE_SYNC_SIGNAL		0x10
+#define DRM_XE_SYNC_FLAG_SYNCOBJ		0x0
+#define DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ	0x1
+#define DRM_XE_SYNC_FLAG_DMA_BUF		0x2
+#define DRM_XE_SYNC_FLAG_USER_FENCE		0x3
+#define DRM_XE_SYNC_FLAG_SIGNAL		0x10
 	__u32 flags;
 
 	/** @pad: MBZ */
@@ -988,8 +988,8 @@ struct drm_xe_wait_user_fence {
 	/** @op: wait operation (type of comparison) */
 	__u16 op;
 
-#define DRM_XE_UFENCE_WAIT_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
-#define DRM_XE_UFENCE_WAIT_ABSTIME	(1 << 1)
+#define DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
+#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 1)
 	/** @flags: wait flags */
 	__u16 flags;
 
@@ -1007,10 +1007,10 @@ struct drm_xe_wait_user_fence {
 	__u64 mask;
 	/**
 	 * @timeout: how long to wait before bailing, value in nanoseconds.
-	 * Without DRM_XE_UFENCE_WAIT_ABSTIME flag set (relative timeout)
+	 * Without DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag set (relative timeout)
 	 * it contains timeout expressed in nanoseconds to wait (fence will
 	 * expire at now() + timeout).
-	 * When DRM_XE_UFENCE_WAIT_ABSTIME flat is set (absolute timeout) wait
+	 * When DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag is set (absolute timeout) wait
 	 * will end at timeout (uses system MONOTONIC_CLOCK).
 	 * Passing negative timeout leads to neverending wait.
 	 *
@@ -1023,13 +1023,13 @@ struct drm_xe_wait_user_fence {
 
 	/**
 	 * @num_engines: number of engine instances to wait on, must be zero
-	 * when DRM_XE_UFENCE_WAIT_SOFT_OP set
+	 * when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
 	 */
 	__u64 num_engines;
 
 	/**
 	 * @instances: user pointer to array of drm_xe_engine_class_instance to
-	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_SOFT_OP set
+	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
 	 */
 	__u64 instances;
 
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index 34934855a..d02dd7a0d 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
 							  &bb_size,
 							  mem_region) == 0);
 	} else if (is_xe) {
-		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
 		xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
 		mem_region = vram_if_possible(dst_fb->fd, 0);
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index bea03ff39..0b8aca2ca 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 
 		if (!vm) {
 			igt_assert_f(!ctx, "No vm provided for engine");
-			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		}
 
 		ibb->uses_full_ppgtt = true;
@@ -1315,8 +1315,8 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 static void __unbind_xe_objects(struct intel_bb *ibb)
 {
 	struct drm_xe_sync syncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	int ret;
 
@@ -2302,8 +2302,8 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
 	uint32_t engine_id;
 	struct drm_xe_sync syncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_vm_bind_op *bind_ops;
 	void *map;
@@ -2371,7 +2371,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 	}
 	ibb->xe_bound = true;
 
-	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+	syncs[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
 	syncs[1].handle = ibb->engine_syncobj;
 
diff --git a/lib/intel_compute.c b/lib/intel_compute.c
index 1ae33cdfc..e27043545 100644
--- a/lib/intel_compute.c
+++ b/lib/intel_compute.c
@@ -79,7 +79,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
 		else
 			engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
 
-		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
 								 engine_class);
 	}
@@ -105,7 +105,7 @@ static void bo_execenv_bind(struct bo_execenv *execenv,
 		uint64_t alignment = xe_get_default_alignment(fd);
 		struct drm_xe_sync sync = { 0 };
 
-		sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+		sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 		sync.handle = syncobj_create(fd, 0);
 
 		for (int i = 0; i < entries; i++) {
@@ -161,7 +161,7 @@ static void bo_execenv_unbind(struct bo_execenv *execenv,
 		uint32_t vm = execenv->vm;
 		struct drm_xe_sync sync = { 0 };
 
-		sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+		sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 		sync.handle = syncobj_create(fd, 0);
 
 		for (int i = 0; i < entries; i++) {
diff --git a/lib/intel_ctx.c b/lib/intel_ctx.c
index f927b7df8..f82564572 100644
--- a/lib/intel_ctx.c
+++ b/lib/intel_ctx.c
@@ -423,8 +423,8 @@ intel_ctx_t *intel_ctx_xe(int fd, uint32_t vm, uint32_t exec_queue,
 int __intel_ctx_xe_exec(const intel_ctx_t *ctx, uint64_t ahnd, uint64_t bb_offset)
 {
 	struct drm_xe_sync syncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.exec_queue_id = ctx->exec_queue,
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index da387f5fb..5c022db05 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -399,7 +399,7 @@ void xe_exec_sync(int fd, uint32_t exec_queue, uint64_t addr,
 void xe_exec_wait(int fd, uint32_t exec_queue, uint64_t addr)
 {
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 		.handle = syncobj_create(fd, 0),
 	};
 
@@ -416,7 +416,7 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
 		.op = DRM_XE_UFENCE_WAIT_EQ,
-		.flags = !eci ? DRM_XE_UFENCE_WAIT_SOFT_OP : 0,
+		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP : 0,
 		.value = value,
 		.mask = DRM_XE_UFENCE_WAIT_U64,
 		.timeout = timeout,
@@ -448,7 +448,7 @@ int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
 		.op = DRM_XE_UFENCE_WAIT_EQ,
-		.flags = !eci ? DRM_XE_UFENCE_WAIT_SOFT_OP | DRM_XE_UFENCE_WAIT_ABSTIME : 0,
+		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
 		.value = value,
 		.mask = DRM_XE_UFENCE_WAIT_U64,
 		.timeout = timeout,
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 75f015c2e..e2a55d4aa 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -314,8 +314,8 @@ bool xe_supports_faults(int fd)
 	bool supports_faults;
 
 	struct drm_xe_vm_create create = {
-		.flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			 DRM_XE_VM_CREATE_FAULT_MODE,
+		.flags = DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			 DRM_XE_VM_CREATE_FLAG_FAULT_MODE,
 	};
 
 	supports_faults = !igt_ioctl(fd, DRM_IOCTL_XE_VM_CREATE, &create);
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index 986d63cb4..21933a6f1 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -190,7 +190,7 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
 	struct igt_spin *spin;
 	struct xe_spin *xe_spin;
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -287,7 +287,7 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
 	uint32_t vm, bo, exec_queue, syncobj;
 	struct xe_spin *spin;
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 780125f92..2635edf72 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -179,8 +179,8 @@ void xe_bind_unbind_async(int xe, uint32_t vm, uint32_t bind_engine,
 {
 	struct drm_xe_vm_bind_op *bind_ops;
 	struct drm_xe_sync tabsyncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ, .handle = sync_in },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, .handle = sync_out },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, .handle = sync_in },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, .handle = sync_out },
 	};
 	struct drm_xe_sync *syncs;
 	uint32_t num_binds = 0;
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index fa53c0279..1d5b286f3 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -343,7 +343,7 @@ static void block_copy(int xe,
 		uint32_t vm, exec_queue;
 
 		if (config->new_ctx) {
-			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 			surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
 			surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
@@ -550,7 +550,7 @@ static void block_copy_test(int xe,
 				      copyfns[copy_function].suffix) {
 				uint32_t sync_bind, sync_out;
 
-				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 				exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 				sync_bind = syncobj_create(xe, 0);
 				sync_out = syncobj_create(xe, 0);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index d99bd51cf..4242e1a67 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
 	uint32_t handle;
 	int ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	xe_for_each_mem_region(fd, memreg, region) {
 		memregion = xe_mem_region(fd, region);
@@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 
 	fd = drm_reopen_driver(fd);
 	num_engines = xe_number_hw_engines(fd);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
 	igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
@@ -199,7 +199,7 @@ static void create_massive_size(int fd)
 	uint32_t handle;
 	int ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	xe_for_each_mem_region(fd, memreg, region) {
 		ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
index 5c401b6dd..0d835dddb 100644
--- a/tests/intel/xe_dma_buf_sync.c
+++ b/tests/intel/xe_dma_buf_sync.c
@@ -144,8 +144,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
 		uint64_t sdi_addr = addr + sdi_offset;
 		uint64_t spin_offset = (char *)&data[i]->spin - (char *)data[i];
 		struct drm_xe_sync sync[2] = {
-			{ .flags = DRM_XE_SYNC_SYNCOBJ, },
-			{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+			{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, },
+			{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 		};
 		struct drm_xe_exec exec = {
 			.num_batch_buffer = 1,
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 64168ed19..4ef30cf49 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -48,8 +48,8 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 	struct xe_spin_opts spin_opts = { .preempt = true };
 	int i, b, ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -110,20 +110,20 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 				xe_spin_init(&data[i].spin, &spin_opts);
 				exec.exec_queue_id = exec_queues[e];
 				exec.address = spin_opts.addr;
-				sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-				sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+				sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+				sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 				sync[1].handle = syncobjs[e];
 				xe_exec(fd, &exec);
 				xe_spin_wait_started(&data[i].spin);
 
 				addr += bo_size;
-				sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+				sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 				sync[1].handle = syncobjs[e];
 				xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 						 bo_size, sync + 1, 1);
 				addr += bo_size;
 			} else {
-				sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+				sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 				xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 						 bo_size, sync, 1);
 			}
@@ -149,7 +149,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 
 		syncobj_destroy(fd, sync[0].handle);
 		sync[0].handle = syncobj_create(fd, 0);
-		sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		xe_vm_unbind_all_async(fd, vm, 0, bo, sync, 1);
 		igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -221,7 +221,7 @@ static void test_total_resident(int xe)
 	uint64_t addr = 0x1a0000;
 	int ret;
 
-	vm = xe_vm_create(xe, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0);
+	vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
 
 	xe_for_each_mem_region(xe, memreg, region) {
 		uint64_t pre_size;
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index eec001218..53aa402a3 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -38,8 +38,8 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t bind_exec_queues[3] = { 0, 0, 0 };
 	uint64_t addr = 0x100000000, base_addr = 0x100000000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -63,12 +63,12 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	if (flags & BIND_EXEC_QUEUE)
 		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
 	if (flags & MULTI_VM) {
-		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
-		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
+		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		if (flags & BIND_EXEC_QUEUE) {
 			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
 									0, true);
@@ -121,7 +121,7 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 				 ALIGN(sizeof(*data) * n_execs, 0x1000));
 
 		if (i < n_execs / 2) {
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			sync[0].handle = syncobj_create(fd, 0);
 			if (flags & MULTI_VM) {
 				xe_vm_bind_async(fd, vm3, bind_exec_queues[2], __bo,
@@ -149,7 +149,7 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i >= n_exec_queues)
 			syncobj_reset(fd, &syncobjs[e], 1);
 		sync[1].handle = syncobjs[e];
@@ -216,7 +216,7 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x100000000, base_addr = 0x100000000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 		  .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -242,13 +242,13 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	if (flags & BIND_EXEC_QUEUE)
 		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
 	if (flags & MULTI_VM) {
-		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-				   DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+				   DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 		if (flags & BIND_EXEC_QUEUE)
 			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
 									0, true);
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 3ca3de881..8a0165b8c 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -37,8 +37,8 @@ static void test_all_active(int fd, int gt, int class)
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -93,8 +93,8 @@ static void test_all_active(int fd, int gt, int class)
 	for (i = 0; i < num_placements; i++) {
 		spin_opts.addr = addr + (char *)&data[i].spin - (char *)data;
 		xe_spin_init(&data[i].spin, &spin_opts);
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[i];
 
 		exec.exec_queue_id = exec_queues[i];
@@ -110,7 +110,7 @@ static void test_all_active(int fd, int gt, int class)
 	}
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -176,8 +176,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_syncs = 2,
@@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -269,8 +269,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -281,11 +281,11 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		xe_exec(fd, &exec);
 
 		if (flags & REBIND && i + 1 != n_execs) {
-			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
 					   sync + 1, 1);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr += bo_size;
 			if (bo)
 				xe_vm_bind_async(fd, vm, 0, bo, 0, addr,
@@ -329,7 +329,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -399,7 +399,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -433,8 +433,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index 232ddde8e..a401f0165 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -81,8 +81,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	  int n_exec_queues, int n_execs, int n_vm, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
 
 	for (i = 0; i < n_vm; ++i)
-		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -199,9 +199,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[0].handle = bind_syncobjs[cur_vm];
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -213,11 +213,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & REBIND && i + 1 != n_execs) {
 			uint32_t __vm = vm[cur_vm];
 
-			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			xe_vm_unbind_async(fd, __vm, bind_exec_queues[e], 0,
 					   __addr, bo_size, sync + 1, 1);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr[i % n_vm] += bo_size;
 			__addr = addr[i % n_vm];
 			if (bo)
@@ -266,7 +266,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(syncobj_wait(fd, &bind_syncobjs[i], 1, INT64_MAX, 0,
 					NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	for (i = 0; i < n_vm; ++i) {
 		syncobj_reset(fd, &sync[0].handle, 1);
 		xe_vm_unbind_async(fd, vm[i], bind_exec_queues[i], 0, addr[i],
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index b0a677dca..20d3fc6e8 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -88,7 +88,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -113,8 +113,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 92359d1a7..b66f31419 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -8,7 +8,7 @@
  * Category: Hardware building block
  * Sub-category: execbuf
  * Functionality: fault mode
- * GPU requirements: GPU needs support for DRM_XE_VM_CREATE_FAULT_MODE
+ * GPU requirements: GPU needs support for DRM_XE_VM_CREATE_FLAG_FAULT_MODE
  */
 
 #include <fcntl.h>
@@ -107,7 +107,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -131,8 +131,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_FAULT_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -347,7 +347,7 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000, addr_wait;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -375,8 +375,8 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t *ptr;
 	int i, b, wait_idx = 0;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_FAULT_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
 	bo_size = sizeof(*data) * n_atomic;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 39647b736..195e62911 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -30,8 +30,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	struct xe_spin *spin;
 	struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*spin);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -62,8 +62,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 
 	xe_spin_init(spin, &spin_opts);
 
-	sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	sync[1].handle = syncobj;
 
 	exec.exec_queue_id = exec_queue;
@@ -78,7 +78,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -140,8 +140,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_syncs = 2,
@@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -257,8 +257,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		for (j = 0; j < num_placements && flags & PARALLEL; ++j)
 			batches[j] = exec_addr;
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -288,7 +288,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -336,8 +336,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -425,8 +425,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 			exec_addr = batch_addr;
 		}
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -455,7 +455,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -501,7 +501,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -528,8 +528,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 90684b8cb..46caa2e0c 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -55,7 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value)
 static void store(int fd)
 {
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -75,7 +75,7 @@ static void store(int fd)
 	syncobj = syncobj_create(fd, 0);
 	sync.handle = syncobj;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -91,7 +91,7 @@ static void store(int fd)
 	exec_queue = xe_exec_queue_create(fd, vm, hw_engine, 0);
 	exec.exec_queue_id = exec_queue;
 	exec.address = data->addr;
-	sync.flags &= DRM_XE_SYNC_SIGNAL;
+	sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_exec(fd, &exec);
 
 	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
@@ -112,8 +112,8 @@ static void store(int fd)
 static void store_all(int fd, int gt, int class)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, }
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, }
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -132,7 +132,7 @@ static void store_all(int fd, int gt, int class)
 	struct drm_xe_engine_class_instance *hwe;
 	int i, num_placements = 0;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -169,8 +169,8 @@ static void store_all(int fd, int gt, int class)
 	for (i = 0; i < num_placements; i++) {
 
 		store_dword_batch(data, addr, i);
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[i];
 
 		exec.exec_queue_id = exec_queues[i];
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index ccbfc4723..1c2b66f55 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -47,8 +47,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 	      int class, int n_exec_queues, int n_execs, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES];
 	struct drm_xe_exec exec = {
@@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		owns_vm = true;
 	}
 
@@ -125,7 +125,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 					&create), 0);
 		exec_queues[i] = create.exec_queue_id;
 		syncobjs[i] = syncobj_create(fd, 0);
-		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
+		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		sync_all[i].handle = syncobjs[i];
 	};
 	exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
@@ -158,8 +158,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -173,7 +173,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
 					   sync_all, n_exec_queues);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr += bo_size;
 			if (bo)
 				xe_vm_bind_async(fd, vm, 0, bo, 0, addr,
@@ -221,7 +221,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -254,7 +254,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 {
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -285,8 +285,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-				  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+				  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 		owns_vm = true;
 	}
 
@@ -457,8 +457,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		 int n_execs, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES];
 	struct drm_xe_exec exec = {
@@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		owns_vm = true;
 	}
 
@@ -536,7 +536,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		else
 			bind_exec_queues[i] = 0;
 		syncobjs[i] = syncobj_create(fd, 0);
-		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
+		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		sync_all[i].handle = syncobjs[i];
 	};
 
@@ -576,8 +576,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			exec_addr = batch_addr;
 		}
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -599,7 +599,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 					   0, addr, bo_size,
 					   sync_all, n_exec_queues);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr += bo_size;
 			if (bo)
 				xe_vm_bind_async(fd, vm, bind_exec_queues[e],
@@ -649,7 +649,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, bind_exec_queues[0], 0, addr,
 			   bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
@@ -1009,11 +1009,11 @@ static void threads(int fd, int flags)
 
 	if (flags & SHARED_VM) {
 		vm_legacy_mode = xe_vm_create(fd,
-					      DRM_XE_VM_CREATE_ASYNC_DEFAULT,
+					      DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT,
 					      0);
 		vm_compute_mode = xe_vm_create(fd,
-					       DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-					       DRM_XE_VM_CREATE_COMPUTE_MODE,
+					       DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+					       DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE,
 					       0);
 	}
 
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index df774130f..fd310138d 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
 			region1 = igt_collection_get_value(regions, 0);
 			region2 = igt_collection_get_value(regions, 1);
 
-			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 			ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
 
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 3f2c4ae23..fa2f20cca 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -37,8 +37,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 	igt_assert(n_execs > 0);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -95,8 +95,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -114,7 +114,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, bind_exec_queues[0], 0, addr,
 			   bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
index c71ff74a1..2693a392c 100644
--- a/tests/intel/xe_huc_copy.c
+++ b/tests/intel/xe_huc_copy.c
@@ -117,9 +117,9 @@ test_huc_copy(int fd)
 		{ .addr = ADDR_BATCH, .size = SIZE_BATCH }, // batch
 	};
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_VIDEO_DECODE);
-	sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+	sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 	sync.handle = syncobj_create(fd, 0);
 
 	for(int i = 0; i < BO_DICT_ENTRIES; i++) {
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index 26e4dcc85..d66996cd5 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
 	intel_bb_reset(ibb, true);
 
 	if (new_context) {
-		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 88b22ed11..9c2a70ff3 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -64,7 +64,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 	 * stats.
 	 */
 	for (i = 0; i < NUM_VMS; ++i) {
-		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 		for (j = 0; j < NUM_BOS; ++j) {
 			igt_debug("Creating bo size %lu for vm %u\n",
 				  (unsigned long) bo_size,
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index d07ed4535..18afb68b0 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -231,8 +231,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	if (check_rpm)
 		igt_assert(in_d3(device, d_state));
 
-	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	if (check_rpm)
 		igt_assert(out_of_d3(device, d_state));
@@ -304,8 +304,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -331,7 +331,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	if (check_rpm && runtime_usage_available(device.pci_xe))
 		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(device.fd_xe, vm, bind_exec_queues[0], 0, addr,
 			   bo_size, sync, 1);
 	igt_assert(syncobj_wait(device.fd_xe, &sync[0].handle, 1, INT64_MAX, 0,
diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
index 8e9197fae..c87eeef3c 100644
--- a/tests/intel/xe_pm_residency.c
+++ b/tests/intel/xe_pm_residency.c
@@ -87,7 +87,7 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
 	} *data;
 
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 
 	struct drm_xe_exec exec = {
diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
index eb5d6aba8..6ab604d9b 100644
--- a/tests/intel/xe_spin_batch.c
+++ b/tests/intel/xe_spin_batch.c
@@ -145,7 +145,7 @@ static void xe_spin_fixed_duration(int fd)
 {
 	struct drm_xe_sync sync = {
 		.handle = syncobj_create(fd, 0),
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 2b62e7260..a417a4f30 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -89,7 +89,7 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
 static void
 test_scratch(int fd)
 {
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
 	uint64_t addrs[] = {
 		0x000000000000ull,
 		0x7ffdb86402d8ull,
@@ -124,7 +124,7 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
 		uint64_t bind_addr = addrs[i] & ~(uint64_t)(bo_size - 1);
 
 		if (!vm)
-			vms[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE,
+			vms[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE,
 					      0);
 		igt_debug("Binding addr %"PRIx64"\n", addrs[i]);
 		xe_vm_bind_sync(fd, vm ? vm : vms[i], bo, 0,
@@ -214,7 +214,7 @@ test_bind_once(int fd)
 	uint64_t addr = 0x7ffdb86402d8ull;
 
 	__test_bind_one_bo(fd,
-			   xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0),
+			   xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0),
 			   1, &addr);
 }
 
@@ -234,7 +234,7 @@ test_bind_one_bo_many_times(int fd)
 						ARRAY_SIZE(addrs_48b);
 
 	__test_bind_one_bo(fd,
-			   xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0),
+			   xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0),
 			   addrs_size, addrs);
 }
 
@@ -272,10 +272,10 @@ static void unbind_all(int fd, int n_vmas)
 	uint32_t vm;
 	int i;
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo = xe_bo_create(fd, 0, vm, bo_size);
 
 	for (i = 0; i < n_vmas; ++i)
@@ -347,8 +347,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	uint32_t vm;
 	uint64_t addr = 0x1000 * 512;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES + 1];
 	struct drm_xe_exec exec = {
@@ -372,7 +372,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	data = malloc(sizeof(*data) * n_bo);
 	igt_assert(data);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(struct shared_pte_page_data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -387,7 +387,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	for (i = 0; i < n_exec_queues; i++) {
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		syncobjs[i] = syncobj_create(fd, 0);
-		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
+		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		sync_all[i].handle = syncobjs[i];
 	};
 
@@ -412,8 +412,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		data[i]->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i]->batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -425,7 +425,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		if (i % 2)
 			continue;
 
-		sync_all[n_execs].flags = DRM_XE_SYNC_SIGNAL;
+		sync_all[n_execs].flags = DRM_XE_SYNC_FLAG_SIGNAL;
 		sync_all[n_execs].handle = sync[0].handle;
 		xe_vm_unbind_async(fd, vm, 0, 0, addr + i * addr_stride,
 				   bo_size, sync_all, n_execs + 1);
@@ -461,8 +461,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		data[i]->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i]->batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -475,7 +475,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		if (!(i % 2))
 			continue;
 
-		sync_all[n_execs].flags = DRM_XE_SYNC_SIGNAL;
+		sync_all[n_execs].flags = DRM_XE_SYNC_FLAG_SIGNAL;
 		sync_all[n_execs].handle = sync[0].handle;
 		xe_vm_unbind_async(fd, vm, 0, 0, addr + i * addr_stride,
 				   bo_size, sync_all, n_execs + 1);
@@ -530,8 +530,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -553,7 +553,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	struct xe_spin_opts spin_opts = { .preempt = true };
 	int i, b;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -587,22 +587,22 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 			xe_spin_init(&data[i].spin, &spin_opts);
 			exec.exec_queue_id = exec_queues[e];
 			exec.address = spin_opts.addr;
-			sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-			sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+			sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			sync[1].handle = syncobjs[e];
 			xe_exec(fd, &exec);
 			xe_spin_wait_started(&data[i].spin);
 
 			/* Do bind to 1st exec_queue blocked on cork */
 			addr += (flags & CONFLICT) ? (0x1 << 21) : bo_size;
-			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			sync[1].handle = syncobjs[e];
 			xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 					 bo_size, sync + 1, 1);
 			addr += bo_size;
 		} else {
 			/* Do bind to 2nd exec_queue which blocks write below */
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 					 bo_size, sync, 1);
 		}
@@ -620,8 +620,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[!i ? N_EXEC_QUEUES : e];
 
 		exec.num_syncs = 2;
@@ -665,7 +665,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 
 	syncobj_destroy(fd, sync[0].handle);
 	sync[0].handle = syncobj_create(fd, 0);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_all_async(fd, vm, 0, bo, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -712,8 +712,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000, base_addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -733,7 +733,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 
 	igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -779,8 +779,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i == n_execs - 1) {
 			sync[1].handle = syncobj_create(fd, 0);
 			exec.num_syncs = 2;
@@ -802,8 +802,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 	}
 
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_bind_array(fd, vm, bind_exec_queue, bind_ops, n_execs, sync, 2);
 
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
@@ -900,8 +900,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 		 unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -927,7 +927,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 	}
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	if (flags & LARGE_BIND_FLAG_USERPTR) {
 		map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
@@ -984,8 +984,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		if (i != e)
@@ -1007,7 +1007,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	if (flags & LARGE_BIND_FLAG_SPLIT) {
 		xe_vm_unbind_async(fd, vm, 0, 0, base_addr,
 				   bo_size / 2, NULL, 0);
@@ -1060,7 +1060,7 @@ static void *hammer_thread(void *tdata)
 {
 	struct thread_data *t = tdata;
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -1184,8 +1184,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 			 unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -1219,7 +1219,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 			unbind_n_page_offset *= n_page_per_2mb;
 	}
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = page_size * bo_n_pages;
 
 	if (flags & MAP_FLAG_USERPTR) {
@@ -1287,10 +1287,10 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i)
 			syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
@@ -1302,8 +1302,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 
 	/* Unbind some of the pages */
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0,
 			   addr + unbind_n_page_offset * page_size,
 			   unbind_n_pages * page_size, sync, 2);
@@ -1344,9 +1344,9 @@ try_again_after_invalidate:
 			data->batch[b++] = MI_BATCH_BUFFER_END;
 			igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-			sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			syncobj_reset(fd, &sync[1].handle, 1);
-			sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 			exec.exec_queue_id = exec_queue;
 			exec.address = batch_addr;
@@ -1387,7 +1387,7 @@ try_again_after_invalidate:
 
 	/* Confirm unbound region can be rebound */
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	if (flags & MAP_FLAG_USERPTR)
 		xe_vm_bind_userptr_async(fd, vm, 0,
 					 addr + unbind_n_page_offset * page_size,
@@ -1415,9 +1415,9 @@ try_again_after_invalidate:
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
@@ -1485,8 +1485,8 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		     int unbind_n_pages, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -1519,7 +1519,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 			unbind_n_page_offset *= n_page_per_2mb;
 	}
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = page_size * bo_n_pages;
 
 	if (flags & MAP_FLAG_USERPTR) {
@@ -1593,10 +1593,10 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i)
 			syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
@@ -1608,8 +1608,8 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 
 	/* Bind some of the pages to different BO / userptr */
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	if (flags & MAP_FLAG_USERPTR)
 		xe_vm_bind_userptr_async(fd, vm, 0, addr + bo_size +
 					 unbind_n_page_offset * page_size,
@@ -1661,10 +1661,10 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i)
 			syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index e0116f181..05060f329 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -30,7 +30,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		    uint64_t addr, uint64_t size, uint64_t val)
 {
 	struct drm_xe_sync sync[1] = {};
-	sync[0].flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL;
+	sync[0].flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL;
 
 	sync[0].addr = to_user_pointer(&wait_fence);
 	sync[0].timeline_value = val;
@@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
 	uint32_t bo_7;
 	int64_t timeout;
 
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
 	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
 	bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - take 1 (rev2)
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (23 preceding siblings ...)
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 24/24] drm-uapi/xe: Align with uAPI update to add _FLAG to constants usable for flags Francois Dugast
@ 2023-09-26 15:03 ` Patchwork
  2023-09-26 15:14 ` [igt-dev] ✓ CI.xeBAT: " Patchwork
  2023-09-27  2:20 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  26 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2023-09-26 15:03 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: igt-dev

== Series Details ==

Series: uAPI Alignment - take 1 (rev2)
URL   : https://patchwork.freedesktop.org/series/123916/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_13681 -> IGTPW_9881
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html

Participating hosts (41 -> 39)
------------------------------

  Additional (1): bat-dg2-14 
  Missing    (3): fi-kbl-soraka fi-hsw-4770 fi-snb-2520m 

Known issues
------------

  Here are the changes found in IGTPW_9881 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_suspend@basic-s0@smem:
    - bat-dg2-9:          [PASS][1] -> [INCOMPLETE][2] ([i915#9275])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/bat-dg2-9/igt@gem_exec_suspend@basic-s0@smem.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/bat-dg2-9/igt@gem_exec_suspend@basic-s0@smem.html

  * igt@kms_frontbuffer_tracking@basic:
    - fi-bsw-nick:        [PASS][3] -> [FAIL][4] ([i915#9276])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/fi-bsw-nick/igt@kms_frontbuffer_tracking@basic.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/fi-bsw-nick/igt@kms_frontbuffer_tracking@basic.html

  * igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-d-dp-5:
    - bat-adlp-11:        [PASS][5] -> [ABORT][6] ([i915#8668])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/bat-adlp-11/igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-d-dp-5.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/bat-adlp-11/igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-d-dp-5.html

  
#### Possible fixes ####

  * igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-c-dp-5:
    - bat-adlp-11:        [ABORT][7] ([i915#8668]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/bat-adlp-11/igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-c-dp-5.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/bat-adlp-11/igt@kms_pipe_crc_basic@read-crc-frame-sequence@pipe-c-dp-5.html

  
#### Warnings ####

  * igt@i915_selftest@live@requests:
    - bat-mtlp-8:         [ABORT][9] ([i915#9262]) -> [ABORT][10] ([i915#9414])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/bat-mtlp-8/igt@i915_selftest@live@requests.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/bat-mtlp-8/igt@i915_selftest@live@requests.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4215]: https://gitlab.freedesktop.org/drm/intel/issues/4215
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5608]: https://gitlab.freedesktop.org/drm/intel/issues/5608
  [i915#7359]: https://gitlab.freedesktop.org/drm/intel/issues/7359
  [i915#8668]: https://gitlab.freedesktop.org/drm/intel/issues/8668
  [i915#8981]: https://gitlab.freedesktop.org/drm/intel/issues/8981
  [i915#9262]: https://gitlab.freedesktop.org/drm/intel/issues/9262
  [i915#9275]: https://gitlab.freedesktop.org/drm/intel/issues/9275
  [i915#9276]: https://gitlab.freedesktop.org/drm/intel/issues/9276
  [i915#9414]: https://gitlab.freedesktop.org/drm/intel/issues/9414


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7503 -> IGTPW_9881

  CI-20190529: 20190529
  CI_DRM_13681: b57407d0de043fc22b000a941a404ab103849e06 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_9881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html
  IGT_7503: 7503


Testlist changes
----------------

+igt@xe_exec_balancer@many-cm-parallel-basic
+igt@xe_exec_balancer@many-cm-parallel-rebind
+igt@xe_exec_balancer@many-cm-parallel-userptr
+igt@xe_exec_balancer@many-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@many-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@many-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@many-execqueues-cm-parallel-basic
+igt@xe_exec_balancer@many-execqueues-cm-parallel-rebind
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@no-exec-cm-parallel-basic
+igt@xe_exec_balancer@no-exec-cm-parallel-rebind
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@once-cm-parallel-basic
+igt@xe_exec_balancer@once-cm-parallel-rebind
+igt@xe_exec_balancer@once-cm-parallel-userptr
+igt@xe_exec_balancer@once-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@once-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@once-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@twice-cm-parallel-basic
+igt@xe_exec_balancer@twice-cm-parallel-rebind
+igt@xe_exec_balancer@twice-cm-parallel-userptr
+igt@xe_exec_balancer@twice-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@twice-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@twice-cm-parallel-userptr-rebind
+igt@xe_exec_threads@threads-hang-rebind-err
+igt@xe_exec_threads@threads-hang-userptr-rebind-err
+igt@xe_exec_threads@threads-rebind-err
+igt@xe_exec_threads@threads-userptr-rebind-err
+igt@xe_query@query-cs-cycles
+igt@xe_query@query-gt-list
+igt@xe_query@query-invalid-cs-cycles
-igt@xe_exec_threads@threads-hang-shared-vm-rebind-err
-igt@xe_exec_threads@threads-hang-shared-vm-userptr-rebind-err
-igt@xe_exec_threads@threads-shared-vm-rebind-err
-igt@xe_exec_threads@threads-shared-vm-userptr-rebind-err
-igt@xe_mmio@mmio-invalid
-igt@xe_mmio@mmio-timestamp
-igt@xe_query@query-gts
-igt@xe_vm@vm-async-ops-err
-igt@xe_vm@vm-async-ops-err-destroy

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [igt-dev] ✓ CI.xeBAT: success for uAPI Alignment - take 1 (rev2)
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (24 preceding siblings ...)
  2023-09-26 15:03 ` [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - take 1 (rev2) Patchwork
@ 2023-09-26 15:14 ` Patchwork
  2023-09-27  2:20 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  26 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2023-09-26 15:14 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: igt-dev

== Series Details ==

Series: uAPI Alignment - take 1 (rev2)
URL   : https://patchwork.freedesktop.org/series/123916/
State : success

== Summary ==

CI Bug Log - changes from XEIGT_7503_BAT -> XEIGTPW_9881_BAT
============================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (4 -> 3)
----------------------------

  Missing    (1): bat-pvc-2 

Known issues
------------

  Here are the changes found in XEIGTPW_9881_BAT that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_psr@primary_page_flip:
    - bat-adlp-7:         NOTRUN -> [FAIL][1] ([Intel XE#716]) +12 other tests fail
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@kms_psr@primary_page_flip.html

  * igt@xe_exec_compute_mode@twice-userptr-invalidate:
    - bat-atsm-2:         [PASS][2] -> [FAIL][3] ([Intel XE#716]) +127 other tests fail
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@xe_exec_compute_mode@twice-userptr-invalidate.html
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@xe_exec_compute_mode@twice-userptr-invalidate.html

  * igt@xe_intel_bb@create-in-region:
    - bat-dg2-oem2:       [PASS][4] -> [FAIL][5] ([Intel XE#716]) +177 other tests fail
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
    - bat-adlp-7:         [PASS][6] -> [FAIL][7] ([Intel XE#716]) +155 other tests fail
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@xe_intel_bb@create-in-region.html
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@xe_intel_bb@create-in-region.html

  
#### Warnings ####

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - bat-dg2-oem2:       [SKIP][8] ([Intel XE#623]) -> [FAIL][9] ([Intel XE#716])
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_addfb_basic@basic-y-tiled-legacy:
    - bat-dg2-oem2:       [SKIP][10] ([Intel XE#624]) -> [FAIL][11] ([Intel XE#716])
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
    - bat-adlp-7:         [FAIL][12] ([Intel XE#609]) -> [FAIL][13] ([Intel XE#716]) +2 other tests fail
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html

  * igt@kms_addfb_basic@invalid-set-prop-any:
    - bat-atsm-2:         [SKIP][14] ([i915#6077]) -> [FAIL][15] ([Intel XE#716]) +33 other tests fail
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html

  * igt@kms_addfb_basic@tile-pitch-mismatch:
    - bat-dg2-oem2:       [FAIL][16] ([Intel XE#609]) -> [FAIL][17] ([Intel XE#716]) +1 other test fail
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html

  * igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
    - bat-atsm-2:         [SKIP][18] ([Intel XE#274] / [Intel XE#539]) -> [FAIL][19] ([Intel XE#716]) +5 other tests fail
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html

  * igt@kms_dsc@dsc-basic:
    - bat-atsm-2:         [SKIP][20] ([Intel XE#539]) -> [FAIL][21] ([Intel XE#716]) +1 other test fail
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_dsc@dsc-basic.html
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_dsc@dsc-basic.html
    - bat-dg2-oem2:       [SKIP][22] ([Intel XE#423]) -> [FAIL][23] ([Intel XE#716])
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
    - bat-adlp-7:         [SKIP][24] ([Intel XE#423]) -> [FAIL][25] ([Intel XE#716])
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@kms_dsc@dsc-basic.html
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@kms_dsc@dsc-basic.html

  * igt@kms_flip@basic-flip-vs-modeset:
    - bat-atsm-2:         [SKIP][26] ([Intel XE#275]) -> [FAIL][27] ([Intel XE#716]) +3 other tests fail
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_flip@basic-flip-vs-modeset.html
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_flip@basic-flip-vs-modeset.html

  * igt@kms_force_connector_basic@force-connector-state:
    - bat-atsm-2:         [SKIP][28] ([Intel XE#277] / [Intel XE#540]) -> [FAIL][29] ([Intel XE#716]) +2 other tests fail
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_force_connector_basic@force-connector-state.html
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_force_connector_basic@force-connector-state.html

  * igt@kms_force_connector_basic@prune-stale-modes:
    - bat-dg2-oem2:       [SKIP][30] ([i915#5274]) -> [FAIL][31] ([Intel XE#716])
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_force_connector_basic@prune-stale-modes.html
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_force_connector_basic@prune-stale-modes.html

  * igt@kms_frontbuffer_tracking@basic:
    - bat-dg2-oem2:       [FAIL][32] ([Intel XE#608]) -> [FAIL][33] ([Intel XE#716])
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
    - bat-adlp-7:         [INCOMPLETE][34] ([Intel XE#632]) -> [FAIL][35] ([Intel XE#716])
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html

  * igt@kms_hdmi_inject@inject-audio:
    - bat-atsm-2:         [SKIP][36] ([Intel XE#540]) -> [FAIL][37] ([Intel XE#716])
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_hdmi_inject@inject-audio.html
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_hdmi_inject@inject-audio.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12:
    - bat-dg2-oem2:       [FAIL][38] ([Intel XE#400]) -> [FAIL][39] ([Intel XE#716])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24:
    - bat-atsm-2:         [SKIP][40] ([Intel XE#537] / [i915#1836]) -> [FAIL][41] ([Intel XE#716]) +6 other tests fail
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24.html
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24.html

  * igt@kms_prop_blob@basic:
    - bat-atsm-2:         [SKIP][42] ([Intel XE#273]) -> [FAIL][43] ([Intel XE#716])
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_prop_blob@basic.html
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_prop_blob@basic.html

  * igt@kms_psr@cursor_plane_move:
    - bat-atsm-2:         [SKIP][44] ([i915#1072]) -> [FAIL][45] ([Intel XE#716]) +2 other tests fail
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@kms_psr@cursor_plane_move.html
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@kms_psr@cursor_plane_move.html

  * igt@kms_psr@primary_page_flip:
    - bat-dg2-oem2:       [SKIP][46] ([i915#1072]) -> [FAIL][47] ([Intel XE#716]) +2 other tests fail
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@kms_psr@primary_page_flip.html
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@kms_psr@primary_page_flip.html

  * igt@xe_compute@compute-square:
    - bat-atsm-2:         [SKIP][48] ([Intel XE#672]) -> [FAIL][49] ([Intel XE#716])
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@xe_compute@compute-square.html
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@xe_compute@compute-square.html
    - bat-dg2-oem2:       [SKIP][50] ([Intel XE#672]) -> [FAIL][51] ([Intel XE#716])
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@xe_compute@compute-square.html
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@xe_compute@compute-square.html

  * igt@xe_evict@evict-beng-small-external:
    - bat-adlp-7:         [SKIP][52] ([Intel XE#261] / [Intel XE#688]) -> [FAIL][53] ([Intel XE#716]) +15 other tests fail
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html

  * igt@xe_exec_fault_mode@many-basic:
    - bat-dg2-oem2:       [SKIP][54] ([Intel XE#288]) -> [FAIL][55] ([Intel XE#716]) +17 other tests fail
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@xe_exec_fault_mode@many-basic.html
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@xe_exec_fault_mode@many-basic.html

  * igt@xe_exec_fault_mode@twice-userptr:
    - bat-adlp-7:         [SKIP][56] ([Intel XE#288]) -> [FAIL][57] ([Intel XE#716]) +17 other tests fail
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html

  * igt@xe_exec_fault_mode@twice-userptr-invalidate-imm:
    - bat-atsm-2:         [SKIP][58] ([Intel XE#288]) -> [FAIL][59] ([Intel XE#716]) +17 other tests fail
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html

  * igt@xe_huc_copy@huc_copy:
    - bat-dg2-oem2:       [SKIP][60] ([Intel XE#255]) -> [FAIL][61] ([Intel XE#716])
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
    - bat-atsm-2:         [SKIP][62] ([Intel XE#255]) -> [FAIL][63] ([Intel XE#716])
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-atsm-2/igt@xe_huc_copy@huc_copy.html
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-atsm-2/igt@xe_huc_copy@huc_copy.html

  * igt@xe_mmap@vram:
    - bat-adlp-7:         [SKIP][64] ([Intel XE#263]) -> [FAIL][65] ([Intel XE#716])
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7503/bat-adlp-7/igt@xe_mmap@vram.html
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/bat-adlp-7/igt@xe_mmap@vram.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
  [Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
  [Intel XE#263]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/263
  [Intel XE#273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/273
  [Intel XE#274]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/274
  [Intel XE#275]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/275
  [Intel XE#277]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/277
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#400]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/400
  [Intel XE#423]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/423
  [Intel XE#524]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/524
  [Intel XE#537]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/537
  [Intel XE#539]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/539
  [Intel XE#540]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/540
  [Intel XE#608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/608
  [Intel XE#609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/609
  [Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
  [Intel XE#624]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/624
  [Intel XE#632]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/632
  [Intel XE#672]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/672
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [Intel XE#716]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/716
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1836]: https://gitlab.freedesktop.org/drm/intel/issues/1836
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#6077]: https://gitlab.freedesktop.org/drm/intel/issues/6077


Build changes
-------------

  * IGT: IGT_7503 -> IGTPW_9881

  IGTPW_9881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html
  IGT_7503: 7503
  xe-396-fc8ec3c56efa5c15b630ddc17c89100440fe03ef: fc8ec3c56efa5c15b630ddc17c89100440fe03ef

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9881/index.html

[-- Attachment #2: Type: text/html, Size: 18712 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
@ 2023-09-26 16:47   ` Tvrtko Ursulin
  2023-09-27 16:53     ` Rodrigo Vivi
  0 siblings, 1 reply; 35+ messages in thread
From: Tvrtko Ursulin @ 2023-09-26 16:47 UTC (permalink / raw)
  To: Francois Dugast, igt-dev; +Cc: Rodrigo Vivi


On 26/09/2023 14:00, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>   include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
>   tests/intel/xe_query.c    |  2 +-
>   2 files changed, 44 insertions(+), 23 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 13c693393..68cc5e051 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -337,6 +337,47 @@ struct drm_xe_query_config {
>   	__u64 info[];
>   };
>   
> +/**
> + * struct drm_xe_query_gt - describe an individual GT.
> + *
> + * To be used with drm_xe_query_gts, which will return a list with all the
> + * existing GT individual descriptions.
> + * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
> + * implementing graphics and/or media operations.
> + */
> +struct drm_xe_query_gt {
> +#define XE_QUERY_GT_TYPE_MAIN		0
> +#define XE_QUERY_GT_TYPE_REMOTE		1
> +#define XE_QUERY_GT_TYPE_MEDIA		2
> +	/** @type: GT type: Main, Remote, or Media */
> +	__u16 type;
> +	/** @gt_id: Unique ID of this GT within the PCI Device */
> +	__u16 gt_id;
> +	/** @clock_freq: A clock frequency for timestamp */
> +	__u32 clock_freq;
> +	/** @features: Reserved for future information about GT features */
> +	__u64 features;
> +	/**
> +	 * @native_mem_regions: Bit mask of instances from
> +	 * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
> +	 * direct access.
> +	 */
> +	__u64 native_mem_regions;

s/native/local/ ?

Although what was wrong with the distance query? It was more future proof 
(you can't end up with fast, slow, slower, ...) and avoids somewhat vague, 
non-technical names like "slow".

> +	/**
> +	 * @slow_mem_regions: Bit mask of instances from
> +	 * drm_xe_query_mem_usage that this GT can indirectly access, although
> +	 * they live on a different GPU/Tile.
> +	 */
> +	__u64 slow_mem_regions;
> +	/**
> +	 * @inaccessible_mem_regions: Bit mask of instances from
> +	 * drm_xe_query_mem_usage that is not accessible by this GT at all.
> +	 */
> +	__u64 inaccessible_mem_regions;

Equal to ~(native | slow), so redundant?

Btw, drm_xe_query_mem_usage is just a list of regions; there is nothing 
in it about actual memory usage?

Regards,

Tvrtko

> +	/** @reserved: Reserved */
> +	__u64 reserved[8];
> +};
> +
>   /**
>    * struct drm_xe_query_gts - describe GTs
>    *
> @@ -347,30 +388,10 @@ struct drm_xe_query_config {
>   struct drm_xe_query_gts {
>   	/** @num_gt: number of GTs returned in gts */
>   	__u32 num_gt;
> -
>   	/** @pad: MBZ */
>   	__u32 pad;
> -
> -	/**
> -	 * @gts: The GTs returned for this device
> -	 *
> -	 * TODO: convert drm_xe_query_gt to proper kernel-doc.
> -	 * TODO: Perhaps info about every mem region relative to this GT? e.g.
> -	 * bandwidth between this GT and remote region?
> -	 */
> -	struct drm_xe_query_gt {
> -#define XE_QUERY_GT_TYPE_MAIN		0
> -#define XE_QUERY_GT_TYPE_REMOTE		1
> -#define XE_QUERY_GT_TYPE_MEDIA		2
> -		__u16 type;
> -		__u16 instance;
> -		__u32 clock_freq;
> -		__u64 features;
> -		__u64 native_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
> -		__u64 slow_mem_regions;		/* bit mask of instances from drm_xe_query_mem_usage */
> -		__u64 inaccessible_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
> -		__u64 reserved[8];
> -	} gts[];
> +	/** @gts: The GT list returned for this device */
> +	struct drm_xe_query_gt gts[];
>   };
>   
>   /**
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index acf069f46..eb8d52897 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -279,7 +279,7 @@ test_query_gts(int fd)
>   
>   	for (i = 0; i < gts->num_gt; i++) {
>   		igt_info("type: %d\n", gts->gts[i].type);
> -		igt_info("instance: %d\n", gts->gts[i].instance);
> +		igt_info("gt_id: %d\n", gts->gts[i].gt_id);
>   		igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
>   		igt_info("features: 0x%016llx\n", gts->gts[i].features);
>   		igt_info("native_mem_regions: 0x%016llx\n",

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
@ 2023-09-26 16:50   ` Tvrtko Ursulin
  2023-09-27 16:55     ` Rodrigo Vivi
  2023-09-27  4:58   ` Aravind Iddamsetty
  1 sibling, 1 reply; 35+ messages in thread
From: Tvrtko Ursulin @ 2023-09-26 16:50 UTC (permalink / raw)
  To: Francois Dugast, igt-dev; +Cc: Rodrigo Vivi


On 26/09/2023 14:00, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with commit ("drm/xe/pmu: Enable PMU interface")
> 
> Cc: Francois Dugast <francois.dugast@intel.com>
> Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>   include/drm-uapi/xe_drm.h | 38 ++++++++++++++++++++++++++++++++++++++
>   1 file changed, 38 insertions(+)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 804c02270..643eb6e82 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -1053,6 +1053,44 @@ struct drm_xe_vm_madvise {
>   	__u64 reserved[2];
>   };
>   
> +/**
> + * DOC: XE PMU event config IDs
> + *
> + * Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
> + * as part of perf_event_open syscall to read a particular event.
> + *
> + * For example to open the XE_PMU_INTERRUPTS(0):
> + *
> + * .. code-block:: C
> + *	struct perf_event_attr attr;
> + *	long long count;
> + *	int cpu = 0;
> + *	int fd;
> + *
> + *	memset(&attr, 0, sizeof(struct perf_event_attr));
> + *	attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
> + *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
> + *	attr.use_clockid = 1;
> + *	attr.clockid = CLOCK_MONOTONIC;
> + *	attr.config = XE_PMU_INTERRUPTS(0);
> + *
> + *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
> + */
> +
> +/*
> + * Top bits of every counter are GT id.
> + */
> +#define __XE_PMU_GT_SHIFT (56)
> +
> +#define ___XE_PMU_OTHER(gt, x) \
> +	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
> +
> +#define XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)

AFAIR interrupts is probably the least useful counter, and I don't 
remember anyone asking much about it. So I'd say it is worth seeing if 
you could just drop it. Changes to intel_gpu_top to work with the set 
below (no per-engine counters, no frequencies) will have to be extensive 
anyway.

Regards,

Tvrtko

> +#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
> +#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
> +#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
> +#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
> +
>   #if defined(__cplusplus)
>   }
>   #endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [igt-dev] ✗ Fi.CI.IGT: failure for uAPI Alignment - take 1 (rev2)
  2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
                   ` (25 preceding siblings ...)
  2023-09-26 15:14 ` [igt-dev] ✓ CI.xeBAT: " Patchwork
@ 2023-09-27  2:20 ` Patchwork
  26 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2023-09-27  2:20 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 93777 bytes --]

== Series Details ==

Series: uAPI Alignment - take 1 (rev2)
URL   : https://patchwork.freedesktop.org/series/123916/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_13681_full -> IGTPW_9881_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_9881_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_9881_full, please notify your bug team (lgci.bug.filing@intel.com) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html

Participating hosts (9 -> 9)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_9881_full:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_pm_rpm@drm-resources-equal:
    - shard-dg2:          NOTRUN -> [FAIL][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@i915_pm_rpm@drm-resources-equal.html

  
Known issues
------------

  Here are the changes found in IGTPW_9881_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@blit-reloc-keep-cache:
    - shard-rkl:          NOTRUN -> [SKIP][2] ([i915#8411])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@api_intel_bb@blit-reloc-keep-cache.html

  * igt@debugfs_test@basic-hwmon:
    - shard-mtlp:         NOTRUN -> [SKIP][3] ([i915#9318])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@debugfs_test@basic-hwmon.html
    - shard-rkl:          NOTRUN -> [SKIP][4] ([i915#9318])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@debugfs_test@basic-hwmon.html

  * igt@device_reset@unbind-cold-reset-rebind:
    - shard-dg2:          NOTRUN -> [SKIP][5] ([i915#7701])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@device_reset@unbind-cold-reset-rebind.html

  * igt@drm_fdinfo@busy-hang@rcs0:
    - shard-mtlp:         NOTRUN -> [SKIP][6] ([i915#8414]) +12 other tests skip
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@drm_fdinfo@busy-hang@rcs0.html

  * igt@gem_caching@writes:
    - shard-mtlp:         NOTRUN -> [SKIP][7] ([i915#4873]) +1 other test skip
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_caching@writes.html

  * igt@gem_ccs@ctrl-surf-copy:
    - shard-mtlp:         NOTRUN -> [SKIP][8] ([i915#3555]) +2 other tests skip
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@gem_ccs@ctrl-surf-copy.html

  * igt@gem_ccs@suspend-resume:
    - shard-rkl:          NOTRUN -> [SKIP][9] ([i915#9323]) +1 other test skip
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@gem_ccs@suspend-resume.html

  * igt@gem_ccs@suspend-resume@xmajor-compressed-compfmt0-lmem0-lmem0:
    - shard-dg2:          [PASS][10] -> [INCOMPLETE][11] ([i915#7297])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-11/igt@gem_ccs@suspend-resume@xmajor-compressed-compfmt0-lmem0-lmem0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@gem_ccs@suspend-resume@xmajor-compressed-compfmt0-lmem0-lmem0.html

  * igt@gem_close_race@multigpu-basic-process:
    - shard-dg2:          NOTRUN -> [SKIP][12] ([i915#7697])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@gem_close_race@multigpu-basic-process.html

  * igt@gem_create@create-ext-cpu-access-big:
    - shard-mtlp:         NOTRUN -> [SKIP][13] ([i915#6335])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@gem_create@create-ext-cpu-access-big.html

  * igt@gem_ctx_isolation@preservation-s3@ccs1:
    - shard-dg2:          NOTRUN -> [FAIL][14] ([fdo#103375])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_ctx_isolation@preservation-s3@ccs1.html

  * igt@gem_ctx_persistence@engines-mixed@rcs0:
    - shard-mtlp:         NOTRUN -> [ABORT][15] ([i915#9414])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@gem_ctx_persistence@engines-mixed@rcs0.html

  * igt@gem_ctx_persistence@heartbeat-many:
    - shard-dg2:          NOTRUN -> [SKIP][16] ([i915#8555])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@gem_ctx_persistence@heartbeat-many.html
    - shard-mtlp:         NOTRUN -> [SKIP][17] ([i915#8555])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@gem_ctx_persistence@heartbeat-many.html

  * igt@gem_ctx_persistence@idempotent:
    - shard-snb:          NOTRUN -> [SKIP][18] ([fdo#109271] / [i915#1099])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-snb2/igt@gem_ctx_persistence@idempotent.html

  * igt@gem_ctx_sseu@engines:
    - shard-rkl:          NOTRUN -> [SKIP][19] ([i915#280])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@gem_ctx_sseu@engines.html

  * igt@gem_ctx_sseu@mmap-args:
    - shard-mtlp:         NOTRUN -> [SKIP][20] ([i915#280])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@gem_ctx_sseu@mmap-args.html

  * igt@gem_eio@hibernate:
    - shard-dg2:          [PASS][21] -> [ABORT][22] ([i915#7975] / [i915#8213])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-11/igt@gem_eio@hibernate.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@gem_eio@hibernate.html

  * igt@gem_exec_balancer@bonded-sync:
    - shard-dg2:          NOTRUN -> [SKIP][23] ([i915#4771]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_exec_balancer@bonded-sync.html
    - shard-mtlp:         NOTRUN -> [SKIP][24] ([i915#4771])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@gem_exec_balancer@bonded-sync.html

  * igt@gem_exec_balancer@hog:
    - shard-mtlp:         NOTRUN -> [SKIP][25] ([i915#4812])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-3/igt@gem_exec_balancer@hog.html

  * igt@gem_exec_balancer@parallel-balancer:
    - shard-rkl:          NOTRUN -> [SKIP][26] ([i915#4525]) +1 other test skip
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@gem_exec_balancer@parallel-balancer.html

  * igt@gem_exec_capture@capture-invisible@lmem0:
    - shard-dg2:          NOTRUN -> [SKIP][27] ([i915#6334]) +1 other test skip
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_exec_capture@capture-invisible@lmem0.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-rkl:          [PASS][28] -> [FAIL][29] ([i915#2846])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-6/igt@gem_exec_fair@basic-deadline.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-rrul:
    - shard-dg2:          NOTRUN -> [SKIP][30] ([i915#3539] / [i915#4852]) +2 other tests skip
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@gem_exec_fair@basic-none-rrul.html
    - shard-dg1:          NOTRUN -> [SKIP][31] ([i915#3539] / [i915#4852])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-18/igt@gem_exec_fair@basic-none-rrul.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-rkl:          NOTRUN -> [FAIL][32] ([i915#2842])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-pace:
    - shard-mtlp:         NOTRUN -> [SKIP][33] ([i915#4473] / [i915#4771]) +2 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@gem_exec_fair@basic-pace.html

  * igt@gem_exec_gttfill@multigpu-basic:
    - shard-mtlp:         NOTRUN -> [SKIP][34] ([i915#7697]) +2 other tests skip
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_exec_gttfill@multigpu-basic.html

  * igt@gem_exec_params@rsvd2-dirt:
    - shard-dg2:          NOTRUN -> [SKIP][35] ([fdo#109283] / [i915#5107])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@gem_exec_params@rsvd2-dirt.html

  * igt@gem_exec_params@secure-non-master:
    - shard-mtlp:         NOTRUN -> [SKIP][36] ([fdo#112283]) +1 other test skip
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@gem_exec_params@secure-non-master.html

  * igt@gem_exec_reloc@basic-cpu-read-noreloc:
    - shard-dg1:          NOTRUN -> [SKIP][37] ([i915#3281]) +1 other test skip
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@gem_exec_reloc@basic-cpu-read-noreloc.html

  * igt@gem_exec_reloc@basic-gtt-noreloc:
    - shard-mtlp:         NOTRUN -> [SKIP][38] ([i915#3281]) +11 other tests skip
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@gem_exec_reloc@basic-gtt-noreloc.html

  * igt@gem_exec_reloc@basic-write-read:
    - shard-rkl:          NOTRUN -> [SKIP][39] ([i915#3281]) +7 other tests skip
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@gem_exec_reloc@basic-write-read.html

  * igt@gem_exec_reloc@basic-write-wc-noreloc:
    - shard-dg2:          NOTRUN -> [SKIP][40] ([i915#3281]) +6 other tests skip
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_exec_reloc@basic-write-wc-noreloc.html

  * igt@gem_exec_schedule@noreorder-corked@ccs0:
    - shard-mtlp:         NOTRUN -> [DMESG-WARN][41] ([i915#8962] / [i915#9121])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_exec_schedule@noreorder-corked@ccs0.html

  * igt@gem_exec_schedule@noreorder-corked@rcs0:
    - shard-mtlp:         NOTRUN -> [DMESG-WARN][42] ([i915#9121])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_exec_schedule@noreorder-corked@rcs0.html

  * igt@gem_fence_thrash@bo-copy:
    - shard-dg2:          NOTRUN -> [SKIP][43] ([i915#4860]) +4 other tests skip
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@gem_fence_thrash@bo-copy.html

  * igt@gem_fenced_exec_thrash@too-many-fences:
    - shard-mtlp:         NOTRUN -> [SKIP][44] ([i915#4860]) +2 other tests skip
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_fenced_exec_thrash@too-many-fences.html
    - shard-dg1:          NOTRUN -> [SKIP][45] ([i915#4860])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-15/igt@gem_fenced_exec_thrash@too-many-fences.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][46] ([fdo#109271] / [i915#2190])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl7/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@parallel-random-verify:
    - shard-rkl:          NOTRUN -> [SKIP][47] ([i915#4613]) +1 other test skip
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@gem_lmem_swapping@parallel-random-verify.html

  * igt@gem_lmem_swapping@random-engines:
    - shard-mtlp:         NOTRUN -> [SKIP][48] ([i915#4613]) +5 other tests skip
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@gem_lmem_swapping@random-engines.html

  * igt@gem_media_fill@media-fill:
    - shard-dg2:          NOTRUN -> [SKIP][49] ([i915#8289])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_media_fill@media-fill.html

  * igt@gem_mmap@big-bo:
    - shard-dg2:          NOTRUN -> [SKIP][50] ([i915#4083]) +3 other tests skip
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_mmap@big-bo.html

  * igt@gem_mmap_gtt@basic-read-write:
    - shard-dg1:          NOTRUN -> [SKIP][51] ([i915#4077]) +2 other tests skip
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@gem_mmap_gtt@basic-read-write.html

  * igt@gem_mmap_gtt@coherency:
    - shard-rkl:          NOTRUN -> [SKIP][52] ([fdo#111656])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@gem_mmap_gtt@coherency.html

  * igt@gem_mmap_gtt@cpuset-medium-copy:
    - shard-mtlp:         NOTRUN -> [SKIP][53] ([i915#4077]) +26 other tests skip
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@gem_mmap_gtt@cpuset-medium-copy.html

  * igt@gem_mmap_wc@read-write:
    - shard-mtlp:         NOTRUN -> [SKIP][54] ([i915#4083]) +4 other tests skip
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@gem_mmap_wc@read-write.html

  * igt@gem_partial_pwrite_pread@writes-after-reads-snoop:
    - shard-dg1:          NOTRUN -> [SKIP][55] ([i915#3282])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@gem_partial_pwrite_pread@writes-after-reads-snoop.html

  * igt@gem_pread@snoop:
    - shard-dg2:          NOTRUN -> [SKIP][56] ([i915#3282]) +5 other tests skip
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@gem_pread@snoop.html

  * igt@gem_pxp@create-regular-context-2:
    - shard-dg2:          NOTRUN -> [SKIP][57] ([i915#4270]) +1 other test skip
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@gem_pxp@create-regular-context-2.html

  * igt@gem_pxp@reject-modify-context-protection-off-1:
    - shard-tglu:         NOTRUN -> [SKIP][58] ([i915#4270])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-10/igt@gem_pxp@reject-modify-context-protection-off-1.html

  * igt@gem_pxp@reject-modify-context-protection-off-2:
    - shard-dg1:          NOTRUN -> [SKIP][59] ([i915#4270])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-15/igt@gem_pxp@reject-modify-context-protection-off-2.html

  * igt@gem_pxp@verify-pxp-key-change-after-suspend-resume:
    - shard-rkl:          NOTRUN -> [SKIP][60] ([i915#4270]) +1 other test skip
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@gem_pxp@verify-pxp-key-change-after-suspend-resume.html

  * igt@gem_pxp@verify-pxp-stale-buf-optout-execution:
    - shard-mtlp:         NOTRUN -> [SKIP][61] ([i915#4270])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_pxp@verify-pxp-stale-buf-optout-execution.html

  * igt@gem_readwrite@read-bad-handle:
    - shard-mtlp:         NOTRUN -> [SKIP][62] ([i915#3282]) +6 other tests skip
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@gem_readwrite@read-bad-handle.html

  * igt@gem_render_copy@yf-tiled-ccs-to-yf-tiled:
    - shard-mtlp:         NOTRUN -> [SKIP][63] ([i915#8428]) +7 other tests skip
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@gem_render_copy@yf-tiled-ccs-to-yf-tiled.html

  * igt@gem_render_tiled_blits@basic:
    - shard-dg2:          NOTRUN -> [SKIP][64] ([i915#4079]) +1 other test skip
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@gem_render_tiled_blits@basic.html
    - shard-dg1:          NOTRUN -> [SKIP][65] ([i915#4079])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@gem_render_tiled_blits@basic.html

  * igt@gem_set_tiling_vs_blt@tiled-to-tiled:
    - shard-mtlp:         NOTRUN -> [SKIP][66] ([i915#4079])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@gem_set_tiling_vs_blt@tiled-to-tiled.html

  * igt@gem_set_tiling_vs_pwrite:
    - shard-rkl:          NOTRUN -> [SKIP][67] ([i915#3282]) +3 other tests skip
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@gem_set_tiling_vs_pwrite.html

  * igt@gem_softpin@evict-snoop-interruptible:
    - shard-dg2:          NOTRUN -> [SKIP][68] ([i915#4885])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@gem_softpin@evict-snoop-interruptible.html
    - shard-rkl:          NOTRUN -> [SKIP][69] ([fdo#109312])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@gem_softpin@evict-snoop-interruptible.html

  * igt@gem_tiled_blits@basic:
    - shard-dg2:          NOTRUN -> [SKIP][70] ([i915#4077]) +14 other tests skip
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_tiled_blits@basic.html

  * igt@gem_unfence_active_buffers:
    - shard-mtlp:         NOTRUN -> [SKIP][71] ([i915#4879])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@gem_unfence_active_buffers.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-dg1:          NOTRUN -> [SKIP][72] ([i915#3297])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-16/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@dmabuf-unsync:
    - shard-dg2:          NOTRUN -> [SKIP][73] ([i915#3297]) +4 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@gem_userptr_blits@dmabuf-unsync.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy:
    - shard-dg2:          NOTRUN -> [SKIP][74] ([i915#3297] / [i915#4880])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@gem_userptr_blits@map-fixed-invalidate-busy.html

  * igt@gem_userptr_blits@mmap-offset-banned@gtt:
    - shard-mtlp:         NOTRUN -> [SKIP][75] ([i915#3297]) +6 other tests skip
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@gem_userptr_blits@mmap-offset-banned@gtt.html

  * igt@gem_userptr_blits@unsync-overlap:
    - shard-rkl:          NOTRUN -> [SKIP][76] ([i915#3297])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@gem_userptr_blits@unsync-overlap.html

  * igt@gen7_exec_parse@chained-batch:
    - shard-dg2:          NOTRUN -> [SKIP][77] ([fdo#109289]) +4 other tests skip
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-10/igt@gen7_exec_parse@chained-batch.html

  * igt@gen9_exec_parse@bb-start-out:
    - shard-rkl:          NOTRUN -> [SKIP][78] ([i915#2527]) +1 other test skip
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@gen9_exec_parse@bb-start-out.html

  * igt@gen9_exec_parse@shadow-peek:
    - shard-mtlp:         NOTRUN -> [SKIP][79] ([i915#2856]) +3 other tests skip
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@gen9_exec_parse@shadow-peek.html

  * igt@gen9_exec_parse@valid-registers:
    - shard-dg2:          NOTRUN -> [SKIP][80] ([i915#2856])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gen9_exec_parse@valid-registers.html

  * igt@i915_fb_tiling:
    - shard-mtlp:         NOTRUN -> [SKIP][81] ([i915#4881])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@i915_fb_tiling.html

  * igt@i915_hangman@engine-engine-error@vcs0:
    - shard-mtlp:         NOTRUN -> [FAIL][82] ([i915#7069])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@i915_hangman@engine-engine-error@vcs0.html

  * igt@i915_hwmon@hwmon-write:
    - shard-mtlp:         NOTRUN -> [SKIP][83] ([i915#7707])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@i915_hwmon@hwmon-write.html

  * igt@i915_module_load@resize-bar:
    - shard-mtlp:         NOTRUN -> [SKIP][84] ([i915#6412])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@i915_module_load@resize-bar.html

  * igt@i915_pipe_stress@stress-xrgb8888-ytiled:
    - shard-mtlp:         NOTRUN -> [SKIP][85] ([i915#8436])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@i915_pipe_stress@stress-xrgb8888-ytiled.html

  * igt@i915_pm_freq_api@freq-reset-multiple:
    - shard-rkl:          NOTRUN -> [SKIP][86] ([i915#8399])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@i915_pm_freq_api@freq-reset-multiple.html

  * igt@i915_pm_freq_mult@media-freq@gt1:
    - shard-mtlp:         NOTRUN -> [SKIP][87] ([i915#6590]) +1 other test skip
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@i915_pm_freq_mult@media-freq@gt1.html

  * igt@i915_pm_rpm@gem-mmap-type@gtt-smem0:
    - shard-mtlp:         NOTRUN -> [SKIP][88] ([i915#8431])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@i915_pm_rpm@gem-mmap-type@gtt-smem0.html

  * igt@i915_pm_rpm@modeset-lpsp-stress:
    - shard-dg2:          NOTRUN -> [SKIP][89] ([i915#1397])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@i915_pm_rpm@modeset-lpsp-stress.html
    - shard-dg1:          NOTRUN -> [SKIP][90] ([i915#1397])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@i915_pm_rpm@modeset-lpsp-stress.html

  * igt@i915_pm_rpm@modeset-lpsp-stress-no-wait:
    - shard-dg2:          [PASS][91] -> [SKIP][92] ([i915#1397]) +1 other test skip
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-10/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html
    - shard-dg1:          [PASS][93] -> [SKIP][94] ([i915#1397])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-19/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-14/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html

  * igt@i915_pm_rpm@modeset-non-lpsp:
    - shard-mtlp:         NOTRUN -> [SKIP][95] ([i915#1397])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@i915_pm_rpm@modeset-non-lpsp.html

  * igt@i915_pm_rpm@modeset-non-lpsp-stress-no-wait:
    - shard-rkl:          [PASS][96] -> [SKIP][97] ([i915#1397]) +1 other test skip
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-1/igt@i915_pm_rpm@modeset-non-lpsp-stress-no-wait.html
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@i915_pm_rpm@modeset-non-lpsp-stress-no-wait.html

  * igt@i915_pm_rpm@pc8-residency:
    - shard-dg2:          NOTRUN -> [SKIP][98] ([fdo#109506])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@i915_pm_rpm@pc8-residency.html

  * igt@i915_pm_rpm@system-suspend:
    - shard-mtlp:         [PASS][99] -> [ABORT][100] ([i915#9262])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-mtlp-4/igt@i915_pm_rpm@system-suspend.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-6/igt@i915_pm_rpm@system-suspend.html

  * igt@i915_pm_rps@basic-api:
    - shard-mtlp:         NOTRUN -> [SKIP][101] ([i915#6621])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-6/igt@i915_pm_rps@basic-api.html

  * igt@i915_pm_rps@engine-order:
    - shard-apl:          [PASS][102] -> [FAIL][103] ([i915#6537])
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-apl4/igt@i915_pm_rps@engine-order.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl3/igt@i915_pm_rps@engine-order.html

  * igt@i915_pm_rps@thresholds@gt1:
    - shard-mtlp:         NOTRUN -> [SKIP][104] ([i915#8925]) +5 other tests skip
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@i915_pm_rps@thresholds@gt1.html

  * igt@i915_query@query-topology-known-pci-ids:
    - shard-dg1:          NOTRUN -> [SKIP][105] ([fdo#109303])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-16/igt@i915_query@query-topology-known-pci-ids.html
    - shard-dg2:          NOTRUN -> [SKIP][106] ([fdo#109303])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@i915_query@query-topology-known-pci-ids.html

  * igt@i915_query@query-topology-unsupported:
    - shard-dg2:          NOTRUN -> [SKIP][107] ([fdo#109302])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-10/igt@i915_query@query-topology-unsupported.html
    - shard-dg1:          NOTRUN -> [SKIP][108] ([fdo#109302])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-18/igt@i915_query@query-topology-unsupported.html
    - shard-mtlp:         NOTRUN -> [SKIP][109] ([fdo#109302])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@i915_query@query-topology-unsupported.html

  * igt@i915_suspend@basic-s3-without-i915:
    - shard-mtlp:         NOTRUN -> [SKIP][110] ([i915#6645])
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@i915_suspend@basic-s3-without-i915.html

  * igt@i915_suspend@sysfs-reader:
    - shard-snb:          NOTRUN -> [DMESG-WARN][111] ([i915#8841])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-snb1/igt@i915_suspend@sysfs-reader.html

  * igt@kms_addfb_basic@addfb25-framebuffer-vs-set-tiling:
    - shard-mtlp:         NOTRUN -> [SKIP][112] ([i915#4212]) +2 other tests skip
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-3/igt@kms_addfb_basic@addfb25-framebuffer-vs-set-tiling.html

  * igt@kms_addfb_basic@basic-x-tiled-legacy:
    - shard-dg2:          NOTRUN -> [SKIP][113] ([i915#4212]) +1 other test skip
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@kms_addfb_basic@basic-x-tiled-legacy.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-2-4-rc_ccs-cc:
    - shard-dg2:          NOTRUN -> [SKIP][114] ([i915#8709]) +11 other tests skip
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-2-4-rc_ccs-cc.html

  * igt@kms_async_flips@crc@pipe-b-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [FAIL][115] ([i915#8247]) +3 other tests fail
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_async_flips@crc@pipe-b-hdmi-a-3.html

  * igt@kms_async_flips@crc@pipe-d-edp-1:
    - shard-mtlp:         NOTRUN -> [FAIL][116] ([i915#8247]) +3 other tests fail
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_async_flips@crc@pipe-d-edp-1.html

  * igt@kms_async_flips@invalid-async-flip:
    - shard-mtlp:         NOTRUN -> [SKIP][117] ([i915#6228])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-3/igt@kms_async_flips@invalid-async-flip.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels:
    - shard-dg2:          NOTRUN -> [SKIP][118] ([i915#1769] / [i915#3555])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html
    - shard-dg1:          NOTRUN -> [SKIP][119] ([i915#1769] / [i915#3555])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-15/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html

  * igt@kms_big_fb@4-tiled-32bpp-rotate-270:
    - shard-dg2:          NOTRUN -> [SKIP][120] ([fdo#111614]) +3 other tests skip
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@kms_big_fb@4-tiled-32bpp-rotate-270.html
    - shard-dg1:          NOTRUN -> [SKIP][121] ([i915#4538] / [i915#5286])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_big_fb@4-tiled-32bpp-rotate-270.html

  * igt@kms_big_fb@4-tiled-8bpp-rotate-90:
    - shard-rkl:          NOTRUN -> [SKIP][122] ([i915#5286])
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@linear-32bpp-rotate-270:
    - shard-dg1:          NOTRUN -> [SKIP][123] ([i915#3638])
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-16/igt@kms_big_fb@linear-32bpp-rotate-270.html

  * igt@kms_big_fb@linear-64bpp-rotate-270:
    - shard-mtlp:         NOTRUN -> [SKIP][124] ([fdo#111614]) +6 other tests skip
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@kms_big_fb@linear-64bpp-rotate-270.html

  * igt@kms_big_fb@linear-64bpp-rotate-90:
    - shard-rkl:          NOTRUN -> [SKIP][125] ([fdo#111614] / [i915#3638]) +3 other tests skip
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_big_fb@linear-64bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-tglu:         [PASS][126] -> [FAIL][127] ([i915#3743]) +2 other tests fail
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-tglu-7/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-8/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - shard-dg2:          NOTRUN -> [SKIP][128] ([i915#5190]) +11 other tests skip
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-90:
    - shard-dg2:          NOTRUN -> [SKIP][129] ([i915#4538] / [i915#5190]) +7 other tests skip
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_big_fb@yf-tiled-32bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-270:
    - shard-mtlp:         NOTRUN -> [SKIP][130] ([fdo#111615]) +14 other tests skip
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
    - shard-mtlp:         NOTRUN -> [SKIP][131] ([i915#6187]) +1 other test skip
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
    - shard-rkl:          NOTRUN -> [SKIP][132] ([fdo#111615])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180:
    - shard-rkl:          NOTRUN -> [SKIP][133] ([fdo#110723])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-dg1:          NOTRUN -> [SKIP][134] ([i915#4538])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-18/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_joiner@2x-modeset:
    - shard-rkl:          NOTRUN -> [SKIP][135] ([i915#2705])
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_big_joiner@2x-modeset.html

  * igt@kms_big_joiner@basic:
    - shard-dg1:          NOTRUN -> [SKIP][136] ([i915#2705])
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@kms_big_joiner@basic.html
    - shard-mtlp:         NOTRUN -> [SKIP][137] ([i915#2705])
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_big_joiner@basic.html

  * igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_ccs:
    - shard-dg2:          NOTRUN -> [SKIP][138] ([i915#3689] / [i915#5354]) +17 other tests skip
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_ccs.html

  * igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_gen12_rc_ccs_cc:
    - shard-dg2:          NOTRUN -> [SKIP][139] ([i915#3689] / [i915#3886] / [i915#5354]) +8 other tests skip
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs:
    - shard-rkl:          NOTRUN -> [SKIP][140] ([i915#3886] / [i915#5354] / [i915#6095]) +1 other test skip
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-b-bad-aux-stride-yf_tiled_ccs:
    - shard-rkl:          NOTRUN -> [SKIP][141] ([i915#3734] / [i915#5354] / [i915#6095]) +4 other tests skip
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@kms_ccs@pipe-b-bad-aux-stride-yf_tiled_ccs.html

  * igt@kms_ccs@pipe-b-bad-pixel-format-4_tiled_mtl_rc_ccs_cc:
    - shard-rkl:          NOTRUN -> [SKIP][142] ([i915#5354] / [i915#6095]) +8 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@kms_ccs@pipe-b-bad-pixel-format-4_tiled_mtl_rc_ccs_cc.html

  * igt@kms_ccs@pipe-b-bad-pixel-format-y_tiled_gen12_mc_ccs:
    - shard-mtlp:         NOTRUN -> [SKIP][143] ([i915#3886] / [i915#5354] / [i915#6095]) +20 other tests skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-6/igt@kms_ccs@pipe-b-bad-pixel-format-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-c-bad-rotation-90-4_tiled_mtl_mc_ccs:
    - shard-dg2:          NOTRUN -> [SKIP][144] ([i915#5354]) +44 other tests skip
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@kms_ccs@pipe-c-bad-rotation-90-4_tiled_mtl_mc_ccs.html

  * igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_ccs:
    - shard-tglu:         NOTRUN -> [SKIP][145] ([i915#3689] / [i915#5354] / [i915#6095])
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-4/igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_ccs.html

  * igt@kms_ccs@pipe-c-missing-ccs-buffer-4_tiled_mtl_mc_ccs:
    - shard-dg1:          NOTRUN -> [SKIP][146] ([i915#5354] / [i915#6095]) +1 other test skip
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@kms_ccs@pipe-c-missing-ccs-buffer-4_tiled_mtl_mc_ccs.html

  * igt@kms_ccs@pipe-c-missing-ccs-buffer-4_tiled_mtl_rc_ccs_cc:
    - shard-rkl:          NOTRUN -> [SKIP][147] ([i915#5354]) +19 other tests skip
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@kms_ccs@pipe-c-missing-ccs-buffer-4_tiled_mtl_rc_ccs_cc.html

  * igt@kms_ccs@pipe-d-bad-pixel-format-4_tiled_dg2_rc_ccs_cc:
    - shard-mtlp:         NOTRUN -> [SKIP][148] ([i915#5354] / [i915#6095]) +58 other tests skip
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_ccs@pipe-d-bad-pixel-format-4_tiled_dg2_rc_ccs_cc.html

  * igt@kms_ccs@pipe-d-bad-rotation-90-4_tiled_dg2_mc_ccs:
    - shard-dg1:          NOTRUN -> [SKIP][149] ([i915#3689] / [i915#5354] / [i915#6095]) +4 other tests skip
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-16/igt@kms_ccs@pipe-d-bad-rotation-90-4_tiled_dg2_mc_ccs.html

  * igt@kms_cdclk@mode-transition:
    - shard-rkl:          NOTRUN -> [SKIP][150] ([i915#3742])
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_cdclk@mode-transition.html

  * igt@kms_cdclk@mode-transition-all-outputs:
    - shard-mtlp:         NOTRUN -> [SKIP][151] ([i915#7213] / [i915#9010])
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@kms_cdclk@mode-transition-all-outputs.html

  * igt@kms_cdclk@mode-transition@pipe-d-dp-4:
    - shard-dg2:          NOTRUN -> [SKIP][152] ([i915#4087] / [i915#7213]) +3 other tests skip
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_cdclk@mode-transition@pipe-d-dp-4.html

  * igt@kms_chamelium_color@ctm-blue-to-red:
    - shard-tglu:         NOTRUN -> [SKIP][153] ([fdo#111827])
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-5/igt@kms_chamelium_color@ctm-blue-to-red.html

  * igt@kms_chamelium_color@ctm-limited-range:
    - shard-mtlp:         NOTRUN -> [SKIP][154] ([fdo#111827]) +3 other tests skip
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_chamelium_color@ctm-limited-range.html
    - shard-dg2:          NOTRUN -> [SKIP][155] ([fdo#111827])
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_chamelium_color@ctm-limited-range.html

  * igt@kms_chamelium_edid@hdmi-edid-read:
    - shard-rkl:          NOTRUN -> [SKIP][156] ([i915#7828]) +4 other tests skip
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_chamelium_edid@hdmi-edid-read.html

  * igt@kms_chamelium_frames@hdmi-crc-fast:
    - shard-dg2:          NOTRUN -> [SKIP][157] ([i915#7828]) +7 other tests skip
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_chamelium_frames@hdmi-crc-fast.html

  * igt@kms_chamelium_frames@hdmi-frame-dump:
    - shard-dg1:          NOTRUN -> [SKIP][158] ([i915#7828]) +1 other test skip
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-16/igt@kms_chamelium_frames@hdmi-frame-dump.html

  * igt@kms_chamelium_hpd@hdmi-hpd:
    - shard-mtlp:         NOTRUN -> [SKIP][159] ([i915#7828]) +10 other tests skip
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_chamelium_hpd@hdmi-hpd.html

  * igt@kms_color@deep-color:
    - shard-dg2:          NOTRUN -> [SKIP][160] ([i915#3555]) +4 other tests skip
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@kms_color@deep-color.html
    - shard-rkl:          NOTRUN -> [SKIP][161] ([i915#3555]) +4 other tests skip
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_color@deep-color.html

  * igt@kms_color@deep-color@pipe-b-edp-1-degamma:
    - shard-mtlp:         NOTRUN -> [FAIL][162] ([i915#6892]) +3 other tests fail
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@kms_color@deep-color@pipe-b-edp-1-degamma.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-dg2:          NOTRUN -> [SKIP][163] ([i915#7118])
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@atomic@pipe-a-dp-4:
    - shard-dg2:          NOTRUN -> [TIMEOUT][164] ([i915#7173]) +1 other test timeout
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_content_protection@atomic@pipe-a-dp-4.html

  * igt@kms_content_protection@content_type_change:
    - shard-mtlp:         NOTRUN -> [SKIP][165] ([i915#6944])
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@kms_content_protection@content_type_change.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-dg2:          NOTRUN -> [SKIP][166] ([i915#3299])
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@mei_interface:
    - shard-mtlp:         NOTRUN -> [SKIP][167] ([i915#8063])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@kms_content_protection@mei_interface.html

  * igt@kms_content_protection@uevent:
    - shard-rkl:          NOTRUN -> [SKIP][168] ([i915#7118])
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@cursor-offscreen-512x170:
    - shard-dg1:          NOTRUN -> [SKIP][169] ([i915#3359])
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_cursor_crc@cursor-offscreen-512x170.html

  * igt@kms_cursor_crc@cursor-onscreen-32x10:
    - shard-mtlp:         NOTRUN -> [SKIP][170] ([i915#3555] / [i915#8814]) +1 other test skip
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@kms_cursor_crc@cursor-onscreen-32x10.html

  * igt@kms_cursor_crc@cursor-onscreen-512x170:
    - shard-dg2:          NOTRUN -> [SKIP][171] ([i915#3359]) +1 other test skip
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_cursor_crc@cursor-onscreen-512x170.html
    - shard-rkl:          NOTRUN -> [SKIP][172] ([fdo#109279] / [i915#3359])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_cursor_crc@cursor-onscreen-512x170.html

  * igt@kms_cursor_crc@cursor-random-max-size:
    - shard-dg1:          NOTRUN -> [SKIP][173] ([i915#3555])
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_cursor_crc@cursor-random-max-size.html

  * igt@kms_cursor_crc@cursor-sliding-512x512:
    - shard-mtlp:         NOTRUN -> [SKIP][174] ([i915#3359])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_cursor_crc@cursor-sliding-512x512.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy:
    - shard-rkl:          NOTRUN -> [SKIP][175] ([fdo#111825]) +3 other tests skip
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html

  * igt@kms_cursor_legacy@2x-long-cursor-vs-flip-atomic:
    - shard-mtlp:         NOTRUN -> [SKIP][176] ([i915#3546]) +10 other tests skip
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-atomic.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size:
    - shard-rkl:          NOTRUN -> [SKIP][177] ([i915#4103])
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size.html
    - shard-mtlp:         NOTRUN -> [SKIP][178] ([i915#4213]) +2 other tests skip
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-toggle:
    - shard-dg2:          NOTRUN -> [SKIP][179] ([fdo#109274] / [fdo#111767] / [i915#5354])
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@kms_cursor_legacy@cursorb-vs-flipb-toggle.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size:
    - shard-dg2:          NOTRUN -> [SKIP][180] ([fdo#109274] / [i915#5354]) +4 other tests skip
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-10/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-glk:          [PASS][181] -> [FAIL][182] ([i915#2346])
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-glk1/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-glk3/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-dg2:          NOTRUN -> [SKIP][183] ([i915#4103] / [i915#4213])
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_display_modes@extended-mode-basic:
    - shard-mtlp:         NOTRUN -> [SKIP][184] ([i915#3555] / [i915#8827])
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_display_modes@extended-mode-basic.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-rkl:          NOTRUN -> [SKIP][185] ([i915#3555] / [i915#3840])
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_dsc@dsc-with-output-formats:
    - shard-dg2:          NOTRUN -> [SKIP][186] ([i915#3555] / [i915#3840]) +1 other test skip
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@kms_dsc@dsc-with-output-formats.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-dg2:          NOTRUN -> [SKIP][187] ([i915#3469])
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_flip@2x-absolute-wf_vblank:
    - shard-dg2:          NOTRUN -> [SKIP][188] ([fdo#109274]) +8 other tests skip
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@kms_flip@2x-absolute-wf_vblank.html

  * igt@kms_flip@2x-absolute-wf_vblank-interruptible:
    - shard-mtlp:         NOTRUN -> [SKIP][189] ([i915#3637]) +2 other tests skip
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_flip@2x-absolute-wf_vblank-interruptible.html

  * igt@kms_flip@2x-flip-vs-blocking-wf-vblank:
    - shard-rkl:          NOTRUN -> [SKIP][190] ([fdo#111767] / [fdo#111825])
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_flip@2x-flip-vs-blocking-wf-vblank.html
    - shard-mtlp:         NOTRUN -> [SKIP][191] ([fdo#111767] / [i915#3637])
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@kms_flip@2x-flip-vs-blocking-wf-vblank.html

  * igt@kms_flip@flip-vs-fences:
    - shard-mtlp:         NOTRUN -> [SKIP][192] ([i915#8381])
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-6/igt@kms_flip@flip-vs-fences.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][193] ([i915#2672]) +2 other tests skip
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-valid-mode:
    - shard-dg1:          NOTRUN -> [SKIP][194] ([i915#2587] / [i915#2672])
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][195] ([i915#8810])
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-rkl:          NOTRUN -> [SKIP][196] ([i915#2672])
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling@pipe-a-valid-mode:
    - shard-dg2:          NOTRUN -> [SKIP][197] ([i915#2672]) +2 other tests skip
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][198] ([i915#2672] / [i915#3555])
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling@pipe-a-default-mode.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-wc:
    - shard-dg1:          NOTRUN -> [SKIP][199] ([i915#8708]) +2 other tests skip
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-pwrite:
    - shard-dg2:          [PASS][200] -> [FAIL][201] ([i915#6880])
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-pwrite.html
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt:
    - shard-mtlp:         NOTRUN -> [SKIP][202] ([i915#1825]) +43 other tests skip
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-msflip-blt:
    - shard-tglu:         NOTRUN -> [SKIP][203] ([fdo#109280])
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-y:
    - shard-mtlp:         NOTRUN -> [SKIP][204] ([i915#5460])
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
    - shard-dg2:          NOTRUN -> [SKIP][205] ([i915#5460])
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-tiling-y.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-onoff:
    - shard-dg1:          NOTRUN -> [SKIP][206] ([fdo#111825]) +5 other tests skip
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-15/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-blt:
    - shard-rkl:          NOTRUN -> [SKIP][207] ([fdo#111825] / [i915#1825]) +25 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-farfromfence-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][208] ([i915#8708]) +11 other tests skip
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@kms_frontbuffer_tracking@fbcpsr-farfromfence-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-mmap-gtt:
    - shard-mtlp:         NOTRUN -> [SKIP][209] ([i915#8708]) +13 other tests skip
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-blt:
    - shard-tglu:         NOTRUN -> [SKIP][210] ([fdo#110189])
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-3/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-rte:
    - shard-rkl:          NOTRUN -> [SKIP][211] ([i915#3023]) +12 other tests skip
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-1p-rte.html

  * igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary:
    - shard-dg2:          NOTRUN -> [SKIP][212] ([i915#3458]) +19 other tests skip
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary.html
    - shard-dg1:          NOTRUN -> [SKIP][213] ([i915#3458]) +4 other tests skip
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-dg2:          NOTRUN -> [SKIP][214] ([i915#3555] / [i915#8228])
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_hdr@invalid-hdr:
    - shard-rkl:          NOTRUN -> [SKIP][215] ([i915#3555] / [i915#8228])
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_hdr@invalid-hdr.html

  * igt@kms_hdr@static-toggle-dpms:
    - shard-mtlp:         NOTRUN -> [SKIP][216] ([i915#3555] / [i915#8228])
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-3/igt@kms_hdr@static-toggle-dpms.html

  * igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes:
    - shard-rkl:          NOTRUN -> [SKIP][217] ([fdo#109289]) +4 other tests skip
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a-planes:
    - shard-mtlp:         NOTRUN -> [ABORT][218] ([i915#9262]) +6 other tests abort
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a-planes.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes:
    - shard-mtlp:         NOTRUN -> [DMESG-WARN][219] ([i915#9262]) +3 other tests dmesg-warn
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html

  * igt@kms_plane_lowres@tiling-x@pipe-c-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][220] ([i915#3582]) +7 other tests skip
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@kms_plane_lowres@tiling-x@pipe-c-edp-1.html

  * igt@kms_plane_multiple@tiling-y:
    - shard-mtlp:         NOTRUN -> [SKIP][221] ([i915#8806])
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@kms_plane_multiple@tiling-y.html

  * igt@kms_plane_scaling@intel-max-src-size:
    - shard-mtlp:         NOTRUN -> [SKIP][222] ([i915#6953])
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_plane_scaling@intel-max-src-size.html

  * igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-1:
    - shard-dg1:          NOTRUN -> [FAIL][223] ([i915#8292])
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-1.html

  * igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-b-hdmi-a-1:
    - shard-dg2:          NOTRUN -> [SKIP][224] ([i915#5176]) +11 other tests skip
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-10/igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-b-hdmi-a-1.html

  * igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-5@pipe-c-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][225] ([i915#5176]) +3 other tests skip
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-3/igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-5@pipe-c-edp-1.html

  * igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-5@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][226] ([i915#5176]) +3 other tests skip
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-5@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-75@pipe-a-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][227] ([i915#5176]) +19 other tests skip
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-14/igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-75@pipe-a-hdmi-a-4.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-d-dp-4:
    - shard-dg2:          NOTRUN -> [SKIP][228] ([i915#5235]) +15 other tests skip
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-d-dp-4.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-b-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][229] ([i915#5235]) +27 other tests skip
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-b-edp-1.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][230] ([i915#5235]) +5 other tests skip
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][231] ([i915#5235]) +15 other tests skip
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-14/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d-hdmi-a-4.html

  * igt@kms_prime@basic-crc-hybrid:
    - shard-mtlp:         NOTRUN -> [SKIP][232] ([i915#6524])
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-6/igt@kms_prime@basic-crc-hybrid.html

  * igt@kms_prime@basic-crc-vgem@second-to-first:
    - shard-mtlp:         [PASS][233] -> [DMESG-WARN][234] ([i915#2017] / [i915#9157])
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-mtlp-8/igt@kms_prime@basic-crc-vgem@second-to-first.html
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-6/igt@kms_prime@basic-crc-vgem@second-to-first.html

  * igt@kms_prime@basic-modeset-hybrid:
    - shard-tglu:         NOTRUN -> [SKIP][235] ([i915#6524])
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-3/igt@kms_prime@basic-modeset-hybrid.html

  * igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-fully-sf:
    - shard-rkl:          NOTRUN -> [SKIP][236] ([i915#658]) +1 other test skip
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-rkl:          NOTRUN -> [SKIP][237] ([fdo#111068] / [i915#658])
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-dg2:          NOTRUN -> [SKIP][238] ([i915#658]) +2 other tests skip
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@no_drrs:
    - shard-dg1:          NOTRUN -> [SKIP][239] ([i915#1072])
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@kms_psr@no_drrs.html

  * igt@kms_psr@primary_render:
    - shard-rkl:          NOTRUN -> [SKIP][240] ([i915#1072]) +6 other tests skip
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_psr@primary_render.html

  * igt@kms_psr@psr2_sprite_mmap_gtt:
    - shard-dg2:          NOTRUN -> [SKIP][241] ([i915#1072]) +5 other tests skip
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_psr@psr2_sprite_mmap_gtt.html

  * igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
    - shard-dg2:          NOTRUN -> [SKIP][242] ([i915#5461] / [i915#658])
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html

  * igt@kms_rotation_crc@bad-pixel-format:
    - shard-snb:          NOTRUN -> [SKIP][243] ([fdo#109271]) +106 other tests skip
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-snb5/igt@kms_rotation_crc@bad-pixel-format.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
    - shard-dg1:          NOTRUN -> [SKIP][244] ([i915#5289])
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@primary-rotation-270:
    - shard-mtlp:         NOTRUN -> [SKIP][245] ([i915#4235]) +4 other tests skip
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@kms_rotation_crc@primary-rotation-270.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-270:
    - shard-dg2:          NOTRUN -> [SKIP][246] ([i915#4235] / [i915#5190])
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_rotation_crc@primary-y-tiled-reflect-x-270.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - shard-mtlp:         NOTRUN -> [SKIP][247] ([i915#3555] / [i915#8809]) +1 other test skip
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@kms_setmode@invalid-clone-single-crtc-stealing:
    - shard-rkl:          NOTRUN -> [SKIP][248] ([i915#3555] / [i915#4098])
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_setmode@invalid-clone-single-crtc-stealing.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-rkl:          NOTRUN -> [SKIP][249] ([i915#8623])
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-1/igt@kms_tiled_display@basic-test-pattern.html

  * igt@kms_universal_plane@cursor-fb-leak-pipe-b:
    - shard-rkl:          NOTRUN -> [FAIL][250] ([i915#9196])
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_universal_plane@cursor-fb-leak-pipe-b.html

  * igt@kms_universal_plane@cursor-fb-leak-pipe-c:
    - shard-tglu:         [PASS][251] -> [FAIL][252] ([i915#9196])
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-tglu-6/igt@kms_universal_plane@cursor-fb-leak-pipe-c.html
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-10/igt@kms_universal_plane@cursor-fb-leak-pipe-c.html

  * igt@kms_universal_plane@universal-plane-pipe-d-functional:
    - shard-rkl:          NOTRUN -> [SKIP][253] ([i915#4070] / [i915#533] / [i915#6768]) +2 other tests skip
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@kms_universal_plane@universal-plane-pipe-d-functional.html

  * igt@kms_vblank@pipe-b-ts-continuation-dpms-suspend:
    - shard-apl:          [PASS][254] -> [INCOMPLETE][255] ([i915#180] / [i915#9392])
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-apl1/igt@kms_vblank@pipe-b-ts-continuation-dpms-suspend.html
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl3/igt@kms_vblank@pipe-b-ts-continuation-dpms-suspend.html

  * igt@kms_vblank@pipe-c-ts-continuation-modeset-rpm:
    - shard-rkl:          NOTRUN -> [SKIP][256] ([i915#4070] / [i915#6768]) +3 other tests skip
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_vblank@pipe-c-ts-continuation-modeset-rpm.html

  * igt@kms_writeback@writeback-fb-id:
    - shard-mtlp:         NOTRUN -> [SKIP][257] ([i915#2437]) +1 other test skip
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-3/igt@kms_writeback@writeback-fb-id.html

  * igt@perf@enable-disable@0-rcs0:
    - shard-dg2:          [PASS][258] -> [FAIL][259] ([i915#8724])
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-1/igt@perf@enable-disable@0-rcs0.html
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@perf@enable-disable@0-rcs0.html

  * igt@perf@gen8-unprivileged-single-ctx-counters:
    - shard-mtlp:         NOTRUN -> [SKIP][260] ([fdo#109289]) +8 other tests skip
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@perf@gen8-unprivileged-single-ctx-counters.html

  * igt@perf@global-sseu-config:
    - shard-mtlp:         NOTRUN -> [SKIP][261] ([i915#7387])
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@perf@global-sseu-config.html
    - shard-dg2:          NOTRUN -> [SKIP][262] ([i915#7387])
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-10/igt@perf@global-sseu-config.html

  * igt@perf@mi-rpc:
    - shard-dg2:          NOTRUN -> [SKIP][263] ([i915#2434])
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@perf@mi-rpc.html

  * igt@perf@unprivileged-single-ctx-counters:
    - shard-tglu:         NOTRUN -> [SKIP][264] ([fdo#109289])
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-10/igt@perf@unprivileged-single-ctx-counters.html

  * igt@perf_pmu@busy-double-start@vecs1:
    - shard-dg2:          NOTRUN -> [FAIL][265] ([i915#4349]) +3 other tests fail
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-2/igt@perf_pmu@busy-double-start@vecs1.html

  * igt@perf_pmu@cpu-hotplug:
    - shard-mtlp:         NOTRUN -> [SKIP][266] ([i915#8850])
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@perf_pmu@cpu-hotplug.html

  * igt@perf_pmu@event-wait@rcs0:
    - shard-rkl:          NOTRUN -> [SKIP][267] ([fdo#112283]) +1 other test skip
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@perf_pmu@event-wait@rcs0.html
    - shard-mtlp:         NOTRUN -> [SKIP][268] ([i915#8807])
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@perf_pmu@event-wait@rcs0.html

  * igt@perf_pmu@frequency@gt0:
    - shard-dg2:          [PASS][269] -> [FAIL][270] ([i915#6806])
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-2/igt@perf_pmu@frequency@gt0.html
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-6/igt@perf_pmu@frequency@gt0.html
    - shard-dg1:          [PASS][271] -> [FAIL][272] ([i915#6806])
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-17/igt@perf_pmu@frequency@gt0.html
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@perf_pmu@frequency@gt0.html

  * igt@perf_pmu@module-unload:
    - shard-dg2:          NOTRUN -> [FAIL][273] ([i915#5793])
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@perf_pmu@module-unload.html

  * igt@prime_vgem@basic-fence-read:
    - shard-mtlp:         NOTRUN -> [SKIP][274] ([i915#3708]) +2 other tests skip
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@prime_vgem@basic-fence-read.html

  * igt@prime_vgem@basic-write:
    - shard-rkl:          NOTRUN -> [SKIP][275] ([fdo#109295] / [i915#3291] / [i915#3708])
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@prime_vgem@basic-write.html

  * igt@prime_vgem@coherency-blt:
    - shard-mtlp:         NOTRUN -> [FAIL][276] ([i915#8445])
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-8/igt@prime_vgem@coherency-blt.html

  * igt@sysfs_heartbeat_interval@mixed@vecs0:
    - shard-mtlp:         NOTRUN -> [FAIL][277] ([i915#1731])
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@sysfs_heartbeat_interval@mixed@vecs0.html

  * igt@sysfs_timeslice_duration@timeout@vecs0:
    - shard-mtlp:         NOTRUN -> [TIMEOUT][278] ([i915#6950])
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-7/igt@sysfs_timeslice_duration@timeout@vecs0.html

  * igt@tools_test@sysfs_l3_parity:
    - shard-mtlp:         NOTRUN -> [SKIP][279] ([i915#4818])
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-5/igt@tools_test@sysfs_l3_parity.html

  * igt@v3d/v3d_get_param@get-bad-param:
    - shard-mtlp:         NOTRUN -> [SKIP][280] ([i915#2575]) +21 other tests skip
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-2/igt@v3d/v3d_get_param@get-bad-param.html

  * igt@v3d/v3d_perfmon@create-perfmon-exceed:
    - shard-dg1:          NOTRUN -> [SKIP][281] ([i915#2575]) +1 other test skip
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@v3d/v3d_perfmon@create-perfmon-exceed.html

  * igt@v3d/v3d_submit_cl@bad-multisync-extension:
    - shard-apl:          NOTRUN -> [SKIP][282] ([fdo#109271]) +52 other tests skip
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl6/igt@v3d/v3d_submit_cl@bad-multisync-extension.html

  * igt@v3d/v3d_submit_cl@bad-multisync-in-sync:
    - shard-rkl:          NOTRUN -> [SKIP][283] ([fdo#109315]) +10 other tests skip
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@v3d/v3d_submit_cl@bad-multisync-in-sync.html

  * igt@v3d/v3d_submit_cl@bad-multisync-out-sync:
    - shard-dg2:          NOTRUN -> [SKIP][284] ([i915#2575]) +16 other tests skip
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@v3d/v3d_submit_cl@bad-multisync-out-sync.html

  * igt@v3d/v3d_submit_csd@bad-flag:
    - shard-tglu:         NOTRUN -> [SKIP][285] ([fdo#109315] / [i915#2575]) +1 other test skip
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-4/igt@v3d/v3d_submit_csd@bad-flag.html

  * igt@vc4/vc4_mmap@mmap-bo:
    - shard-dg2:          NOTRUN -> [SKIP][286] ([i915#7711]) +9 other tests skip
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@vc4/vc4_mmap@mmap-bo.html

  * igt@vc4/vc4_perfmon@create-perfmon-exceed:
    - shard-mtlp:         NOTRUN -> [SKIP][287] ([i915#7711]) +13 other tests skip
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-4/igt@vc4/vc4_perfmon@create-perfmon-exceed.html

  * igt@vc4/vc4_tiling@get-bad-modifier:
    - shard-rkl:          NOTRUN -> [SKIP][288] ([i915#7711]) +6 other tests skip
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@vc4/vc4_tiling@get-bad-modifier.html

  * igt@vc4/vc4_wait_bo@used-bo-1ns:
    - shard-dg1:          NOTRUN -> [SKIP][289] ([i915#7711]) +1 other test skip
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@vc4/vc4_wait_bo@used-bo-1ns.html

  
#### Possible fixes ####

  * igt@drm_fdinfo@most-busy-check-all@rcs0:
    - shard-rkl:          [FAIL][290] ([i915#7742]) -> [PASS][291] +1 other test pass
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-4/igt@drm_fdinfo@most-busy-check-all@rcs0.html
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@drm_fdinfo@most-busy-check-all@rcs0.html

  * igt@gem_ctx_exec@basic-nohangcheck:
    - shard-rkl:          [FAIL][292] ([i915#6268]) -> [PASS][293]
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-1/igt@gem_ctx_exec@basic-nohangcheck.html
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-4/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_eio@reset-stress:
    - shard-dg2:          [FAIL][294] ([i915#5784]) -> [PASS][295]
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-3/igt@gem_eio@reset-stress.html
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@gem_eio@reset-stress.html

  * igt@gem_exec_endless@dispatch@vcs1:
    - shard-tglu:         [TIMEOUT][296] ([i915#3778]) -> [PASS][297]
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-tglu-2/igt@gem_exec_endless@dispatch@vcs1.html
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-4/igt@gem_exec_endless@dispatch@vcs1.html

  * igt@gem_exec_flush@basic-batch-kernel-default-uc:
    - shard-mtlp:         [DMESG-FAIL][298] ([i915#8962]) -> [PASS][299]
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-mtlp-4/igt@gem_exec_flush@basic-batch-kernel-default-uc.html
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@gem_exec_flush@basic-batch-kernel-default-uc.html

  * igt@gem_exec_suspend@basic-s0@lmem0:
    - shard-dg2:          [INCOMPLETE][300] -> [PASS][301]
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-1/igt@gem_exec_suspend@basic-s0@lmem0.html
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-3/igt@gem_exec_suspend@basic-s0@lmem0.html

  * igt@gem_mmap_gtt@fault-concurrent-y:
    - shard-snb:          [INCOMPLETE][302] ([i915#5161]) -> [PASS][303]
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-snb1/igt@gem_mmap_gtt@fault-concurrent-y.html
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-snb1/igt@gem_mmap_gtt@fault-concurrent-y.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-apl:          [INCOMPLETE][304] ([i915#5566]) -> [PASS][305]
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-apl1/igt@gen9_exec_parse@allowed-single.html
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl6/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_pm_rpm@dpms-mode-unset-lpsp:
    - shard-dg1:          [SKIP][306] ([i915#1397]) -> [PASS][307]
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-12/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html
   [307]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@i915_pm_rpm@i2c:
    - shard-dg2:          [FAIL][308] ([i915#8717]) -> [PASS][309]
   [308]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-2/igt@i915_pm_rpm@i2c.html
   [309]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@i915_pm_rpm@i2c.html

  * igt@i915_pm_rps@reset:
    - shard-dg1:          [FAIL][310] -> [PASS][311]
   [310]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-12/igt@i915_pm_rps@reset.html
   [311]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@i915_pm_rps@reset.html
    - shard-tglu:         [FAIL][312] -> [PASS][313]
   [312]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-tglu-2/igt@i915_pm_rps@reset.html
   [313]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-5/igt@i915_pm_rps@reset.html

  * igt@i915_suspend@basic-s2idle-without-i915:
    - shard-dg1:          [DMESG-WARN][314] ([i915#4391] / [i915#4423]) -> [PASS][315]
   [314]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-17/igt@i915_suspend@basic-s2idle-without-i915.html
   [315]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@i915_suspend@basic-s2idle-without-i915.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip:
    - shard-mtlp:         [FAIL][316] ([i915#5138]) -> [PASS][317]
   [316]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-mtlp-3/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
   [317]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-mtlp-1/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - shard-tglu:         [FAIL][318] ([i915#3743]) -> [PASS][319] +2 other tests pass
   [318]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-tglu-10/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
   [319]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-tglu-2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-cpu:
    - shard-dg2:          [FAIL][320] ([i915#6880]) -> [PASS][321] +2 other tests pass
   [320]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-cpu.html
   [321]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-1/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-dp-1:
    - shard-apl:          [INCOMPLETE][322] ([i915#180] / [i915#9392]) -> [PASS][323] +1 other test pass
   [322]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-apl2/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-dp-1.html
   [323]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl7/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-dp-1.html

  * {igt@kms_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a}:
    - shard-rkl:          [SKIP][324] ([i915#1937]) -> [PASS][325]
   [324]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-4/igt@kms_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a.html
   [325]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@kms_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
    - shard-rkl:          [INCOMPLETE][326] ([i915#8875]) -> [PASS][327] +1 other test pass
   [326]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-1/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
   [327]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html

  * igt@kms_sysfs_edid_timing:
    - shard-dg2:          [FAIL][328] ([IGT#2]) -> [PASS][329]
   [328]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg2-2/igt@kms_sysfs_edid_timing.html
   [329]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg2-11/igt@kms_sysfs_edid_timing.html

  * igt@perf_pmu@frequency@gt0:
    - shard-apl:          [SKIP][330] ([fdo#109271]) -> [PASS][331]
   [330]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-apl7/igt@perf_pmu@frequency@gt0.html
   [331]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-apl3/igt@perf_pmu@frequency@gt0.html
    - shard-glk:          [SKIP][332] ([fdo#109271]) -> [PASS][333]
   [332]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-glk9/igt@perf_pmu@frequency@gt0.html
   [333]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-glk6/igt@perf_pmu@frequency@gt0.html
    - shard-snb:          [SKIP][334] ([fdo#109271]) -> [PASS][335]
   [334]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-snb6/igt@perf_pmu@frequency@gt0.html
   [335]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-snb6/igt@perf_pmu@frequency@gt0.html

  
#### Warnings ####

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-rkl:          [SKIP][336] ([fdo#110189] / [i915#3955]) -> [SKIP][337] ([i915#3955])
   [336]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-2/igt@kms_fbcon_fbt@psr-suspend.html
   [337]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-6/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_force_connector_basic@force-load-detect:
    - shard-rkl:          [SKIP][338] ([fdo#109285] / [i915#4098]) -> [SKIP][339] ([fdo#109285])
   [338]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-2/igt@kms_force_connector_basic@force-load-detect.html
   [339]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-7/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-gtt:
    - shard-dg1:          [SKIP][340] ([i915#4423] / [i915#8708]) -> [SKIP][341] ([i915#8708])
   [340]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-17/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-gtt.html
   [341]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-17/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-gtt.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-rkl:          [SKIP][342] ([i915#4816]) -> [SKIP][343] ([i915#4070] / [i915#4816])
   [342]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-rkl-7/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
   [343]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-rkl-2/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_psr@cursor_plane_move:
    - shard-dg1:          [SKIP][344] ([i915#1072]) -> [SKIP][345] ([i915#1072] / [i915#4078]) +1 other test skip
   [344]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-14/igt@kms_psr@cursor_plane_move.html
   [345]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-12/igt@kms_psr@cursor_plane_move.html

  * igt@kms_psr@sprite_plane_onoff:
    - shard-dg1:          [SKIP][346] ([i915#1072] / [i915#4078]) -> [SKIP][347] ([i915#1072])
   [346]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13681/shard-dg1-12/igt@kms_psr@sprite_plane_onoff.html
   [347]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/shard-dg1-19/igt@kms_psr@sprite_plane_onoff.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [IGT#2]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/2
  [fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109302]: https://bugs.freedesktop.org/show_bug.cgi?id=109302
  [fdo#109303]: https://bugs.freedesktop.org/show_bug.cgi?id=109303
  [fdo#109312]: https://bugs.freedesktop.org/show_bug.cgi?id=109312
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#109506]: https://bugs.freedesktop.org/show_bug.cgi?id=109506
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111656]: https://bugs.freedesktop.org/show_bug.cgi?id=111656
  [fdo#111767]: https://bugs.freedesktop.org/show_bug.cgi?id=111767
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112283]: https://bugs.freedesktop.org/show_bug.cgi?id=112283
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
  [i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
  [i915#1731]: https://gitlab.freedesktop.org/drm/intel/issues/1731
  [i915#1769]: https://gitlab.freedesktop.org/drm/intel/issues/1769
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1937]: https://gitlab.freedesktop.org/drm/intel/issues/1937
  [i915#2017]: https://gitlab.freedesktop.org/drm/intel/issues/2017
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2434]: https://gitlab.freedesktop.org/drm/intel/issues/2434
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/intel/issues/2846
  [i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
  [i915#3023]: https://gitlab.freedesktop.org/drm/intel/issues/3023
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/intel/issues/3299
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3361]: https://gitlab.freedesktop.org/drm/intel/issues/3361
  [i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
  [i915#3469]: https://gitlab.freedesktop.org/drm/intel/issues/3469
  [i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
  [i915#3546]: https://gitlab.freedesktop.org/drm/intel/issues/3546
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3582]: https://gitlab.freedesktop.org/drm/intel/issues/3582
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3734]: https://gitlab.freedesktop.org/drm/intel/issues/3734
  [i915#3742]: https://gitlab.freedesktop.org/drm/intel/issues/3742
  [i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
  [i915#3778]: https://gitlab.freedesktop.org/drm/intel/issues/3778
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#3955]: https://gitlab.freedesktop.org/drm/intel/issues/3955
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4087]: https://gitlab.freedesktop.org/drm/intel/issues/4087
  [i915#4098]: https://gitlab.freedesktop.org/drm/intel/issues/4098
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4235]: https://gitlab.freedesktop.org/drm/intel/issues/4235
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4349]: https://gitlab.freedesktop.org/drm/intel/issues/4349
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4423]: https://gitlab.freedesktop.org/drm/intel/issues/4423
  [i915#4473]: https://gitlab.freedesktop.org/drm/intel/issues/4473
  [i915#4525]: https://gitlab.freedesktop.org/drm/intel/issues/4525
  [i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4771]: https://gitlab.freedesktop.org/drm/intel/issues/4771
  [i915#4812]: https://gitlab.freedesktop.org/drm/intel/issues/4812
  [i915#4816]: https://gitlab.freedesktop.org/drm/intel/issues/4816
  [i915#4818]: https://gitlab.freedesktop.org/drm/intel/issues/4818
  [i915#4852]: https://gitlab.freedesktop.org/drm/intel/issues/4852
  [i915#4860]: https://gitlab.freedesktop.org/drm/intel/issues/4860
  [i915#4873]: https://gitlab.freedesktop.org/drm/intel/issues/4873
  [i915#4879]: https://gitlab.freedesktop.org/drm/intel/issues/4879
  [i915#4880]: https://gitlab.freedesktop.org/drm/intel/issues/4880
  [i915#4881]: https://gitlab.freedesktop.org/drm/intel/issues/4881
  [i915#4885]: https://gitlab.freedesktop.org/drm/intel/issues/4885
  [i915#5107]: https://gitlab.freedesktop.org/drm/intel/issues/5107
  [i915#5138]: https://gitlab.freedesktop.org/drm/intel/issues/5138
  [i915#5161]: https://gitlab.freedesktop.org/drm/intel/issues/5161
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5289]: https://gitlab.freedesktop.org/drm/intel/issues/5289
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5460]: https://gitlab.freedesktop.org/drm/intel/issues/5460
  [i915#5461]: https://gitlab.freedesktop.org/drm/intel/issues/5461
  [i915#5566]: https://gitlab.freedesktop.org/drm/intel/issues/5566
  [i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
  [i915#5793]: https://gitlab.freedesktop.org/drm/intel/issues/5793
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6187]: https://gitlab.freedesktop.org/drm/intel/issues/6187
  [i915#6228]: https://gitlab.freedesktop.org/drm/intel/issues/6228
  [i915#6268]: https://gitlab.freedesktop.org/drm/intel/issues/6268
  [i915#6334]: https://gitlab.freedesktop.org/drm/intel/issues/6334
  [i915#6335]: https://gitlab.freedesktop.org/drm/intel/issues/6335
  [i915#6412]: https://gitlab.freedesktop.org/drm/intel/issues/6412
  [i915#6524]: https://gitlab.freedesktop.org/drm/intel/issues/6524
  [i915#6537]: https://gitlab.freedesktop.org/drm/intel/issues/6537
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#6590]: https://gitlab.freedesktop.org/drm/intel/issues/6590
  [i915#6621]: https://gitlab.freedesktop.org/drm/intel/issues/6621
  [i915#6645]: https://gitlab.freedesktop.org/drm/intel/issues/6645
  [i915#6768]: https://gitlab.freedesktop.org/drm/intel/issues/6768
  [i915#6806]: https://gitlab.freedesktop.org/drm/intel/issues/6806
  [i915#6880]: https://gitlab.freedesktop.org/drm/intel/issues/6880
  [i915#6892]: https://gitlab.freedesktop.org/drm/intel/issues/6892
  [i915#6944]: https://gitlab.freedesktop.org/drm/intel/issues/6944
  [i915#6950]: https://gitlab.freedesktop.org/drm/intel/issues/6950
  [i915#6953]: https://gitlab.freedesktop.org/drm/intel/issues/6953
  [i915#7069]: https://gitlab.freedesktop.org/drm/intel/issues/7069
  [i915#7118]: https://gitlab.freedesktop.org/drm/intel/issues/7118
  [i915#7173]: https://gitlab.freedesktop.org/drm/intel/issues/7173
  [i915#7213]: https://gitlab.freedesktop.org/drm/intel/issues/7213
  [i915#7297]: https://gitlab.freedesktop.org/drm/intel/issues/7297
  [i915#7387]: https://gitlab.freedesktop.org/drm/intel/issues/7387
  [i915#7697]: https://gitlab.freedesktop.org/drm/intel/issues/7697
  [i915#7701]: https://gitlab.freedesktop.org/drm/intel/issues/7701
  [i915#7707]: https://gitlab.freedesktop.org/drm/intel/issues/7707
  [i915#7711]: https://gitlab.freedesktop.org/drm/intel/issues/7711
  [i915#7742]: https://gitlab.freedesktop.org/drm/intel/issues/7742
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7975]: https://gitlab.freedesktop.org/drm/intel/issues/7975
  [i915#8063]: https://gitlab.freedesktop.org/drm/intel/issues/8063
  [i915#8213]: https://gitlab.freedesktop.org/drm/intel/issues/8213
  [i915#8228]: https://gitlab.freedesktop.org/drm/intel/issues/8228
  [i915#8247]: https://gitlab.freedesktop.org/drm/intel/issues/8247
  [i915#8289]: https://gitlab.freedesktop.org/drm/intel/issues/8289
  [i915#8292]: https://gitlab.freedesktop.org/drm/intel/issues/8292
  [i915#8381]: https://gitlab.freedesktop.org/drm/intel/issues/8381
  [i915#8399]: https://gitlab.freedesktop.org/drm/intel/issues/8399
  [i915#8411]: https://gitlab.freedesktop.org/drm/intel/issues/8411
  [i915#8414]: https://gitlab.freedesktop.org/drm/intel/issues/8414
  [i915#8428]: https://gitlab.freedesktop.org/drm/intel/issues/8428
  [i915#8431]: https://gitlab.freedesktop.org/drm/intel/issues/8431
  [i915#8436]: https://gitlab.freedesktop.org/drm/intel/issues/8436
  [i915#8445]: https://gitlab.freedesktop.org/drm/intel/issues/8445
  [i915#8555]: https://gitlab.freedesktop.org/drm/intel/issues/8555
  [i915#8623]: https://gitlab.freedesktop.org/drm/intel/issues/8623
  [i915#8708]: https://gitlab.freedesktop.org/drm/intel/issues/8708
  [i915#8709]: https://gitlab.freedesktop.org/drm/intel/issues/8709
  [i915#8717]: https://gitlab.freedesktop.org/drm/intel/issues/8717
  [i915#8724]: https://gitlab.freedesktop.org/drm/intel/issues/8724
  [i915#8806]: https://gitlab.freedesktop.org/drm/intel/issues/8806
  [i915#8807]: https://gitlab.freedesktop.org/drm/intel/issues/8807
  [i915#8809]: https://gitlab.freedesktop.org/drm/intel/issues/8809
  [i915#8810]: https://gitlab.freedesktop.org/drm/intel/issues/8810
  [i915#8814]: https://gitlab.freedesktop.org/drm/intel/issues/8814
  [i915#8827]: https://gitlab.freedesktop.org/drm/intel/issues/8827
  [i915#8841]: https://gitlab.freedesktop.org/drm/intel/issues/8841
  [i915#8850]: https://gitlab.freedesktop.org/drm/intel/issues/8850
  [i915#8875]: https://gitlab.freedesktop.org/drm/intel/issues/8875
  [i915#8925]: https://gitlab.freedesktop.org/drm/intel/issues/8925
  [i915#8962]: https://gitlab.freedesktop.org/drm/intel/issues/8962
  [i915#9010]: https://gitlab.freedesktop.org/drm/intel/issues/9010
  [i915#9067]: https://gitlab.freedesktop.org/drm/intel/issues/9067
  [i915#9121]: https://gitlab.freedesktop.org/drm/intel/issues/9121
  [i915#9157]: https://gitlab.freedesktop.org/drm/intel/issues/9157
  [i915#9196]: https://gitlab.freedesktop.org/drm/intel/issues/9196
  [i915#9226]: https://gitlab.freedesktop.org/drm/intel/issues/9226
  [i915#9227]: https://gitlab.freedesktop.org/drm/intel/issues/9227
  [i915#9261]: https://gitlab.freedesktop.org/drm/intel/issues/9261
  [i915#9262]: https://gitlab.freedesktop.org/drm/intel/issues/9262
  [i915#9293]: https://gitlab.freedesktop.org/drm/intel/issues/9293
  [i915#9298]: https://gitlab.freedesktop.org/drm/intel/issues/9298
  [i915#9318]: https://gitlab.freedesktop.org/drm/intel/issues/9318
  [i915#9323]: https://gitlab.freedesktop.org/drm/intel/issues/9323
  [i915#9337]: https://gitlab.freedesktop.org/drm/intel/issues/9337
  [i915#9392]: https://gitlab.freedesktop.org/drm/intel/issues/9392
  [i915#9412]: https://gitlab.freedesktop.org/drm/intel/issues/9412
  [i915#9414]: https://gitlab.freedesktop.org/drm/intel/issues/9414


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7503 -> IGTPW_9881
  * Piglit: piglit_4509 -> None

  CI-20190529: 20190529
  CI_DRM_13681: b57407d0de043fc22b000a941a404ab103849e06 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_9881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html
  IGT_7503: 7503
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9881/index.html

[-- Attachment #2: Type: text/html, Size: 113133 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface
  2023-09-26 13:00 ` [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
  2023-09-26 16:50   ` Tvrtko Ursulin
@ 2023-09-27  4:58   ` Aravind Iddamsetty
  1 sibling, 0 replies; 35+ messages in thread
From: Aravind Iddamsetty @ 2023-09-27  4:58 UTC (permalink / raw)
  To: Francois Dugast, igt-dev; +Cc: Rodrigo Vivi


On 26/09/23 18:30, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe/pmu: Enable PMU interface")
If you can pick up the fixup on top of this, it would be better.

otherwise Acked-by: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>

Thanks,
Aravind
>
> Cc: Francois Dugast <francois.dugast@intel.com>
> Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  include/drm-uapi/xe_drm.h | 38 ++++++++++++++++++++++++++++++++++++++
>  1 file changed, 38 insertions(+)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 804c02270..643eb6e82 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -1053,6 +1053,44 @@ struct drm_xe_vm_madvise {
>  	__u64 reserved[2];
>  };
>  
> +/**
> + * DOC: XE PMU event config IDs
> + *
> + * Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
> + * as part of perf_event_open syscall to read a particular event.
> + *
> + * For example to open the XE_PMU_INTERRUPTS(0):
> + *
> + * .. code-block:: C
> + *	struct perf_event_attr attr;
> + *	long long count;
> + *	int cpu = 0;
> + *	int fd;
> + *
> + *	memset(&attr, 0, sizeof(struct perf_event_attr));
> + *	attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
> + *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
> + *	attr.use_clockid = 1;
> + *	attr.clockid = CLOCK_MONOTONIC;
> + *	attr.config = XE_PMU_INTERRUPTS(0);
> + *
> + *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
> + */
> +
> +/*
> + * Top bits of every counter are GT id.
> + */
> +#define __XE_PMU_GT_SHIFT (56)
> +
> +#define ___XE_PMU_OTHER(gt, x) \
> +	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
> +
> +#define XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)
> +#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
> +#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
> +#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
> +#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
> +
>  #if defined(__cplusplus)
>  }
>  #endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id
  2023-09-26 16:47   ` Tvrtko Ursulin
@ 2023-09-27 16:53     ` Rodrigo Vivi
  2023-09-28  8:19       ` Tvrtko Ursulin
  0 siblings, 1 reply; 35+ messages in thread
From: Rodrigo Vivi @ 2023-09-27 16:53 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: igt-dev

On Tue, Sep 26, 2023 at 05:47:22PM +0100, Tvrtko Ursulin wrote:
> 
> On 26/09/2023 14:00, Francois Dugast wrote:
> > From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > 
> > Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")
> > 
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> >   include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
> >   tests/intel/xe_query.c    |  2 +-
> >   2 files changed, 44 insertions(+), 23 deletions(-)
> > 
> > diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> > index 13c693393..68cc5e051 100644
> > --- a/include/drm-uapi/xe_drm.h
> > +++ b/include/drm-uapi/xe_drm.h
> > @@ -337,6 +337,47 @@ struct drm_xe_query_config {
> >   	__u64 info[];
> >   };
> > +/**
> > + * struct drm_xe_query_gt - describe an individual GT.
> > + *
> > + * To be used with drm_xe_query_gts, which will return a list with all the
> > + * existing GT individual descriptions.
> > + * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
> > + * implementing graphics and/or media operations.
> > + */
> > +struct drm_xe_query_gt {
> > +#define XE_QUERY_GT_TYPE_MAIN		0
> > +#define XE_QUERY_GT_TYPE_REMOTE		1
> > +#define XE_QUERY_GT_TYPE_MEDIA		2
> > +	/** @type: GT type: Main, Remote, or Media */
> > +	__u16 type;
> > +	/** @gt_id: Unique ID of this GT within the PCI Device */
> > +	__u16 gt_id;
> > +	/** @clock_freq: A clock frequency for timestamp */
> > +	__u32 clock_freq;
> > +	/** @features: Reserved for future information about GT features */
> > +	__u64 features;
> > +	/**
> > +	 * @native_mem_regions: Bit mask of instances from
> > +	 * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
> > +	 * direct access.
> > +	 */
> > +	__u64 native_mem_regions;
> 
> s/native/local/ ?
> 
> Although, what was wrong with the distance query? It was more future proof (you
> can't ever end up with fast, slow, slower, ...) and avoids somewhat vague,
> non-technical names like "slow".
> 
> > +	/**
> > +	 * @slow_mem_regions: Bit mask of instances from
> > +	 * drm_xe_query_mem_usage that this GT can indirectly access, although
> > +	 * they live on a different GPU/Tile.
> > +	 */
> > +	__u64 slow_mem_regions;
> > +	/**
> > +	 * @inaccessible_mem_regions: Bit mask of instances from
> > +	 * drm_xe_query_mem_usage that is not accessible by this GT at all.
> > +	 */
> > +	__u64 inaccessible_mem_regions;
> 
> Equal to ~(native | slow) so redundant?
> 
> Btw drm_xe_query_mem_usage is just a list of regions, nothing about usage
> like memory usage?

I agree with all your comments here. The way xe is currently handling
memory regions, with gt mixed into the middle, is strange, and I'm going to
scrutinize and change that in a follow-up.

This patch is just about renaming 'instance' to 'gt_id', and this
IGT change ended up also picking up the movement of the gt struct
out of the list struct definition.

So, basically the only relevant portion of this patch is s/instance/gt_id.
Everything else should be follow-up work.

> 
> Regards,
> 
> Tvrtko
> 
> > +	/** @reserved: Reserved */
> > +	__u64 reserved[8];
> > +};
> > +
> >   /**
> >    * struct drm_xe_query_gts - describe GTs
> >    *
> > @@ -347,30 +388,10 @@ struct drm_xe_query_config {
> >   struct drm_xe_query_gts {
> >   	/** @num_gt: number of GTs returned in gts */
> >   	__u32 num_gt;
> > -
> >   	/** @pad: MBZ */
> >   	__u32 pad;
> > -
> > -	/**
> > -	 * @gts: The GTs returned for this device
> > -	 *
> > -	 * TODO: convert drm_xe_query_gt to proper kernel-doc.
> > -	 * TODO: Perhaps info about every mem region relative to this GT? e.g.
> > -	 * bandwidth between this GT and remote region?
> > -	 */
> > -	struct drm_xe_query_gt {
> > -#define XE_QUERY_GT_TYPE_MAIN		0
> > -#define XE_QUERY_GT_TYPE_REMOTE		1
> > -#define XE_QUERY_GT_TYPE_MEDIA		2
> > -		__u16 type;
> > -		__u16 instance;
> > -		__u32 clock_freq;
> > -		__u64 features;
> > -		__u64 native_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
> > -		__u64 slow_mem_regions;		/* bit mask of instances from drm_xe_query_mem_usage */
> > -		__u64 inaccessible_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
> > -		__u64 reserved[8];
> > -	} gts[];
> > +	/** @gts: The GT list returned for this device */
> > +	struct drm_xe_query_gt gts[];
> >   };
> >   /**
> > diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> > index acf069f46..eb8d52897 100644
> > --- a/tests/intel/xe_query.c
> > +++ b/tests/intel/xe_query.c
> > @@ -279,7 +279,7 @@ test_query_gts(int fd)
> >   	for (i = 0; i < gts->num_gt; i++) {
> >   		igt_info("type: %d\n", gts->gts[i].type);
> > -		igt_info("instance: %d\n", gts->gts[i].instance);
> > +		igt_info("gt_id: %d\n", gts->gts[i].gt_id);
> >   		igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
> >   		igt_info("features: 0x%016llx\n", gts->gts[i].features);
> >   		igt_info("native_mem_regions: 0x%016llx\n",

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface
  2023-09-26 16:50   ` Tvrtko Ursulin
@ 2023-09-27 16:55     ` Rodrigo Vivi
  2023-09-29  6:01       ` Aravind Iddamsetty
  0 siblings, 1 reply; 35+ messages in thread
From: Rodrigo Vivi @ 2023-09-27 16:55 UTC (permalink / raw)
  To: Tvrtko Ursulin, Aravind Iddamsetty; +Cc: igt-dev

On Tue, Sep 26, 2023 at 05:50:53PM +0100, Tvrtko Ursulin wrote:
> 
> On 26/09/2023 14:00, Francois Dugast wrote:
> > From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > 
> > Align with commit ("drm/xe/pmu: Enable PMU interface")
> > 
> > Cc: Francois Dugast <francois.dugast@intel.com>
> > Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > ---
> >   include/drm-uapi/xe_drm.h | 38 ++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 38 insertions(+)
> > 
> > diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> > index 804c02270..643eb6e82 100644
> > --- a/include/drm-uapi/xe_drm.h
> > +++ b/include/drm-uapi/xe_drm.h
> > @@ -1053,6 +1053,44 @@ struct drm_xe_vm_madvise {
> >   	__u64 reserved[2];
> >   };
> > +/**
> > + * DOC: XE PMU event config IDs
> > + *
> > + * Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
> > + * as part of perf_event_open syscall to read a particular event.
> > + *
> > + * For example to open the XE_PMU_INTERRUPTS(0):
> > + *
> > + * .. code-block:: C
> > + *	struct perf_event_attr attr;
> > + *	long long count;
> > + *	int cpu = 0;
> > + *	int fd;
> > + *
> > + *	memset(&attr, 0, sizeof(struct perf_event_attr));
> > + *	attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
> > + *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
> > + *	attr.use_clockid = 1;
> > + *	attr.clockid = CLOCK_MONOTONIC;
> > + *	attr.config = XE_PMU_INTERRUPTS(0);
> > + *
> > + *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
> > + */
> > +
> > +/*
> > + * Top bits of every counter are GT id.
> > + */
> > +#define __XE_PMU_GT_SHIFT (56)
> > +
> > +#define ___XE_PMU_OTHER(gt, x) \
> > +	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
> > +
> > +#define XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)
> 
> AFAIR interrupts is probably the least useful counter and I don't remember
> that anyone asked much about it. Therefore I'd say it could be worth seeing
> if you could just drop it. Changes to intel_gpu_top to work with the below
> set (no per engine, no frequencies) will have to be extensive already
> anyway.

Aravind, could you please reply to that?

> 
> Regards,
> 
> Tvrtko
> 
> > +#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
> > +#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
> > +#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
> > +#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
> > +
> >   #if defined(__cplusplus)
> >   }
> >   #endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id
  2023-09-27 16:53     ` Rodrigo Vivi
@ 2023-09-28  8:19       ` Tvrtko Ursulin
  0 siblings, 0 replies; 35+ messages in thread
From: Tvrtko Ursulin @ 2023-09-28  8:19 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: igt-dev


On 27/09/2023 17:53, Rodrigo Vivi wrote:
> On Tue, Sep 26, 2023 at 05:47:22PM +0100, Tvrtko Ursulin wrote:
>>
>> On 26/09/2023 14:00, Francois Dugast wrote:
>>> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>>
>>> Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")
>>>
>>> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>>> ---
>>>    include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
>>>    tests/intel/xe_query.c    |  2 +-
>>>    2 files changed, 44 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
>>> index 13c693393..68cc5e051 100644
>>> --- a/include/drm-uapi/xe_drm.h
>>> +++ b/include/drm-uapi/xe_drm.h
>>> @@ -337,6 +337,47 @@ struct drm_xe_query_config {
>>>    	__u64 info[];
>>>    };
>>> +/**
>>> + * struct drm_xe_query_gt - describe an individual GT.
>>> + *
>>> + * To be used with drm_xe_query_gts, which will return a list with all the
>>> + * existing GT individual descriptions.
>>> + * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
>>> + * implementing graphics and/or media operations.
>>> + */
>>> +struct drm_xe_query_gt {
>>> +#define XE_QUERY_GT_TYPE_MAIN		0
>>> +#define XE_QUERY_GT_TYPE_REMOTE		1
>>> +#define XE_QUERY_GT_TYPE_MEDIA		2
>>> +	/** @type: GT type: Main, Remote, or Media */
>>> +	__u16 type;
>>> +	/** @gt_id: Unique ID of this GT within the PCI Device */
>>> +	__u16 gt_id;
>>> +	/** @clock_freq: A clock frequency for timestamp */
>>> +	__u32 clock_freq;
>>> +	/** @features: Reserved for future information about GT features */
>>> +	__u64 features;
>>> +	/**
>>> +	 * @native_mem_regions: Bit mask of instances from
>>> +	 * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
>>> +	 * direct access.
>>> +	 */
>>> +	__u64 native_mem_regions;
>>
>> s/native/local/ ?
>>
>> Although, what was wrong with the distance query? It was more future proof (you
>> can't ever end up with fast, slow, slower, ...) and avoids somewhat vague,
>> non-technical names like "slow".
>>
>>> +	/**
>>> +	 * @slow_mem_regions: Bit mask of instances from
>>> +	 * drm_xe_query_mem_usage that this GT can indirectly access, although
>>> +	 * they live on a different GPU/Tile.
>>> +	 */
>>> +	__u64 slow_mem_regions;
>>> +	/**
>>> +	 * @inaccessible_mem_regions: Bit mask of instances from
>>> +	 * drm_xe_query_mem_usage that is not accessible by this GT at all.
>>> +	 */
>>> +	__u64 inaccessible_mem_regions;
>>
>> Equal to ~(native | slow) so redundant?
>>
>> Btw drm_xe_query_mem_usage is just a list of regions, nothing about usage
>> like memory usage?
> 
> I agree with all your comments here. The way xe is currently handling
> memory regions, with gt mixed into the middle, is strange, and I'm going to
> scrutinize and change that in a follow-up.
> 
> This patch is just about renaming 'instance' to 'gt_id', and this
> IGT change ended up also picking up the movement of the gt struct
> out of the list struct definition.
> 
> So, basically the only relevant portion of this patch is s/instance/gt_id.
> Everything else should be follow-up work.

Ah okay. I was asked to review the gem_wsim adaptation for xe, so I went
looking for the xe uapi. My bad for looking in the wrong place. And from what
you say, I guess I can safely hold off on looking into the details until the
uapi settles down.

Regards,

Tvrtko

> 
>>
>> Regards,
>>
>> Tvrtko
>>
>>> +	/** @reserved: Reserved */
>>> +	__u64 reserved[8];
>>> +};
>>> +
>>>    /**
>>>     * struct drm_xe_query_gts - describe GTs
>>>     *
>>> @@ -347,30 +388,10 @@ struct drm_xe_query_config {
>>>    struct drm_xe_query_gts {
>>>    	/** @num_gt: number of GTs returned in gts */
>>>    	__u32 num_gt;
>>> -
>>>    	/** @pad: MBZ */
>>>    	__u32 pad;
>>> -
>>> -	/**
>>> -	 * @gts: The GTs returned for this device
>>> -	 *
>>> -	 * TODO: convert drm_xe_query_gt to proper kernel-doc.
>>> -	 * TODO: Perhaps info about every mem region relative to this GT? e.g.
>>> -	 * bandwidth between this GT and remote region?
>>> -	 */
>>> -	struct drm_xe_query_gt {
>>> -#define XE_QUERY_GT_TYPE_MAIN		0
>>> -#define XE_QUERY_GT_TYPE_REMOTE		1
>>> -#define XE_QUERY_GT_TYPE_MEDIA		2
>>> -		__u16 type;
>>> -		__u16 instance;
>>> -		__u32 clock_freq;
>>> -		__u64 features;
>>> -		__u64 native_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
>>> -		__u64 slow_mem_regions;		/* bit mask of instances from drm_xe_query_mem_usage */
>>> -		__u64 inaccessible_mem_regions;	/* bit mask of instances from drm_xe_query_mem_usage */
>>> -		__u64 reserved[8];
>>> -	} gts[];
>>> +	/** @gts: The GT list returned for this device */
>>> +	struct drm_xe_query_gt gts[];
>>>    };
>>>    /**
>>> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
>>> index acf069f46..eb8d52897 100644
>>> --- a/tests/intel/xe_query.c
>>> +++ b/tests/intel/xe_query.c
>>> @@ -279,7 +279,7 @@ test_query_gts(int fd)
>>>    	for (i = 0; i < gts->num_gt; i++) {
>>>    		igt_info("type: %d\n", gts->gts[i].type);
>>> -		igt_info("instance: %d\n", gts->gts[i].instance);
>>> +		igt_info("gt_id: %d\n", gts->gts[i].gt_id);
>>>    		igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
>>>    		igt_info("features: 0x%016llx\n", gts->gts[i].features);
>>>    		igt_info("native_mem_regions: 0x%016llx\n",

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface
  2023-09-27 16:55     ` Rodrigo Vivi
@ 2023-09-29  6:01       ` Aravind Iddamsetty
  0 siblings, 0 replies; 35+ messages in thread
From: Aravind Iddamsetty @ 2023-09-29  6:01 UTC (permalink / raw)
  To: Rodrigo Vivi, Tvrtko Ursulin; +Cc: igt-dev


On 27/09/23 22:25, Rodrigo Vivi wrote:
> On Tue, Sep 26, 2023 at 05:50:53PM +0100, Tvrtko Ursulin wrote:
>> On 26/09/2023 14:00, Francois Dugast wrote:
>>> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>>
>>> Align with commit ("drm/xe/pmu: Enable PMU interface")
>>>
>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>> Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>>> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>> ---
>>>   include/drm-uapi/xe_drm.h | 38 ++++++++++++++++++++++++++++++++++++++
>>>   1 file changed, 38 insertions(+)
>>>
>>> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
>>> index 804c02270..643eb6e82 100644
>>> --- a/include/drm-uapi/xe_drm.h
>>> +++ b/include/drm-uapi/xe_drm.h
>>> @@ -1053,6 +1053,44 @@ struct drm_xe_vm_madvise {
>>>   	__u64 reserved[2];
>>>   };
>>> +/**
>>> + * DOC: XE PMU event config IDs
>>> + *
>>> + * Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
>>> + * as part of perf_event_open syscall to read a particular event.
>>> + *
>>> + * For example to open the XE_PMU_INTERRUPTS(0):
>>> + *
>>> + * .. code-block:: C
>>> + *	struct perf_event_attr attr;
>>> + *	long long count;
>>> + *	int cpu = 0;
>>> + *	int fd;
>>> + *
>>> + *	memset(&attr, 0, sizeof(struct perf_event_attr));
>>> + *	attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
>>> + *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
>>> + *	attr.use_clockid = 1;
>>> + *	attr.clockid = CLOCK_MONOTONIC;
>>> + *	attr.config = XE_PMU_INTERRUPTS(0);
>>> + *
>>> + *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
>>> + */
>>> +
>>> +/*
>>> + * Top bits of every counter are GT id.
>>> + */
>>> +#define __XE_PMU_GT_SHIFT (56)
>>> +
>>> +#define ___XE_PMU_OTHER(gt, x) \
>>> +	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
>>> +
>>> +#define XE_PMU_INTERRUPTS(gt)			___XE_PMU_OTHER(gt, 0)
>> AFAIR interrupts is probably the least useful counter and I don't remember
>> that anyone asked much about it. Therefore I'd say it could be worth seeing
>> if you could just drop it. Changes to intel_gpu_top to work with the below
>> set (no per engine, no frequencies) will have to be extensive already
>> anyway.

Ok, will drop the interrupts counter.

Also, this is just the initial set; per-engine counters will follow, but sometime later.

Thanks,
Aravind.
> Aravind, could you please reply to that?
>
>> Regards,
>>
>> Tvrtko
>>
>>> +#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
>>> +#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
>>> +#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 3)
>>> +#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 4)
>>> +
>>>   #if defined(__cplusplus)
>>>   }
>>>   #endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2023-09-29  5:58 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-26 13:00 [igt-dev] [PATCH v3 00/24] uAPI Alignment - take 1 v3 Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 01/24] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
2023-09-26 16:50   ` Tvrtko Ursulin
2023-09-27 16:55     ` Rodrigo Vivi
2023-09-29  6:01       ` Aravind Iddamsetty
2023-09-27  4:58   ` Aravind Iddamsetty
2023-09-26 13:00 ` [igt-dev] [PATCH v3 02/24] tests/intel/xe_query: Add a test for querying cs cycles Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 03/24] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 04/24] drm-uapi/xe_drm: Remove MMIO ioctl and " Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 05/24] xe_exec_balancer: Enable parallel submission and compute mode Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 06/24] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 07/24] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 08/24] drm-uapi/xe: Use common drm_xe_ext_set_property extension Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 09/24] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 10/24] xe: Update to new VM bind uAPI Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 11/24] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
2023-09-26 16:47   ` Tvrtko Ursulin
2023-09-27 16:53     ` Rodrigo Vivi
2023-09-28  8:19       ` Tvrtko Ursulin
2023-09-26 13:00 ` [igt-dev] [PATCH v3 12/24] drm-uapi/xe: Remove unused field of drm_xe_query_gt Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 13/24] drm-uapi/xe: Rename gts to gt_list Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 14/24] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 15/24] drm-uapi/xe: Align with documentation updates Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 16/24] drm-uapi/xe: Align with Crystal Reference Clock updates Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 17/24] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 18/24] drm-uapi/xe: Align with uAPI to query micro-controler firmware version Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 19/24] drm-uapi/xe: Align with DRM_XE_DEVICE_QUERY_HWCONFIG documentation Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 20/24] drm-uapi/xe: Align with uAPI to pad to drm_xe_engine_class_instance Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 21/24] drm-uapi/xe: Align with uAPI update query HuC micro-controler firmware version Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 22/24] drm-uapi/xe: Align with uAPI update for query config num_params Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 23/24] drm-uapi/xe: Align with uAPI update to add DRM_ prefix in uAPI constants Francois Dugast
2023-09-26 13:00 ` [igt-dev] [PATCH v3 24/24] drm-uapi/xe: Align with uAPI update to add _FLAG to constants usable for flags Francois Dugast
2023-09-26 15:03 ` [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - take 1 (rev2) Patchwork
2023-09-26 15:14 ` [igt-dev] ✓ CI.xeBAT: " Patchwork
2023-09-27  2:20 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
