* [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe
@ 2023-04-28  6:22 Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 01/17] lib/xe_ioctl: Add missing header for direct resolving Zbigniew Kempczyński
                   ` (19 more replies)
  0 siblings, 20 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

This series touches intel-bb and intel-buf, which back the rendercopy,
gpgpu fill and media fill facilities. Both paths (i915 and xe) are
exercised on CI to check there is no regression in the common part.

v2: fix allocator issue
v3: address review comments (Kamil)
v4: address review comments (Christoph)
v5: - address review comments (Kamil)
    - add region awareness in bufops (Zbigniew)
v6: alter igt_draw and kms_big_fb to be region aware (Zbigniew)
v7: - separate random->reloc docs fix patch (Kamil)
    - reorganize driver (i915/xe) blocks (Kamil)
    - fix xe_bound issue for single buffer (Zbigniew)
v8: separate change for RELOC in reset path (Christoph)

Zbigniew Kempczyński (17):
  lib/xe_ioctl: Add missing header for direct resolving
  lib/xe_query: Add region helpers and missing doc
  lib/xe_query: Remove commented out function prototype
  lib/intel_allocator: Add allocator support for Xe
  lib/drmtest: Add driver enum for i915/xe
  lib/intel_bufops: Add Xe support in bufops
  lib/intel_batchbuffer: Rename i915 -> fd as preparation step for xe
  lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset
    path
  lib/intel_batchbuffer: Update intel-bb docs
  lib/intel_batchbuffer: Add Xe support in intel-bb
  tests/xe_intel_bb: Check if intel-bb Xe support correctness
  tests/xe-fast-feedback: Add xe_intel_bb test to BAT
  lib/gpgpu_fill: Use RENDER engine flag to work on Xe
  tests/xe_gpgpu_fill: Exercise gpgpu fill on Xe
  lib/igt_fb: For xe assume vram is used on discrete
  lib/igt_draw: Pass region while building intel_buf from flink
  tests/kms_big_fb: Deduce region for xe framebuffer

 lib/drmtest.h                            |    8 +
 lib/gpgpu_fill.c                         |    4 +-
 lib/gpu_cmds.c                           |    2 +-
 lib/igt_draw.c                           |   14 +-
 lib/igt_fb.c                             |   17 +-
 lib/intel_allocator.c                    |   40 +-
 lib/intel_aux_pgtable.c                  |    2 +-
 lib/intel_batchbuffer.c                  |  421 ++++++--
 lib/intel_batchbuffer.h                  |   22 +-
 lib/intel_bufops.c                       |  123 ++-
 lib/intel_bufops.h                       |   24 +-
 lib/xe/xe_ioctl.h                        |    1 +
 lib/xe/xe_query.c                        |   45 +
 lib/xe/xe_query.h                        |    3 +-
 tests/i915/gem_caching.c                 |    4 +-
 tests/i915/gem_pxp.c                     |    2 +-
 tests/i915/kms_big_fb.c                  |   10 +-
 tests/intel-ci/xe-fast-feedback.testlist |   19 +
 tests/meson.build                        |    2 +
 tests/xe/xe_gpgpu_fill.c                 |  135 +++
 tests/xe/xe_intel_bb.c                   | 1185 ++++++++++++++++++++++
 21 files changed, 1927 insertions(+), 156 deletions(-)
 create mode 100644 tests/xe/xe_gpgpu_fill.c
 create mode 100644 tests/xe/xe_intel_bb.c

-- 
2.34.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [igt-dev] [PATCH i-g-t v8 01/17] lib/xe_ioctl: Add missing header for direct resolving
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 02/17] lib/xe_query: Add region helpers and missing doc Zbigniew Kempczyński
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Users of xe_ioctl.h expect all of its types to resolve. Add the missing
stddef.h header, which provides the size_t definition.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
---
 lib/xe/xe_ioctl.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index d8c40eda01..049cd183d6 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -11,6 +11,7 @@
 #ifndef XE_IOCTL_H
 #define XE_IOCTL_H
 
+#include <stddef.h>
 #include <stdint.h>
 #include <xe_drm.h>
 
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 02/17] lib/xe_query: Add region helpers and missing doc
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 01/17] lib/xe_ioctl: Add missing header for direct resolving Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 03/17] lib/xe_query: Remove commented out function prototype Zbigniew Kempczyński
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

To make it easier to iterate over memory regions and produce dynamic
subtests, add the xe_region_name() helper.

As Xe requires the buffer size to be aligned when creating a bo, add
the xe_min_page_size() helper.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
---
 lib/xe/xe_query.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 lib/xe/xe_query.h |  2 ++
 2 files changed, 47 insertions(+)

diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 2b627a78ac..bd5eb1d189 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -422,6 +422,13 @@ struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx)
 	return &xe_dev->hw_engines[idx];
 }
 
+/**
+ * xe_mem_region:
+ * @fd: xe device fd
+ * @region: region mask
+ *
+ * Returns memory region structure for @region mask.
+ */
 struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region)
 {
 	struct xe_device *xe_dev;
@@ -434,6 +441,44 @@ struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region)
 	return &xe_dev->mem_usage->regions[region_idx];
 }
 
+/**
+ * xe_region_name:
+ * @region: region mask
+ *
+ * Returns region string like "system" or "vram-n" where n=0...62.
+ */
+const char *xe_region_name(uint64_t region)
+{
+	static char **vrams;
+	int region_idx = ffs(region) - 1;
+
+	/* Populate the array */
+	if (!vrams) {
+		vrams = calloc(64, sizeof(char *));
+		for (int i = 0; i < 64; i++) {
+			if (i != 0)
+				asprintf(&vrams[i], "vram-%d", i - 1);
+			else
+				asprintf(&vrams[i], "system");
+			igt_assert(vrams[i]);
+		}
+	}
+
+	return vrams[region_idx];
+}
+
+/**
+ * xe_min_page_size:
+ * @fd: xe device fd
+ * @region: region mask
+ *
+ * Returns minimum page size for @region.
+ */
+uint32_t xe_min_page_size(int fd, uint64_t region)
+{
+	return xe_mem_region(fd, region)->min_page_size;
+}
+
 /**
  * xe_number_hw_engine:
  * @fd: xe device fd
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 0d4b810a10..f49acb1d7b 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -84,6 +84,8 @@ uint64_t vram_if_possible(int fd, int gt);
 struct drm_xe_engine_class_instance *xe_hw_engines(int fd);
 struct drm_xe_engine_class_instance *xe_hw_engine(int fd, int idx);
 struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region);
+const char *xe_region_name(uint64_t region);
+uint32_t xe_min_page_size(int fd, uint64_t region);
 unsigned int xe_number_hw_engines(int fd);
 bool xe_has_vram(int fd);
 //uint64_t xe_vram_size(int fd);
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 03/17] lib/xe_query: Remove commented out function prototype
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 01/17] lib/xe_ioctl: Add missing header for direct resolving Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 02/17] lib/xe_query: Add region helpers and missing doc Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 04/17] lib/intel_allocator: Add allocator support for Xe Zbigniew Kempczyński
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Remove unnecessary commented-out code.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
---
 lib/xe/xe_query.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index f49acb1d7b..cc6e7cefdc 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -88,7 +88,6 @@ const char *xe_region_name(uint64_t region);
 uint32_t xe_min_page_size(int fd, uint64_t region);
 unsigned int xe_number_hw_engines(int fd);
 bool xe_has_vram(int fd);
-//uint64_t xe_vram_size(int fd);
 uint64_t xe_vram_size(int fd, int gt);
 uint32_t xe_get_default_alignment(int fd);
 uint32_t xe_va_bits(int fd);
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 04/17] lib/intel_allocator: Add allocator support for Xe
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (2 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 03/17] lib/xe_query: Remove commented out function prototype Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 05/17] lib/drmtest: Add driver enum for i915/xe Zbigniew Kempczyński
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Start supporting va range configuration for the xe allocator.

When the allocator is opened it has to be aware of the vm range (start
and end). The i915 driver doesn't expose vm range information, so those
values have to be detected. For the xe driver we get the va size from
the kernel query, so the va end can be configured directly. At the
moment there is no autodetection of the va start for xe, which might
need to be addressed in the future if for some reason the lower
offsets are not usable.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com>
Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
---
 lib/intel_allocator.c | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 2b08dd5996..45c1168ab5 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -16,6 +16,7 @@
 #include "igt_map.h"
 #include "intel_allocator.h"
 #include "intel_allocator_msgchannel.h"
+#include "xe/xe_query.h"
 
 //#define ALLOCDBG
 #ifdef ALLOCDBG
@@ -910,24 +911,33 @@ static uint64_t __intel_allocator_open_full(int fd, uint32_t ctx,
 	struct alloc_resp resp;
 	uint64_t gtt_size;
 
-	if (!start)
-		req.open.start = gem_detect_safe_start_offset(fd);
+	if (is_i915_device(fd)) {
+		if (!start)
+			req.open.start = gem_detect_safe_start_offset(fd);
 
-	if (!end) {
-		igt_assert_f(can_report_gtt_size(fd), "Invalid fd\n");
-		gtt_size = gem_aperture_size(fd);
-		if (!gem_uses_full_ppgtt(fd))
-			gtt_size /= 2;
-		else
-			gtt_size -= RESERVED;
+		if (!end) {
+			igt_assert_f(can_report_gtt_size(fd), "Invalid fd\n");
+			gtt_size = gem_aperture_size(fd);
+			if (!gem_uses_full_ppgtt(fd))
+				gtt_size /= 2;
+			else
+				gtt_size -= RESERVED;
 
-		req.open.end = gtt_size;
-	}
+			req.open.end = gtt_size;
+		}
 
-	if (!default_alignment)
-		req.open.default_alignment = gem_detect_safe_alignment(fd);
+		if (!default_alignment)
+			req.open.default_alignment = gem_detect_safe_alignment(fd);
+
+		req.open.start = ALIGN(req.open.start, req.open.default_alignment);
+	} else {
+		struct xe_device *xe_dev = xe_device_get(fd);
 
-	req.open.start = ALIGN(req.open.start, req.open.default_alignment);
+		igt_assert(xe_dev);
+
+		if (!end)
+			req.open.end = 1ull << xe_dev->va_bits;
+	}
 
 	/* Get child_tid only once at open() */
 	if (child_tid == -1)
@@ -998,7 +1008,7 @@ uint64_t intel_allocator_open_vm_full(int fd, uint32_t vm,
 
 /**
  * intel_allocator_open:
- * @fd: i915 descriptor
+ * @fd: i915 or xe descriptor
  * @ctx: context
  * @allocator_type: one of INTEL_ALLOCATOR_* define
  *
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 05/17] lib/drmtest: Add driver enum for i915/xe
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (3 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 04/17] lib/intel_allocator: Add allocator support for Xe Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 06/17] lib/intel_bufops: Add Xe support in bufops Zbigniew Kempczyński
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Library code like intel-bb and intel-buf, which needs to be adapted to
handle both drivers, should store the driver its fd belongs to instead
of calling the costly is_i915_device()/is_xe_device() helpers.

Introduce the intel_driver enum, which will be used when adapting the
library code to Xe.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/drmtest.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/drmtest.h b/lib/drmtest.h
index 5878f65129..3c88b85c6f 100644
--- a/lib/drmtest.h
+++ b/lib/drmtest.h
@@ -62,6 +62,14 @@
  */
 #define DRIVER_ANY 	~(DRIVER_VGEM)
 
+/*
+ * Compile friendly enum for i915/xe.
+ */
+enum intel_driver {
+	INTEL_DRIVER_I915 = 1,
+	INTEL_DRIVER_XE,
+};
+
 void __set_forced_driver(const char *name);
 
 /**
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 06/17] lib/intel_bufops: Add Xe support in bufops
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (4 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 05/17] lib/drmtest: Add driver enum for i915/xe Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:49   ` Kamil Konieczny
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 07/17] lib/intel_batchbuffer: Rename i915 -> fd as preparation step for xe Zbigniew Kempczyński
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Extend bufops to support Xe:
 - change region to a 64-bit region mask,
 - add an initialization helper (full) which allows passing handle,
   size and region,
 - make the mapping functions (read + write) select a driver-specific
   mapping

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
v5: add intel_buf_create_full() for future lib/igt_fb changes
    regarding region
v6: - fix igt_assert_f()
    - add buf_ops_get_driver() helper
v7: - alphabetical includes (Kamil)
    - assert if region is not valid for xe before bo create (Kamil)
    - ensure bops pointer is valid on public call (Christoph)
---
 lib/intel_bufops.c | 123 +++++++++++++++++++++++++++++++++++++++++----
 lib/intel_bufops.h |  24 ++++++++-
 2 files changed, 136 insertions(+), 11 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index cdc7a1698b..46fd981f09 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -29,6 +29,8 @@
 #include "igt.h"
 #include "igt_x86.h"
 #include "intel_bufops.h"
+#include "xe/xe_ioctl.h"
+#include "xe/xe_query.h"
 
 /**
  * SECTION:intel_bufops
@@ -106,6 +108,7 @@ typedef void (*bo_copy)(struct buf_ops *, struct intel_buf *, uint32_t *);
 
 struct buf_ops {
 	int fd;
+	enum intel_driver driver;
 	int gen_start;
 	int gen_end;
 	unsigned int intel_gen;
@@ -488,6 +491,9 @@ static void *mmap_write(int fd, struct intel_buf *buf)
 {
 	void *map = NULL;
 
+	if (buf->bops->driver == INTEL_DRIVER_XE)
+		return xe_bo_map(fd, buf->handle, buf->surface[0].size);
+
 	if (gem_has_lmem(fd)) {
 		/*
 		 * set/get_caching and set_domain are no longer supported on
@@ -530,6 +536,9 @@ static void *mmap_read(int fd, struct intel_buf *buf)
 {
 	void *map = NULL;
 
+	if (buf->bops->driver == INTEL_DRIVER_XE)
+		return xe_bo_map(fd, buf->handle, buf->surface[0].size);
+
 	if (gem_has_lmem(fd)) {
 		/*
 		 * set/get_caching and set_domain are no longer supported on
@@ -809,7 +818,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     int width, int height, int bpp, int alignment,
 			     uint32_t req_tiling, uint32_t compression,
 			     uint64_t bo_size, int bo_stride,
-			     uint32_t region)
+			     uint64_t region)
 {
 	uint32_t tiling = req_tiling;
 	uint64_t size;
@@ -899,9 +908,20 @@ static void __intel_buf_init(struct buf_ops *bops,
 	buf->size = size;
 	buf->handle = handle;
 
-	if (!handle)
-		if (__gem_create_in_memory_regions(bops->fd, &buf->handle, &size, region))
-			igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
+	if (bops->driver == INTEL_DRIVER_XE)
+		igt_assert_f(region != -1, "Xe requires region awareness, "
+					   "use api which passes valid region\n");
+	buf->region = region;
+
+	if (!handle) {
+		if (bops->driver == INTEL_DRIVER_I915) {
+			if (__gem_create_in_memory_regions(bops->fd, &buf->handle, &size, region))
+				igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
+		} else {
+			size = ALIGN(size, xe_get_default_alignment(bops->fd));
+			buf->handle = xe_bo_create_flags(bops->fd, 0, size, region);
+		}
+	}
 
 	/* Store gem bo size */
 	buf->bo_size = size;
@@ -930,8 +950,12 @@ void intel_buf_init(struct buf_ops *bops,
 		    int width, int height, int bpp, int alignment,
 		    uint32_t tiling, uint32_t compression)
 {
+	uint64_t region;
+
+	region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY :
+						     system_memory(bops->fd);
 	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
-			 tiling, compression, 0, 0, I915_SYSTEM_MEMORY);
+			 tiling, compression, 0, 0, region);
 
 	intel_buf_set_ownership(buf, true);
 }
@@ -945,7 +969,7 @@ void intel_buf_init_in_region(struct buf_ops *bops,
 			      struct intel_buf *buf,
 			      int width, int height, int bpp, int alignment,
 			      uint32_t tiling, uint32_t compression,
-			      uint32_t region)
+			      uint64_t region)
 {
 	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
 			 tiling, compression, 0, 0, region);
@@ -1010,6 +1034,43 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
 			 req_tiling, compression, 0, 0, -1);
 }
 
+/**
+ * intel_buf_init_full
+ * @bops: pointer to buf_ops
+ * @handle: BO handle created by the caller
+ * @buf: pointer to intel_buf structure to be filled
+ * @width: surface width
+ * @height: surface height
+ * @bpp: bits-per-pixel (8 / 16 / 32 / 64)
+ * @alignment: alignment of the stride for linear surfaces
+ * @req_tiling: surface tiling
+ * @compression: surface compression type
+ * @size: real bo size
+ * @stride: bo stride
+ * @region: region
+ *
+ * Function configures BO handle within intel_buf structure passed by the caller
+ * (with all its metadata - width, height, ...). Useful if BO was created
+ * outside. Allows passing real size which caller is aware of.
+ *
+ * Note: intel_buf_close() can be used because intel_buf is aware it is not
+ * buffer owner so it won't close it underneath.
+ */
+void intel_buf_init_full(struct buf_ops *bops,
+			 uint32_t handle,
+			 struct intel_buf *buf,
+			 int width, int height,
+			 int bpp, int alignment,
+			 uint32_t req_tiling,
+			 uint32_t compression,
+			 uint64_t size,
+			 int stride,
+			 uint64_t region)
+{
+	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
+			 req_tiling, compression, size, stride, region);
+}
+
 /**
  * intel_buf_create
  * @bops: pointer to buf_ops
@@ -1084,6 +1145,20 @@ struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
 							 uint32_t compression,
 							 uint64_t size,
 							 int stride)
+{
+	return intel_buf_create_full(bops, handle, width, height, bpp, alignment,
+				     req_tiling, compression, size, stride, -1);
+}
+
+struct intel_buf *intel_buf_create_full(struct buf_ops *bops,
+					uint32_t handle,
+					int width, int height,
+					int bpp, int alignment,
+					uint32_t req_tiling,
+					uint32_t compression,
+					uint64_t size,
+					int stride,
+					uint64_t region)
 {
 	struct intel_buf *buf;
 
@@ -1093,12 +1168,11 @@ struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
 	igt_assert(buf);
 
 	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
-			 req_tiling, compression, size, stride, -1);
+			 req_tiling, compression, size, stride, region);
 
 	return buf;
 }
 
-
 /**
  * intel_buf_destroy
  * @buf: intel_buf
@@ -1420,8 +1494,24 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency)
 
 	bops->fd = fd;
 	bops->intel_gen = generation;
-	igt_debug("generation: %d, supported tiles: 0x%02x\n",
-		  bops->intel_gen, bops->supported_tiles);
+	bops->driver = is_i915_device(fd) ? INTEL_DRIVER_I915 :
+					    is_xe_device(fd) ? INTEL_DRIVER_XE : 0;
+	igt_assert(bops->driver);
+	igt_debug("generation: %d, supported tiles: 0x%02x, driver: %s\n",
+		  bops->intel_gen, bops->supported_tiles,
+		  bops->driver == INTEL_DRIVER_I915 ? "i915" : "xe");
+
+	/* No tiling support in XE. */
+	if (bops->driver == INTEL_DRIVER_XE) {
+		bops->supported_hw_tiles = TILE_NONE;
+
+		bops->linear_to_x = copy_linear_to_x;
+		bops->x_to_linear = copy_x_to_linear;
+		bops->linear_to_y = copy_linear_to_y;
+		bops->y_to_linear = copy_y_to_linear;
+
+		return bops;
+	}
 
 	/*
 	 * Warning!
@@ -1569,6 +1659,19 @@ int buf_ops_get_fd(struct buf_ops *bops)
 	return bops->fd;
 }
 
+/**
+ * buf_ops_get_driver
+ * @bops: pointer to buf_ops
+ *
+ * Returns: intel driver enum value
+ */
+enum intel_driver buf_ops_get_driver(struct buf_ops *bops)
+{
+	igt_assert(bops);
+
+	return bops->driver;
+}
+
 /**
  * buf_ops_set_software_tiling
  * @bops: pointer to buf_ops
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 25b4307399..0037548a3b 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -43,6 +43,7 @@ struct intel_buf {
 	} addr;
 
 	uint64_t bo_size;
+	uint64_t region;
 
 	/* Tracking */
 	struct intel_bb *ibb;
@@ -109,6 +110,7 @@ struct buf_ops *buf_ops_create(int fd);
 struct buf_ops *buf_ops_create_with_selftest(int fd);
 void buf_ops_destroy(struct buf_ops *bops);
 int buf_ops_get_fd(struct buf_ops *bops);
+enum intel_driver buf_ops_get_driver(struct buf_ops *bops);
 
 bool buf_ops_set_software_tiling(struct buf_ops *bops,
 				 uint32_t tiling,
@@ -135,7 +137,7 @@ void intel_buf_init_in_region(struct buf_ops *bops,
 			      struct intel_buf *buf,
 			      int width, int height, int bpp, int alignment,
 			      uint32_t tiling, uint32_t compression,
-			      uint32_t region);
+			      uint64_t region);
 void intel_buf_close(struct buf_ops *bops, struct intel_buf *buf);
 
 void intel_buf_init_using_handle(struct buf_ops *bops,
@@ -143,6 +145,16 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
 				 struct intel_buf *buf,
 				 int width, int height, int bpp, int alignment,
 				 uint32_t req_tiling, uint32_t compression);
+void intel_buf_init_full(struct buf_ops *bops,
+			 uint32_t handle,
+			 struct intel_buf *buf,
+			 int width, int height,
+			 int bpp, int alignment,
+			 uint32_t req_tiling,
+			 uint32_t compression,
+			 uint64_t size,
+			 int stride,
+			 uint64_t region);
 
 struct intel_buf *intel_buf_create(struct buf_ops *bops,
 				   int width, int height,
@@ -164,6 +176,16 @@ struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
 							 uint32_t compression,
 							 uint64_t size,
 							 int stride);
+
+struct intel_buf *intel_buf_create_full(struct buf_ops *bops,
+					uint32_t handle,
+					int width, int height,
+					int bpp, int alignment,
+					uint32_t req_tiling,
+					uint32_t compression,
+					uint64_t size,
+					int stride,
+					uint64_t region);
 void intel_buf_destroy(struct intel_buf *buf);
 
 static inline void intel_buf_set_pxp(struct intel_buf *buf, bool new_pxp_state)
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 07/17] lib/intel_batchbuffer: Rename i915 -> fd as preparation step for xe
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (5 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 06/17] lib/intel_bufops: Add Xe support in bufops Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path Zbigniew Kempczyński
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Until now intel-bb was designed to handle i915 (relocations and
softpinning). We want to reuse it for xe as well: softpinning, which
requires the allocator, also unblocks this for the vm_bind used in xe.

This is a preparation step which stops calling the internal fd "i915"
to avoid confusion.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/gpu_cmds.c           |   2 +-
 lib/intel_aux_pgtable.c  |   2 +-
 lib/intel_batchbuffer.c  | 116 +++++++++++++++++++--------------------
 lib/intel_batchbuffer.h  |  16 +++---
 tests/i915/gem_caching.c |   4 +-
 tests/i915/gem_pxp.c     |   2 +-
 6 files changed, 71 insertions(+), 71 deletions(-)

diff --git a/lib/gpu_cmds.c b/lib/gpu_cmds.c
index cee81555d8..afb26d2990 100644
--- a/lib/gpu_cmds.c
+++ b/lib/gpu_cmds.c
@@ -251,7 +251,7 @@ gen7_fill_binding_table(struct intel_bb *ibb,
 {
 	uint32_t binding_table_offset;
 	uint32_t *binding_table;
-	uint32_t devid = intel_get_drm_devid(ibb->i915);
+	uint32_t devid = intel_get_drm_devid(ibb->fd);
 
 	intel_bb_ptr_align(ibb, 64);
 	binding_table_offset = intel_bb_offset(ibb);
diff --git a/lib/intel_aux_pgtable.c b/lib/intel_aux_pgtable.c
index 5205687080..946ca60b97 100644
--- a/lib/intel_aux_pgtable.c
+++ b/lib/intel_aux_pgtable.c
@@ -481,7 +481,7 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 	intel_bb_add_intel_buf_with_alignment(ibb, pgt->buf,
 					      pgt->max_align, false);
 
-	pgt_map(ibb->i915, pgt);
+	pgt_map(ibb->fd, pgt);
 	pgt_populate_entries(pgt, bufs, buf_count);
 	pgt_unmap(pgt);
 
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index a4eb4c2bbc..7dbd6dd582 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -828,7 +828,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 
 /**
  * __intel_bb_create:
- * @i915: drm fd
+ * @fd: drm fd
  * @ctx: context id
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -873,7 +873,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * Pointer the intel_bb, asserts on failure.
  */
 static struct intel_bb *
-__intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
+__intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 		  uint32_t size, bool do_relocs,
 		  uint64_t start, uint64_t end,
 		  uint8_t allocator_type, enum allocator_strategy strategy)
@@ -883,8 +883,8 @@ __intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 
 	igt_assert(ibb);
 
-	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
-	ibb->devid = intel_get_drm_devid(i915);
+	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
+	ibb->devid = intel_get_drm_devid(fd);
 	ibb->gen = intel_gen(ibb->devid);
 
 	/*
@@ -900,16 +900,16 @@ __intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 	 * so we want kernel to not interfere with this.
 	 */
 	if (do_relocs)
-		ibb->allows_obj_alignment = gem_allows_obj_alignment(i915);
+		ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
 
 	/* Use safe start offset instead assuming 0x0 is safe */
-	start = max_t(uint64_t, start, gem_detect_safe_start_offset(i915));
+	start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
 
 	/* if relocs are set we won't use an allocator */
 	if (do_relocs)
 		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		ibb->allocator_handle = intel_allocator_open_full(i915, ctx,
+		ibb->allocator_handle = intel_allocator_open_full(fd, ctx,
 								  start, end,
 								  allocator_type,
 								  strategy, 0);
@@ -918,11 +918,11 @@ __intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 	ibb->allocator_start = start;
 	ibb->allocator_end = end;
 
-	ibb->i915 = i915;
+	ibb->fd = fd;
 	ibb->enforce_relocs = do_relocs;
-	ibb->handle = gem_create(i915, size);
+	ibb->handle = gem_create(fd, size);
 	ibb->size = size;
-	ibb->alignment = gem_detect_safe_alignment(i915);
+	ibb->alignment = gem_detect_safe_alignment(fd);
 	ibb->ctx = ctx;
 	ibb->vm_id = 0;
 	ibb->batch = calloc(1, size);
@@ -937,7 +937,7 @@ __intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 		memcpy(ibb->cfg, cfg, sizeof(*cfg));
 	}
 
-	ibb->gtt_size = gem_aperture_size(i915);
+	ibb->gtt_size = gem_aperture_size(fd);
 	if ((ibb->gtt_size - 1) >> 32)
 		ibb->supports_48b_address = true;
 
@@ -961,7 +961,7 @@ __intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 
 /**
  * intel_bb_create_full:
- * @i915: drm fd
+ * @fd: drm fd
  * @ctx: context
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -980,19 +980,19 @@ __intel_bb_create(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
  *
  * Pointer the intel_bb, asserts on failure.
  */
-struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx,
+struct intel_bb *intel_bb_create_full(int fd, uint32_t ctx,
 				      const intel_ctx_cfg_t *cfg, uint32_t size,
 				      uint64_t start, uint64_t end,
 				      uint8_t allocator_type,
 				      enum allocator_strategy strategy)
 {
-	return __intel_bb_create(i915, ctx, cfg, size, false, start, end,
+	return __intel_bb_create(fd, ctx, cfg, size, false, start, end,
 				 allocator_type, strategy);
 }
 
 /**
  * intel_bb_create_with_allocator:
- * @i915: drm fd
+ * @fd: drm fd
  * @ctx: context
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -1006,18 +1006,18 @@ struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx,
  *
  * Pointer the intel_bb, asserts on failure.
  */
-struct intel_bb *intel_bb_create_with_allocator(int i915, uint32_t ctx,
+struct intel_bb *intel_bb_create_with_allocator(int fd, uint32_t ctx,
 						const intel_ctx_cfg_t *cfg,
 						uint32_t size,
 						uint8_t allocator_type)
 {
-	return __intel_bb_create(i915, ctx, cfg, size, false, 0, 0,
+	return __intel_bb_create(fd, ctx, cfg, size, false, 0, 0,
 				 allocator_type, ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
-static bool aux_needs_softpin(int i915)
+static bool aux_needs_softpin(int fd)
 {
-	return intel_gen(intel_get_drm_devid(i915)) >= 12;
+	return intel_gen(intel_get_drm_devid(fd)) >= 12;
 }
 
 static bool has_ctx_cfg(struct intel_bb *ibb)
@@ -1027,7 +1027,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
 
 /**
  * intel_bb_create:
- * @i915: drm fd
+ * @fd: drm fd
  * @size: size of the batchbuffer
  *
  * Creates bb with default context.
@@ -1045,19 +1045,19 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
  * connection to it inside intel_bb is not valid anymore.
  * Trying to use it leads to catastrofic errors.
  */
-struct intel_bb *intel_bb_create(int i915, uint32_t size)
+struct intel_bb *intel_bb_create(int fd, uint32_t size)
 {
-	bool relocs = gem_has_relocations(i915);
+	bool relocs = gem_has_relocations(fd);
 
-	return __intel_bb_create(i915, 0, NULL, size,
-				 relocs && !aux_needs_softpin(i915), 0, 0,
+	return __intel_bb_create(fd, 0, NULL, size,
+				 relocs && !aux_needs_softpin(fd), 0, 0,
 				 INTEL_ALLOCATOR_SIMPLE,
 				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
  * intel_bb_create_with_context:
- * @i915: drm fd
+ * @fd: drm fd
  * @ctx: context id
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -1070,20 +1070,20 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
  * Pointer the intel_bb, asserts on failure.
  */
 struct intel_bb *
-intel_bb_create_with_context(int i915, uint32_t ctx,
+intel_bb_create_with_context(int fd, uint32_t ctx,
 			     const intel_ctx_cfg_t *cfg, uint32_t size)
 {
-	bool relocs = gem_has_relocations(i915);
+	bool relocs = gem_has_relocations(fd);
 
-	return __intel_bb_create(i915, ctx, cfg, size,
-				 relocs && !aux_needs_softpin(i915), 0, 0,
+	return __intel_bb_create(fd, ctx, cfg, size,
+				 relocs && !aux_needs_softpin(fd), 0, 0,
 				 INTEL_ALLOCATOR_SIMPLE,
 				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
  * intel_bb_create_with_relocs:
- * @i915: drm fd
+ * @fd: drm fd
  * @size: size of the batchbuffer
  *
  * Creates bb which will disable passing addresses.
@@ -1093,17 +1093,17 @@ intel_bb_create_with_context(int i915, uint32_t ctx,
  *
  * Pointer the intel_bb, asserts on failure.
  */
-struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
+struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
 {
-	igt_require(gem_has_relocations(i915));
+	igt_require(gem_has_relocations(fd));
 
-	return __intel_bb_create(i915, 0, NULL, size, true, 0, 0,
+	return __intel_bb_create(fd, 0, NULL, size, true, 0, 0,
 				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 /**
  * intel_bb_create_with_relocs_and_context:
- * @i915: drm fd
+ * @fd: drm fd
  * @ctx: context
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -1116,19 +1116,19 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
  * Pointer the intel_bb, asserts on failure.
  */
 struct intel_bb *
-intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx,
+intel_bb_create_with_relocs_and_context(int fd, uint32_t ctx,
 					const intel_ctx_cfg_t *cfg,
 					uint32_t size)
 {
-	igt_require(gem_has_relocations(i915));
+	igt_require(gem_has_relocations(fd));
 
-	return __intel_bb_create(i915, ctx, cfg, size, true, 0, 0,
+	return __intel_bb_create(fd, ctx, cfg, size, true, 0, 0,
 				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 /**
  * intel_bb_create_no_relocs:
- * @i915: drm fd
+ * @fd: drm fd
  * @size: size of the batchbuffer
  *
  * Creates bb with disabled relocations.
@@ -1138,11 +1138,11 @@ intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx,
  *
  * Pointer the intel_bb, asserts on failure.
  */
-struct intel_bb *intel_bb_create_no_relocs(int i915, uint32_t size)
+struct intel_bb *intel_bb_create_no_relocs(int fd, uint32_t size)
 {
-	igt_require(gem_uses_full_ppgtt(i915));
+	igt_require(gem_uses_full_ppgtt(fd));
 
-	return __intel_bb_create(i915, 0, NULL, size, false, 0, 0,
+	return __intel_bb_create(fd, 0, NULL, size, false, 0, 0,
 				 INTEL_ALLOCATOR_SIMPLE,
 				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
@@ -1217,7 +1217,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
 		intel_allocator_free(ibb->allocator_handle, ibb->handle);
 		intel_allocator_close(ibb->allocator_handle);
 	}
-	gem_close(ibb->i915, ibb->handle);
+	gem_close(ibb->fd, ibb->handle);
 
 	if (ibb->fence >= 0)
 		close(ibb->fence);
@@ -1277,8 +1277,8 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 		intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset,
 				       ibb->size);
 
-	gem_close(ibb->i915, ibb->handle);
-	ibb->handle = gem_create(ibb->i915, ibb->size);
+	gem_close(ibb->fd, ibb->handle);
+	ibb->handle = gem_create(ibb->fd, ibb->size);
 
 	/* Keep address for bb in reloc mode and RANDOM allocator */
 	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
@@ -1325,7 +1325,7 @@ int intel_bb_sync(struct intel_bb *ibb)
 void intel_bb_print(struct intel_bb *ibb)
 {
 	igt_info("drm fd: %d, gen: %d, devid: %u, debug: %d\n",
-		 ibb->i915, ibb->gen, ibb->devid, ibb->debug);
+		 ibb->fd, ibb->gen, ibb->devid, ibb->debug);
 	igt_info("handle: %u, size: %u, batch: %p, ptr: %p\n",
 		 ibb->handle, ibb->size, ibb->batch, ibb->ptr);
 	igt_info("gtt_size: %" PRIu64 ", supports 48bit: %d\n",
@@ -1350,7 +1350,7 @@ void intel_bb_dump(struct intel_bb *ibb, const char *filename)
 	FILE *out;
 	void *ptr;
 
-	ptr = gem_mmap__device_coherent(ibb->i915, ibb->handle, 0, ibb->size,
+	ptr = gem_mmap__device_coherent(ibb->fd, ibb->handle, 0, ibb->size,
 					PROT_READ);
 	out = fopen(filename, "wb");
 	igt_assert(out);
@@ -1524,7 +1524,7 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 	igt_assert(is_power_of_two(alignment));
 
 	object = __add_to_cache(ibb, handle);
-	alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->i915));
+	alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
 	__add_to_objects(ibb, object);
 
 	/*
@@ -1999,7 +1999,7 @@ static void intel_bb_dump_execbuf(struct intel_bb *ibb,
 	uint64_t address;
 
 	igt_debug("execbuf [pid: %ld, fd: %d, ctx: %u]\n",
-		  (long) getpid(), ibb->i915, ibb->ctx);
+		  (long) getpid(), ibb->fd, ibb->ctx);
 	igt_debug("execbuf batch len: %u, start offset: 0x%x, "
 		  "DR1: 0x%x, DR4: 0x%x, "
 		  "num clip: %u, clipptr: 0x%llx, "
@@ -2160,7 +2160,7 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	ibb->objects[0]->handle = ibb->handle;
 	ibb->objects[0]->offset = ibb->batch_offset;
 
-	gem_write(ibb->i915, ibb->handle, 0, ibb->batch, ibb->size);
+	gem_write(ibb->fd, ibb->handle, 0, ibb->batch, ibb->size);
 
 	memset(&execbuf, 0, sizeof(execbuf));
 	objects = create_objects_array(ibb);
@@ -2179,7 +2179,7 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	/* For debugging on CI, remove in final series */
 	intel_bb_dump_execbuf(ibb, &execbuf);
 
-	ret = __gem_execbuf_wr(ibb->i915, &execbuf);
+	ret = __gem_execbuf_wr(ibb->fd, &execbuf);
 	if (ret) {
 		intel_bb_dump_execbuf(ibb, &execbuf);
 		free(objects);
@@ -2409,13 +2409,13 @@ uint32_t intel_bb_copy_data(struct intel_bb *ibb,
  */
 void intel_bb_blit_start(struct intel_bb *ibb, uint32_t flags)
 {
-	if (blt_has_xy_src_copy(ibb->i915))
+	if (blt_has_xy_src_copy(ibb->fd))
 		intel_bb_out(ibb, XY_SRC_COPY_BLT_CMD |
 			     XY_SRC_COPY_BLT_WRITE_ALPHA |
 			     XY_SRC_COPY_BLT_WRITE_RGB |
 			     flags |
 			     (6 + 2 * (ibb->gen >= 8)));
-	else if (blt_has_fast_copy(ibb->i915))
+	else if (blt_has_fast_copy(ibb->fd))
 		intel_bb_out(ibb, XY_FAST_COPY_BLT | flags);
 	else
 		igt_assert_f(0, "No supported blit command found\n");
@@ -2456,9 +2456,9 @@ void intel_bb_emit_blt_copy(struct intel_bb *ibb,
 
 	if (gen >= 4 && src->tiling != I915_TILING_NONE) {
 		src_pitch /= 4;
-		if (blt_has_xy_src_copy(ibb->i915))
+		if (blt_has_xy_src_copy(ibb->fd))
 			cmd_bits |= XY_SRC_COPY_BLT_SRC_TILED;
-		else if (blt_has_fast_copy(ibb->i915))
+		else if (blt_has_fast_copy(ibb->fd))
 			cmd_bits |= fast_copy_dword0(src->tiling, dst->tiling);
 		else
 			igt_assert_f(0, "No supported blit command found\n");
@@ -2466,7 +2466,7 @@ void intel_bb_emit_blt_copy(struct intel_bb *ibb,
 
 	if (gen >= 4 && dst->tiling != I915_TILING_NONE) {
 		dst_pitch /= 4;
-		if (blt_has_xy_src_copy(ibb->i915))
+		if (blt_has_xy_src_copy(ibb->fd))
 			cmd_bits |= XY_SRC_COPY_BLT_DST_TILED;
 		else
 			cmd_bits |= fast_copy_dword0(src->tiling, dst->tiling);
@@ -2480,7 +2480,7 @@ void intel_bb_emit_blt_copy(struct intel_bb *ibb,
 	CHECK_RANGE(src_pitch); CHECK_RANGE(dst_pitch);
 
 	br13_bits = 0;
-	if (blt_has_xy_src_copy(ibb->i915)) {
+	if (blt_has_xy_src_copy(ibb->fd)) {
 		switch (bpp) {
 		case 8:
 			break;
@@ -2496,7 +2496,7 @@ void intel_bb_emit_blt_copy(struct intel_bb *ibb,
 			igt_fail(IGT_EXIT_FAILURE);
 		}
 	} else {
-		br13_bits = fast_copy_dword1(ibb->i915, src->tiling, dst->tiling, bpp);
+		br13_bits = fast_copy_dword1(ibb->fd, src->tiling, dst->tiling, bpp);
 	}
 
 	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
@@ -2631,7 +2631,7 @@ static void __intel_bb_reinit_alloc(struct intel_bb *ibb)
 	if (ibb->allocator_type == INTEL_ALLOCATOR_NONE)
 		return;
 
-	ibb->allocator_handle = intel_allocator_open_full(ibb->i915, ibb->ctx,
+	ibb->allocator_handle = intel_allocator_open_full(ibb->fd, ibb->ctx,
 							  ibb->allocator_start, ibb->allocator_end,
 							  ibb->allocator_type,
 							  ibb->allocator_strategy,
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 10e4126606..4978b6fb29 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -246,7 +246,7 @@ struct intel_bb {
 	uint8_t allocator_type;
 	enum allocator_strategy allocator_strategy;
 
-	int i915;
+	int fd;
 	unsigned int gen;
 	bool debug;
 	bool dump_base64;
@@ -299,21 +299,21 @@ struct intel_bb {
 };
 
 struct intel_bb *
-intel_bb_create_full(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
+intel_bb_create_full(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 		     uint32_t size, uint64_t start, uint64_t end,
 		     uint8_t allocator_type, enum allocator_strategy strategy);
 struct intel_bb *
-intel_bb_create_with_allocator(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
+intel_bb_create_with_allocator(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 			       uint32_t size, uint8_t allocator_type);
-struct intel_bb *intel_bb_create(int i915, uint32_t size);
+struct intel_bb *intel_bb_create(int fd, uint32_t size);
 struct intel_bb *
-intel_bb_create_with_context(int i915, uint32_t ctx, const intel_ctx_cfg_t *cfg,
+intel_bb_create_with_context(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 			     uint32_t size);
-struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size);
+struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size);
 struct intel_bb *
-intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx,
+intel_bb_create_with_relocs_and_context(int fd, uint32_t ctx,
 					const intel_ctx_cfg_t *cfg, uint32_t size);
-struct intel_bb *intel_bb_create_no_relocs(int i915, uint32_t size);
+struct intel_bb *intel_bb_create_no_relocs(int fd, uint32_t size);
 void intel_bb_destroy(struct intel_bb *ibb);
 
 /* make it safe to use intel_allocator after failed test */
diff --git a/tests/i915/gem_caching.c b/tests/i915/gem_caching.c
index b6ecd8346c..6e944f0acb 100644
--- a/tests/i915/gem_caching.c
+++ b/tests/i915/gem_caching.c
@@ -83,7 +83,7 @@ copy_bo(struct intel_bb *ibb, struct intel_buf *src, struct intel_buf *dst)
 	intel_bb_add_intel_buf(ibb, src, false);
 	intel_bb_add_intel_buf(ibb, dst, true);
 
-	if (blt_has_xy_src_copy(ibb->i915)) {
+	if (blt_has_xy_src_copy(ibb->fd)) {
 		intel_bb_out(ibb,
 			     XY_SRC_COPY_BLT_CMD |
 			     XY_SRC_COPY_BLT_WRITE_ALPHA |
@@ -93,7 +93,7 @@ copy_bo(struct intel_bb *ibb, struct intel_buf *src, struct intel_buf *dst)
 		intel_bb_out(ibb, (3 << 24) | /* 32 bits */
 			     (0xcc << 16) | /* copy ROP */
 			     4096);
-	} else if (blt_has_fast_copy(ibb->i915)) {
+	} else if (blt_has_fast_copy(ibb->fd)) {
 		intel_bb_out(ibb, XY_FAST_COPY_BLT);
 		intel_bb_out(ibb, XY_FAST_COPY_COLOR_DEPTH_32 | 4096);
 	} else {
diff --git a/tests/i915/gem_pxp.c b/tests/i915/gem_pxp.c
index af657d0e1b..2f27abd582 100644
--- a/tests/i915/gem_pxp.c
+++ b/tests/i915/gem_pxp.c
@@ -809,7 +809,7 @@ static int gem_execbuf_flush_store_dw(int i915, struct intel_bb *ibb, uint32_t c
 	ret = __intel_bb_exec(ibb, intel_bb_offset(ibb),
 				  I915_EXEC_RENDER | I915_EXEC_NO_RELOC, false);
 	if (ret == 0) {
-		gem_sync(ibb->i915, fence->handle);
+		gem_sync(ibb->fd, fence->handle);
 		assert_pipectl_storedw_done(i915, fence->handle);
 	}
 	return ret;
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (6 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 07/17] lib/intel_batchbuffer: Rename i915 -> fd as preparation step for xe Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:50   ` Kamil Konieczny
  2023-04-28  8:44   ` Manszewski, Christoph
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs Zbigniew Kempczyński
                   ` (11 subsequent siblings)
  19 siblings, 2 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

On the reset path we recreate the bo for the batch (to avoid stalls), so
we should reacquire its offset too. At the moment the simple allocator
will return the same offset (so unfortunately we'll stall), but with the
reloc allocator we'll get a new one (so we avoid the stall).

I noticed this was missing while running the xe_intel_bb test, where in
reloc mode I got an unexpected result (a direct consequence of using the
same offset, which pointed to the old batch, not the new one).

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 lib/intel_batchbuffer.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 7dbd6dd582..99b0b61585 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1280,8 +1280,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	gem_close(ibb->fd, ibb->handle);
 	ibb->handle = gem_create(ibb->fd, ibb->size);
 
-	/* Keep address for bb in reloc mode and RANDOM allocator */
-	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+	/* Reacquire offset for RELOC and SIMPLE */
+	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
+	    ibb->allocator_type == INTEL_ALLOCATOR_RELOC)
 		ibb->batch_offset = __intel_bb_get_offset(ibb,
 							  ibb->handle,
 							  ibb->size,
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (7 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:51   ` Kamil Konieczny
  2023-04-28  8:51   ` Manszewski, Christoph
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb Zbigniew Kempczyński
                   ` (10 subsequent siblings)
  19 siblings, 2 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

After the RANDOM pseudo-allocator was removed and the RELOC allocator
became stateful, the docs stayed intact and still documented the old
code. Fix this before adding the Xe code.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 lib/intel_batchbuffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 99b0b61585..306b7650e9 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -836,7 +836,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
  *
  * intel-bb assumes it will work in one of two modes - with relocations or
- * with using allocator (currently RANDOM and SIMPLE are implemented).
+ * with using allocator (currently RELOC and SIMPLE are implemented).
  * Some description is required to describe how they maintain the addresses.
  *
  * Before entering into each scenarios generic rule is intel-bb keeps objects
@@ -854,10 +854,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  *
  * This mode is valid only for ppgtt. Addresses are acquired from allocator
  * and softpinned. intel-bb cache must be then coherent with allocator
- * (simple is coherent, random is not due to fact we don't keep its state).
+ * (simple is coherent, reloc partially [doesn't support address reservation]).
  * When we do intel-bb reset with purging cache it has to reacquire addresses
  * from allocator (allocator should return same address - what is true for
- * simple allocator and false for random as mentioned before).
+ * simple and reloc allocators).
  *
  * If we do reset without purging caches we use addresses from intel-bb cache
  * during execbuf objects construction.
@@ -967,7 +967,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
  * @size: size of the batchbuffer
  * @start: allocator vm start address
  * @end: allocator vm start address
- * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ * @allocator_type: allocator type, SIMPLE, RELOC, ...
  * @strategy: allocation strategy
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (8 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:53   ` Kamil Konieczny
  2023-04-28  8:40   ` Manszewski, Christoph
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check if intel-bb Xe support correctness Zbigniew Kempczyński
                   ` (9 subsequent siblings)
  19 siblings, 2 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

The intention behind creating intel-bb was to replace libdrm for i915.
Because a lot of code relies on it (kms for example), the most rational
way is to extend it and add an Xe path.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
---
 lib/intel_batchbuffer.c | 336 ++++++++++++++++++++++++++++++++--------
 lib/intel_batchbuffer.h |   6 +
 2 files changed, 281 insertions(+), 61 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 306b7650e9..38ad792e55 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -28,18 +28,22 @@
 #include <search.h>
 #include <glib.h>
 
+#include "gpgpu_fill.h"
+#include "huc_copy.h"
 #include "i915/gem_create.h"
+#include "i915/gem_mman.h"
+#include "i915/i915_blt.h"
+#include "igt_aux.h"
+#include "igt_syncobj.h"
 #include "intel_batchbuffer.h"
 #include "intel_bufops.h"
 #include "intel_chipset.h"
 #include "media_fill.h"
 #include "media_spin.h"
-#include "i915/gem_mman.h"
-#include "veboxcopy.h"
 #include "sw_sync.h"
-#include "gpgpu_fill.h"
-#include "huc_copy.h"
-#include "i915/i915_blt.h"
+#include "veboxcopy.h"
+#include "xe/xe_ioctl.h"
+#include "xe/xe_query.h"
 
 #define BCS_SWCTRL 0x22200
 #define BCS_SRC_Y (1 << 0)
@@ -828,9 +832,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 
 /**
  * __intel_bb_create:
- * @fd: drm fd
+ * @fd: drm fd - i915 or xe
  * @ctx: context id
- * @cfg: intel_ctx configuration, NULL for default context or legacy mode
+ * @cfg: for i915 intel_ctx configuration, NULL for default context or legacy mode,
+ *       unused for xe
  * @size: size of the batchbuffer
  * @do_relocs: use relocations or allocator
  * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
@@ -842,7 +847,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * Before entering into each scenarios generic rule is intel-bb keeps objects
  * and their offsets in the internal cache and reuses in subsequent execs.
  *
- * 1. intel-bb with relocations
+ * 1. intel-bb with relocations (i915 only)
  *
  * Creating new intel-bb adds handle to cache implicitly and sets its address
  * to 0. Objects added to intel-bb later also have address 0 set for first run.
@@ -850,11 +855,12 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * works in reloc mode addresses are only suggestion to the driver and we
  * cannot be sure they won't change at next exec.
  *
- * 2. with allocator
+ * 2. with allocator (i915 or xe)
  *
  * This mode is valid only for ppgtt. Addresses are acquired from allocator
- * and softpinned. intel-bb cache must be then coherent with allocator
- * (simple is coherent, reloc partially [doesn't support address reservation]).
+ * and softpinned (i915) or vm-binded (xe). intel-bb cache must be then
+ * coherent with allocator (simple is coherent, reloc partially [doesn't
+ * support address reservation]).
  * When we do intel-bb reset with purging cache it has to reacquire addresses
  * from allocator (allocator should return same address - what is true for
  * simple and reloc allocators).
@@ -883,48 +889,75 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 
 	igt_assert(ibb);
 
-	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
 	ibb->devid = intel_get_drm_devid(fd);
 	ibb->gen = intel_gen(ibb->devid);
+	ibb->ctx = ctx;
+
+	ibb->fd = fd;
+	ibb->driver = is_i915_device(fd) ? INTEL_DRIVER_I915 :
+					   is_xe_device(fd) ? INTEL_DRIVER_XE : 0;
+	igt_assert(ibb->driver);
 
 	/*
 	 * If we don't have full ppgtt driver can change our addresses
 	 * so allocator is useless in this case. Just enforce relocations
 	 * for such gens and don't use allocator at all.
 	 */
-	if (!ibb->uses_full_ppgtt)
-		do_relocs = true;
+	if (ibb->driver == INTEL_DRIVER_I915) {
+		ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
+		ibb->alignment = gem_detect_safe_alignment(fd);
+		ibb->gtt_size = gem_aperture_size(fd);
+		ibb->handle = gem_create(fd, size);
 
-	/*
-	 * For softpin mode allocator has full control over offsets allocation
-	 * so we want kernel to not interfere with this.
-	 */
-	if (do_relocs)
-		ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
+		if (!ibb->uses_full_ppgtt)
+			do_relocs = true;
+
+		/*
+		 * For softpin mode allocator has full control over offsets allocation
+		 * so we want kernel to not interfere with this.
+		 */
+		if (do_relocs) {
+			ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
+			allocator_type = INTEL_ALLOCATOR_NONE;
+		} else {
+			/* Use safe start offset instead assuming 0x0 is safe */
+			start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
+
+			/* if relocs are set we won't use an allocator */
+			ibb->allocator_handle =
+				intel_allocator_open_full(fd, ctx, start, end,
+							  allocator_type,
+							  strategy, 0);
+		}
 
-	/* Use safe start offset instead assuming 0x0 is safe */
-	start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
+		ibb->vm_id = 0;
+	} else {
+		igt_assert(!do_relocs);
+
+		ibb->alignment = xe_get_default_alignment(fd);
+		size = ALIGN(size, ibb->alignment);
+		ibb->handle = xe_bo_create_flags(fd, 0, size, vram_if_possible(fd, 0));
+		ibb->gtt_size = 1ull << xe_va_bits(fd);
+
+		if (!ctx)
+			ctx = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+
+		ibb->uses_full_ppgtt = true;
+		ibb->allocator_handle =
+			intel_allocator_open_full(fd, ctx, start, end,
+						  allocator_type, strategy,
+						  ibb->alignment);
+		ibb->vm_id = ctx;
+		ibb->last_engine = ~0U;
+	}
 
-	/* if relocs are set we won't use an allocator */
-	if (do_relocs)
-		allocator_type = INTEL_ALLOCATOR_NONE;
-	else
-		ibb->allocator_handle = intel_allocator_open_full(fd, ctx,
-								  start, end,
-								  allocator_type,
-								  strategy, 0);
 	ibb->allocator_type = allocator_type;
 	ibb->allocator_strategy = strategy;
 	ibb->allocator_start = start;
 	ibb->allocator_end = end;
-
-	ibb->fd = fd;
 	ibb->enforce_relocs = do_relocs;
-	ibb->handle = gem_create(fd, size);
+
 	ibb->size = size;
-	ibb->alignment = gem_detect_safe_alignment(fd);
-	ibb->ctx = ctx;
-	ibb->vm_id = 0;
 	ibb->batch = calloc(1, size);
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
@@ -937,7 +970,6 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 		memcpy(ibb->cfg, cfg, sizeof(*cfg));
 	}
 
-	ibb->gtt_size = gem_aperture_size(fd);
 	if ((ibb->gtt_size - 1) >> 32)
 		ibb->supports_48b_address = true;
 
@@ -961,7 +993,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
 
 /**
  * intel_bb_create_full:
- * @fd: drm fd
+ * @fd: drm fd - i915 or xe
  * @ctx: context
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -992,7 +1024,7 @@ struct intel_bb *intel_bb_create_full(int fd, uint32_t ctx,
 
 /**
  * intel_bb_create_with_allocator:
- * @fd: drm fd
+ * @fd: drm fd - i915 or xe
  * @ctx: context
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -1027,7 +1059,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
 
 /**
  * intel_bb_create:
- * @fd: drm fd
+ * @fd: drm fd - i915 or xe
  * @size: size of the batchbuffer
  *
  * Creates bb with default context.
@@ -1047,7 +1079,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
  */
 struct intel_bb *intel_bb_create(int fd, uint32_t size)
 {
-	bool relocs = gem_has_relocations(fd);
+	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
 
 	return __intel_bb_create(fd, 0, NULL, size,
 				 relocs && !aux_needs_softpin(fd), 0, 0,
@@ -1057,7 +1089,7 @@ struct intel_bb *intel_bb_create(int fd, uint32_t size)
 
 /**
  * intel_bb_create_with_context:
- * @fd: drm fd
+ * @fd: drm fd - i915 or xe
  * @ctx: context id
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -1073,7 +1105,7 @@ struct intel_bb *
 intel_bb_create_with_context(int fd, uint32_t ctx,
 			     const intel_ctx_cfg_t *cfg, uint32_t size)
 {
-	bool relocs = gem_has_relocations(fd);
+	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
 
 	return __intel_bb_create(fd, ctx, cfg, size,
 				 relocs && !aux_needs_softpin(fd), 0, 0,
@@ -1083,7 +1115,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
 
 /**
  * intel_bb_create_with_relocs:
- * @fd: drm fd
+ * @fd: drm fd - i915
  * @size: size of the batchbuffer
  *
  * Creates bb which will disable passing addresses.
@@ -1095,7 +1127,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
  */
 struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
 {
-	igt_require(gem_has_relocations(fd));
+	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
 
 	return __intel_bb_create(fd, 0, NULL, size, true, 0, 0,
 				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
@@ -1103,7 +1135,7 @@ struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
 
 /**
  * intel_bb_create_with_relocs_and_context:
- * @fd: drm fd
+ * @fd: drm fd - i915
  * @ctx: context
  * @cfg: intel_ctx configuration, NULL for default context or legacy mode
  * @size: size of the batchbuffer
@@ -1120,7 +1152,7 @@ intel_bb_create_with_relocs_and_context(int fd, uint32_t ctx,
 					const intel_ctx_cfg_t *cfg,
 					uint32_t size)
 {
-	igt_require(gem_has_relocations(fd));
+	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
 
 	return __intel_bb_create(fd, ctx, cfg, size, true, 0, 0,
 				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
@@ -1221,12 +1253,76 @@ void intel_bb_destroy(struct intel_bb *ibb)
 
 	if (ibb->fence >= 0)
 		close(ibb->fence);
+	if (ibb->engine_syncobj)
+		syncobj_destroy(ibb->fd, ibb->engine_syncobj);
+	if (ibb->vm_id && !ibb->ctx)
+		xe_vm_destroy(ibb->fd, ibb->vm_id);
 
 	free(ibb->batch);
 	free(ibb->cfg);
 	free(ibb);
 }
 
+static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
+						   uint32_t op, uint32_t region)
+{
+	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
+	struct drm_xe_vm_bind_op *bind_ops, *ops;
+	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
+
+	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
+	igt_assert(bind_ops);
+
+	igt_debug("bind_ops: %s\n", set_obj ? "MAP" : "UNMAP");
+	for (int i = 0; i < ibb->num_objects; i++) {
+		ops = &bind_ops[i];
+
+		if (set_obj)
+			ops->obj = objects[i]->handle;
+
+		ops->op = op;
+		ops->obj_offset = 0;
+		ops->addr = objects[i]->offset;
+		ops->range = objects[i]->rsvd1;
+		ops->region = region;
+
+		igt_debug("  [%d]: handle: %u, offset: %llx, size: %llx\n",
+			  i, ops->obj, (long long)ops->addr, (long long)ops->range);
+	}
+
+	return bind_ops;
+}
+
+static void __unbind_xe_objects(struct intel_bb *ibb)
+{
+	struct drm_xe_sync syncs[2] = {
+		{ .flags = DRM_XE_SYNC_SYNCOBJ },
+		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+	};
+	int ret;
+
+	syncs[0].handle = ibb->engine_syncobj;
+	syncs[1].handle = syncobj_create(ibb->fd, 0);
+
+	if (ibb->num_objects > 1) {
+		struct drm_xe_vm_bind_op *bind_ops;
+		uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+
+		bind_ops = xe_alloc_bind_ops(ibb, op, 0);
+		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
+				 ibb->num_objects, syncs, 2);
+		free(bind_ops);
+	} else {
+		xe_vm_unbind_async(ibb->fd, ibb->vm_id, 0, 0,
+				   ibb->batch_offset, ibb->size, syncs, 2);
+	}
+	ret = syncobj_wait_err(ibb->fd, &syncs[1].handle, 1, INT64_MAX, 0);
+	igt_assert_eq(ret, 0);
+	syncobj_destroy(ibb->fd, syncs[1].handle);
+
+	ibb->xe_bound = false;
+}
+
 /*
  * intel_bb_reset:
  * @ibb: pointer to intel_bb
@@ -1258,6 +1354,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	for (i = 0; i < ibb->num_objects; i++)
 		ibb->objects[i]->flags &= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
+	if (is_xe_device(ibb->fd) && ibb->xe_bound)
+		__unbind_xe_objects(ibb);
+
 	__intel_bb_destroy_relocations(ibb);
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
@@ -1278,7 +1377,11 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 				       ibb->size);
 
 	gem_close(ibb->fd, ibb->handle);
-	ibb->handle = gem_create(ibb->fd, ibb->size);
+	if (ibb->driver == INTEL_DRIVER_I915)
+		ibb->handle = gem_create(ibb->fd, ibb->size);
+	else
+		ibb->handle = xe_bo_create_flags(ibb->fd, 0, ibb->size,
+						 vram_if_possible(ibb->fd, 0));
 
 	/* Reacquire offset for RELOC and SIMPLE */
 	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
@@ -1305,13 +1408,19 @@ int intel_bb_sync(struct intel_bb *ibb)
 {
 	int ret;
 
-	if (ibb->fence < 0)
+	if (ibb->fence < 0 && !ibb->engine_syncobj)
 		return 0;
 
-	ret = sync_fence_wait(ibb->fence, -1);
-	if (ret == 0) {
-		close(ibb->fence);
-		ibb->fence = -1;
+	if (ibb->fence >= 0) {
+		ret = sync_fence_wait(ibb->fence, -1);
+		if (ret == 0) {
+			close(ibb->fence);
+			ibb->fence = -1;
+		}
+	} else {
+		igt_assert_neq(ibb->engine_syncobj, 0);
+		ret = syncobj_wait_err(ibb->fd, &ibb->engine_syncobj,
+				       1, INT64_MAX, 0);
 	}
 
 	return ret;
@@ -1502,7 +1611,7 @@ static void __remove_from_objects(struct intel_bb *ibb,
 }
 
 /**
- * intel_bb_add_object:
+ * __intel_bb_add_object:
  * @ibb: pointer to intel_bb
  * @handle: which handle to add to objects array
  * @size: object size
@@ -1514,9 +1623,9 @@ static void __remove_from_objects(struct intel_bb *ibb,
  * in the object tree. When object is a render target it has to
  * be marked with EXEC_OBJECT_WRITE flag.
  */
-struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
-		    uint64_t offset, uint64_t alignment, bool write)
+static struct drm_i915_gem_exec_object2 *
+__intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		      uint64_t offset, uint64_t alignment, bool write)
 {
 	struct drm_i915_gem_exec_object2 *object;
 
@@ -1524,8 +1633,12 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 		   || ALIGN(offset, alignment) == offset);
 	igt_assert(is_power_of_two(alignment));
 
+	if (ibb->driver == INTEL_DRIVER_I915)
+		alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
+	else
+		alignment = max_t(uint64_t, ibb->alignment, alignment);
+
 	object = __add_to_cache(ibb, handle);
-	alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
 	__add_to_objects(ibb, object);
 
 	/*
@@ -1585,9 +1698,27 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 	if (ibb->allows_obj_alignment)
 		object->alignment = alignment;
 
+	if (ibb->driver == INTEL_DRIVER_XE) {
+		object->alignment = alignment;
+		object->rsvd1 = size;
+	}
+
 	return object;
 }
 
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write)
+{
+	struct drm_i915_gem_exec_object2 *obj = NULL;
+
+	obj = __intel_bb_add_object(ibb, handle, size, offset,
+				    alignment, write);
+	igt_assert(obj);
+
+	return obj;
+}
+
 bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
 			    uint64_t offset, uint64_t size)
 {
@@ -2136,6 +2267,82 @@ static void update_offsets(struct intel_bb *ibb,
 }
 
 #define LINELEN 76
+
+static int
+__xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
+{
+	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
+	uint32_t engine_id;
+	struct drm_xe_sync syncs[2] = {
+		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+	};
+	struct drm_xe_vm_bind_op *bind_ops;
+	void *map;
+
+	igt_assert_eq(ibb->num_relocs, 0);
+	igt_assert_eq(ibb->xe_bound, false);
+
+	if (ibb->last_engine != engine) {
+		struct drm_xe_engine_class_instance inst = { };
+
+		inst.engine_instance =
+			(flags & I915_EXEC_BSD_MASK) >> I915_EXEC_BSD_SHIFT;
+
+		switch (flags & I915_EXEC_RING_MASK) {
+		case I915_EXEC_DEFAULT:
+		case I915_EXEC_BLT:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_COPY;
+			break;
+		case I915_EXEC_BSD:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_DECODE;
+			break;
+		case I915_EXEC_RENDER:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_RENDER;
+			break;
+		case I915_EXEC_VEBOX:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE;
+			break;
+		default:
+			igt_assert_f(false, "Unknown engine: %x", (uint32_t) flags);
+		}
+		igt_debug("Run on %s\n", xe_engine_class_string(inst.engine_class));
+
+		ibb->engine_id = engine_id =
+			xe_engine_create(ibb->fd, ibb->vm_id, &inst, 0);
+	} else {
+		engine_id = ibb->engine_id;
+	}
+	ibb->last_engine = engine;
+
+	map = xe_bo_map(ibb->fd, ibb->handle, ibb->size);
+	memcpy(map, ibb->batch, ibb->size);
+	gem_munmap(map, ibb->size);
+
+	syncs[0].handle = syncobj_create(ibb->fd, 0);
+	if (ibb->num_objects > 1) {
+		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
+		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
+				 ibb->num_objects, syncs, 1);
+		free(bind_ops);
+	} else {
+		xe_vm_bind_async(ibb->fd, ibb->vm_id, 0, ibb->handle, 0,
+				 ibb->batch_offset, ibb->size, syncs, 1);
+	}
+	ibb->xe_bound = true;
+
+	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
+	syncs[1].handle = ibb->engine_syncobj;
+
+	xe_exec_sync(ibb->fd, engine_id, ibb->batch_offset, syncs, 2);
+
+	if (sync)
+		intel_bb_sync(ibb);
+
+	return 0;
+}
+
 /*
  * __intel_bb_exec:
  * @ibb: pointer to intel_bb
@@ -2221,7 +2428,7 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 /**
  * intel_bb_exec:
  * @ibb: pointer to intel_bb
- * @end_offset: offset of the last instruction in the bb
+ * @end_offset: offset of the last instruction in the bb (for i915)
  * @flags: flags passed directly to execbuf
  * @sync: if true wait for execbuf completion, otherwise caller is responsible
  * to wait for completion
@@ -2231,7 +2438,13 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 		   uint64_t flags, bool sync)
 {
-	igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
+	if (ibb->dump_base64)
+		intel_bb_dump_base64(ibb, LINELEN);
+
+	if (ibb->driver == INTEL_DRIVER_I915)
+		igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
+	else
+		igt_assert_eq(__xe_bb_exec(ibb, flags, sync), 0);
 }
 
 /**
@@ -2636,7 +2849,8 @@ static void __intel_bb_reinit_alloc(struct intel_bb *ibb)
 							  ibb->allocator_start, ibb->allocator_end,
 							  ibb->allocator_type,
 							  ibb->allocator_strategy,
-							  0);
+							  ibb->alignment);
+
 	intel_bb_reset(ibb, true);
 }
 
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 4978b6fb29..9a58fb7809 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -246,6 +246,7 @@ struct intel_bb {
 	uint8_t allocator_type;
 	enum allocator_strategy allocator_strategy;
 
+	enum intel_driver driver;
 	int fd;
 	unsigned int gen;
 	bool debug;
@@ -268,6 +269,11 @@ struct intel_bb {
 	uint32_t ctx;
 	uint32_t vm_id;
 
+	bool xe_bound;
+	uint32_t engine_syncobj;
+	uint32_t engine_id;
+	uint32_t last_engine;
+
 	/* Context configuration */
 	intel_ctx_cfg_t *cfg;
 
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check intel-bb Xe support correctness
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:58   ` Kamil Konieczny
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

As we're reusing intel-bb for Xe, we need to check that it handles
buffer management and submission correctly.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

---
v5: to keep the test quick, use system memory instead of vram
    (system memory mapping is wb)
---
 tests/meson.build      |    1 +
 tests/xe/xe_intel_bb.c | 1185 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1186 insertions(+)
 create mode 100644 tests/xe/xe_intel_bb.c

diff --git a/tests/meson.build b/tests/meson.build
index 8909cfa8fd..b026fac48b 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -256,6 +256,7 @@ xe_progs = [
 	'xe_exec_threads',
 	'xe_guc_pc',
 	'xe_huc_copy',
+	'xe_intel_bb',
 	'xe_mmap',
 	'xe_mmio',
 	'xe_module_load',
diff --git a/tests/xe/xe_intel_bb.c b/tests/xe/xe_intel_bb.c
new file mode 100644
index 0000000000..35d61608e1
--- /dev/null
+++ b/tests/xe/xe_intel_bb.c
@@ -0,0 +1,1185 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include <cairo.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <glib.h>
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <zlib.h>
+
+#include "igt.h"
+#include "igt_crc.h"
+#include "intel_bufops.h"
+#include "xe/xe_ioctl.h"
+#include "xe/xe_query.h"
+
+/**
+ * TEST: Basic tests for intel-bb xe functionality
+ * Category: Software building block
+ * Sub-category: xe
+ * Test category: functionality test
+ */
+
+#define PAGE_SIZE 4096
+
+#define WIDTH	64
+#define HEIGHT	64
+#define STRIDE	(WIDTH * 4)
+#define SIZE	(HEIGHT * STRIDE)
+
+#define COLOR_00	0x00
+#define COLOR_33	0x33
+#define COLOR_77	0x77
+#define COLOR_CC	0xcc
+
+IGT_TEST_DESCRIPTION("xe_intel_bb API check.");
+
+static bool debug_bb;
+static bool write_png;
+static bool buf_info;
+static bool print_base64;
+
+static void *alloc_aligned(uint64_t size)
+{
+	void *p;
+
+	igt_assert_eq(posix_memalign(&p, 16, size), 0);
+
+	return p;
+}
+
+static void fill_buf(struct intel_buf *buf, uint8_t color)
+{
+	uint8_t *ptr;
+	int xe = buf_ops_get_fd(buf->bops);
+	int i;
+
+	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
+
+	for (i = 0; i < buf->surface[0].size; i++)
+		ptr[i] = color;
+
+	munmap(ptr, buf->surface[0].size);
+}
+
+static void check_buf(struct intel_buf *buf, uint8_t color)
+{
+	uint8_t *ptr;
+	int xe = buf_ops_get_fd(buf->bops);
+	int i;
+
+	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
+
+	for (i = 0; i < buf->surface[0].size; i++)
+		igt_assert(ptr[i] == color);
+
+	munmap(ptr, buf->surface[0].size);
+}
+
+static struct intel_buf *
+create_buf(struct buf_ops *bops, int width, int height, uint8_t color)
+{
+	struct intel_buf *buf;
+
+	buf = calloc(1, sizeof(*buf));
+	igt_assert(buf);
+
+	intel_buf_init(bops, buf, width/4, height, 32, 0, I915_TILING_NONE, 0);
+	fill_buf(buf, color);
+
+	return buf;
+}
+
+static void print_buf(struct intel_buf *buf, const char *name)
+{
+	uint8_t *ptr;
+	int xe = buf_ops_get_fd(buf->bops);
+
+	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
+
+	igt_debug("[%s] Buf handle: %d, size: %" PRIu64
+		  ", v: 0x%02x, presumed_addr: %p\n",
+		  name, buf->handle, buf->surface[0].size, ptr[0],
+		  from_user_pointer(buf->addr.offset));
+	munmap(ptr, buf->surface[0].size);
+}
+
+/**
+ * SUBTEST: reset-bb
+ * Description: check bb reset
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void reset_bb(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	intel_bb_reset(ibb, false);
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: purge-bb
+ * Description: check full bb reset (purge)
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void purge_bb(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_buf *buf;
+	struct intel_bb *ibb;
+	uint64_t offset0, offset1;
+
+	buf = intel_buf_create(bops, 512, 512, 32, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	ibb = intel_bb_create(xe, 4096);
+	intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset0 = buf->addr.offset;
+
+	intel_bb_reset(ibb, true);
+	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset1 = buf->addr.offset;
+
+	igt_assert(offset0 == offset1);
+
+	intel_buf_destroy(buf);
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: simple-%s
+ * Description: Run simple bb xe %arg[1] test
+ * Run type: BAT
+ *
+ * arg[1]:
+ *
+ * @bb:     bb
+ * @bb-ctx: bb-ctx
+ */
+static void simple_bb(struct buf_ops *bops, bool new_context)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t ctx = 0;
+
+	ibb = intel_bb_create_with_allocator(xe, ctx, NULL, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
+	intel_bb_ptr_align(ibb, 8);
+
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	/* Check we're safe with reset and no double-free will occur */
+	intel_bb_reset(ibb, true);
+	intel_bb_reset(ibb, false);
+	intel_bb_reset(ibb, true);
+
+	if (new_context) {
+		ctx = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+		intel_bb_destroy(ibb);
+		ibb = intel_bb_create_with_context(xe, ctx, NULL, PAGE_SIZE);
+		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
+		intel_bb_ptr_align(ibb, 8);
+		intel_bb_exec(ibb, intel_bb_offset(ibb),
+			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC,
+			      true);
+		xe_vm_destroy(xe, ctx);
+	}
+
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: bb-with-allocator
+ * Description: check bb with passed allocator
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void bb_with_allocator(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst;
+	uint32_t ctx = 0;
+
+	ibb = intel_bb_create_with_allocator(xe, ctx, NULL, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	intel_bb_remove_intel_buf(ibb, src);
+	intel_bb_remove_intel_buf(ibb, dst);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: lot-of-buffers
+ * Description: check running bb with many buffers
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+#define NUM_BUFS 500
+static void lot_of_buffers(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *buf[NUM_BUFS];
+	int i;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
+	intel_bb_ptr_align(ibb, 8);
+
+	for (i = 0; i < NUM_BUFS; i++) {
+		buf[i] = intel_buf_create(bops, 4096, 1, 8, 0, I915_TILING_NONE,
+					  I915_COMPRESSION_NONE);
+		if (i % 2)
+			intel_bb_add_intel_buf(ibb, buf[i], false);
+		else
+			intel_bb_add_intel_buf_with_alignment(ibb, buf[i],
+							      0x4000, false);
+	}
+
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+
+	for (i = 0; i < NUM_BUFS; i++)
+		intel_buf_destroy(buf[i]);
+
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: add-remove-objects
+ * Description: check bb object manipulation (add + remove)
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void add_remove_objects(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: destroy-bb
+ * Description: check bb destroy/create
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void destroy_bb(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+
+	/* Check destroy will detach intel_bufs */
+	intel_bb_destroy(ibb);
+	igt_assert(src->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(src->ibb == NULL);
+	igt_assert(mid->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(mid->ibb == NULL);
+	igt_assert(dst->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(dst->ibb == NULL);
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+
+	intel_bb_destroy(ibb);
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+}
+
+/**
+ * SUBTEST: create-in-region
+ * Description: check size validation on available regions
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void create_in_region(struct buf_ops *bops, uint64_t region)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf buf = {};
+	uint32_t handle, offset;
+	uint64_t size;
+	int width = 64;
+	int height = 64;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	size = xe_min_page_size(xe, system_memory(xe));
+	handle = xe_bo_create_flags(xe, 0, size, system_memory(xe));
+	intel_buf_init_full(bops, handle, &buf,
+			    width/4, height, 32, 0,
+			    I915_TILING_NONE, 0,
+			    size, 0, region);
+	intel_buf_set_ownership(&buf, true);
+
+	intel_bb_add_intel_buf(ibb, &buf, false);
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+
+	intel_buf_close(bops, &buf);
+	intel_bb_destroy(ibb);
+}
+
+static void __emit_blit(struct intel_bb *ibb,
+			 struct intel_buf *src, struct intel_buf *dst)
+{
+	intel_bb_emit_blt_copy(ibb,
+			       src, 0, 0, src->surface[0].stride,
+			       dst, 0, 0, dst->surface[0].stride,
+			       intel_buf_width(dst),
+			       intel_buf_height(dst),
+			       dst->bpp);
+}
+
+/**
+ * SUBTEST: blit-%s
+ * Description: Run blit on %arg[1] allocator
+ * Run type: BAT
+ *
+ * arg[1]:
+ *
+ * @simple:				simple
+ * @reloc:				reloc
+ */
+static void blit(struct buf_ops *bops, uint8_t allocator_type)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst;
+	uint64_t poff_src, poff_dst;
+	uint64_t flags = 0;
+
+	ibb = intel_bb_create_with_allocator(xe, 0, NULL, PAGE_SIZE,
+					     allocator_type);
+	flags |= I915_EXEC_NO_RELOC;
+
+	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
+	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
+
+	if (buf_info) {
+		print_buf(src, "src");
+		print_buf(dst, "dst");
+	}
+
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	__emit_blit(ibb, src, dst);
+	intel_bb_emit_bbe(ibb);
+	intel_bb_flush_blit(ibb);
+	intel_bb_sync(ibb);
+	intel_bb_reset(ibb, false);
+	check_buf(dst, COLOR_CC);
+
+	poff_src = intel_bb_get_object_offset(ibb, src->handle);
+	poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
+
+	/* Add buffers again */
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	igt_assert_f(poff_src == src->addr.offset,
+		     "prev src addr: %" PRIx64 " <> src addr %" PRIx64 "\n",
+		     poff_src, src->addr.offset);
+	igt_assert_f(poff_dst == dst->addr.offset,
+		     "prev dst addr: %" PRIx64 " <> dst addr %" PRIx64 "\n",
+		     poff_dst, dst->addr.offset);
+
+	fill_buf(src, COLOR_77);
+	fill_buf(dst, COLOR_00);
+
+	__emit_blit(ibb, src, dst);
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+	check_buf(dst, COLOR_77);
+
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+	check_buf(dst, COLOR_77);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
+static void scratch_buf_init(struct buf_ops *bops,
+			     struct intel_buf *buf,
+			     int width, int height,
+			     uint32_t req_tiling,
+			     enum i915_compression compression)
+{
+	int fd = buf_ops_get_fd(bops);
+	int bpp = 32;
+
+	/*
+	 * We use system memory even when vram is available because wc
+	 * mapping is extremely slow.
+	 */
+	intel_buf_init_in_region(bops, buf, width, height, bpp, 0,
+				 req_tiling, compression,
+				 system_memory(fd));
+
+	igt_assert(intel_buf_width(buf) == width);
+	igt_assert(intel_buf_height(buf) == height);
+}
+
+static void scratch_buf_draw_pattern(struct buf_ops *bops,
+				     struct intel_buf *buf,
+				     int x, int y, int w, int h,
+				     int cx, int cy, int cw, int ch,
+				     bool use_alternate_colors)
+{
+	cairo_surface_t *surface;
+	cairo_pattern_t *pat;
+	cairo_t *cr;
+	void *linear;
+
+	linear = alloc_aligned(buf->surface[0].size);
+
+	surface = cairo_image_surface_create_for_data(linear,
+						      CAIRO_FORMAT_RGB24,
+						      intel_buf_width(buf),
+						      intel_buf_height(buf),
+						      buf->surface[0].stride);
+
+	cr = cairo_create(surface);
+
+	cairo_rectangle(cr, cx, cy, cw, ch);
+	cairo_clip(cr);
+
+	pat = cairo_pattern_create_mesh();
+	cairo_mesh_pattern_begin_patch(pat);
+	cairo_mesh_pattern_move_to(pat, x,   y);
+	cairo_mesh_pattern_line_to(pat, x+w, y);
+	cairo_mesh_pattern_line_to(pat, x+w, y+h);
+	cairo_mesh_pattern_line_to(pat, x,   y+h);
+	if (use_alternate_colors) {
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 0, 0.0, 1.0, 1.0);
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 1, 1.0, 0.0, 1.0);
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 2, 1.0, 1.0, 0.0);
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 3, 0.0, 0.0, 0.0);
+	} else {
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 0, 1.0, 0.0, 0.0);
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 1, 0.0, 1.0, 0.0);
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 2, 0.0, 0.0, 1.0);
+		cairo_mesh_pattern_set_corner_color_rgb(pat, 3, 1.0, 1.0, 1.0);
+	}
+	cairo_mesh_pattern_end_patch(pat);
+
+	cairo_rectangle(cr, x, y, w, h);
+	cairo_set_source(cr, pat);
+	cairo_fill(cr);
+	cairo_pattern_destroy(pat);
+
+	cairo_destroy(cr);
+
+	cairo_surface_destroy(surface);
+
+	linear_to_intel_buf(bops, buf, linear);
+
+	free(linear);
+}
+
+#define GROUP_SIZE 4096
+static int compare_detail(const uint32_t *ptr1, uint32_t *ptr2,
+			  uint32_t size)
+{
+	int i, ok = 0, fail = 0;
+	int groups = size / GROUP_SIZE;
+	int *hist = calloc(groups, sizeof(*hist));
+
+	igt_debug("size: %d, group_size: %d, groups: %d\n",
+		  size, GROUP_SIZE, groups);
+
+	for (i = 0; i < size / sizeof(uint32_t); i++) {
+		if (ptr1[i] == ptr2[i]) {
+			ok++;
+		} else {
+			fail++;
+			hist[i * sizeof(uint32_t) / GROUP_SIZE]++;
+		}
+	}
+
+	for (i = 0; i < groups; i++) {
+		if (hist[i])
+			igt_debug("[group %4x]: %d\n", i, hist[i]);
+	}
+	free(hist);
+
+	igt_debug("ok: %d, fail: %d\n", ok, fail);
+
+	return fail;
+}
+
+static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2,
+			 bool detail_compare)
+{
+	void *ptr1, *ptr2;
+	int fd1, fd2, ret;
+
+	igt_assert(buf1->surface[0].size == buf2->surface[0].size);
+
+	fd1 = buf_ops_get_fd(buf1->bops);
+	fd2 = buf_ops_get_fd(buf2->bops);
+
+	ptr1 = xe_bo_map(fd1, buf1->handle, buf1->surface[0].size);
+	ptr2 = xe_bo_map(fd2, buf2->handle, buf2->surface[0].size);
+	ret = memcmp(ptr1, ptr2, buf1->surface[0].size);
+	if (detail_compare)
+		ret = compare_detail(ptr1, ptr2, buf1->surface[0].size);
+
+	munmap(ptr1, buf1->surface[0].size);
+	munmap(ptr2, buf2->surface[0].size);
+
+	return ret;
+}
+
+#define LINELEN 76ul
+static int dump_base64(const char *name, struct intel_buf *buf)
+{
+	void *ptr;
+	int fd, ret;
+	uLongf outsize = buf->surface[0].size * 3 / 2;
+	Bytef *destbuf = malloc(outsize);
+	gchar *str, *pos;
+
+	fd = buf_ops_get_fd(buf->bops);
+
+	ptr = gem_mmap__device_coherent(fd, buf->handle, 0,
+					buf->surface[0].size, PROT_READ);
+
+	ret = compress2(destbuf, &outsize, ptr, buf->surface[0].size,
+			Z_BEST_COMPRESSION);
+	if (ret != Z_OK) {
+		igt_warn("error compressing, ret: %d\n", ret);
+	} else {
+		igt_info("compressed %" PRIu64 " -> %lu\n",
+			 buf->surface[0].size, outsize);
+
+		igt_info("--- %s ---\n", name);
+		pos = str = g_base64_encode(destbuf, outsize);
+		outsize = strlen(str);
+		while (pos) {
+			char line[LINELEN + 1];
+			int to_copy = min(LINELEN, outsize);
+
+			memcpy(line, pos, to_copy);
+			line[to_copy] = 0;
+			igt_info("%s\n", line);
+			pos += LINELEN;
+			outsize -= to_copy;
+
+			if (outsize == 0)
+				break;
+		}
+		free(str);
+	}
+
+	munmap(ptr, buf->surface[0].size);
+	free(destbuf);
+
+	return ret;
+}
+
+static int __do_intel_bb_blit(struct buf_ops *bops, uint32_t tiling)
+{
+	struct intel_bb *ibb;
+	const int width = 1024;
+	const int height = 1024;
+	struct intel_buf src, dst, final;
+	char name[128];
+	int xe = buf_ops_get_fd(bops), fails;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+	scratch_buf_init(bops, &dst, width, height, tiling,
+			 I915_COMPRESSION_NONE);
+	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+
+	if (buf_info) {
+		intel_buf_print(&src);
+		intel_buf_print(&dst);
+	}
+
+	scratch_buf_draw_pattern(bops, &src,
+				 0, 0, width, height,
+				 0, 0, width, height, 0);
+
+	intel_bb_blt_copy(ibb,
+			  &src, 0, 0, src.surface[0].stride,
+			  &dst, 0, 0, dst.surface[0].stride,
+			  intel_buf_width(&dst),
+			  intel_buf_height(&dst),
+			  dst.bpp);
+
+	intel_bb_blt_copy(ibb,
+			  &dst, 0, 0, dst.surface[0].stride,
+			  &final, 0, 0, final.surface[0].stride,
+			  intel_buf_width(&dst),
+			  intel_buf_height(&dst),
+			  dst.bpp);
+
+	igt_assert(intel_bb_sync(ibb) == 0);
+	intel_bb_destroy(ibb);
+
+	if (write_png) {
+		snprintf(name, sizeof(name) - 1,
+			 "bb_blit_dst_tiling_%d.png", tiling);
+		intel_buf_write_to_png(&src, "bb_blit_src_tiling_none.png");
+		intel_buf_write_to_png(&dst, name);
+		intel_buf_write_to_png(&final, "bb_blit_final_tiling_none.png");
+	}
+
+	/* We'll fail on src <-> final compare so just warn */
+	if (tiling == I915_TILING_NONE) {
+		if (compare_bufs(&src, &dst, false) > 0)
+			igt_warn("none->none blit failed!");
+	} else {
+		if (compare_bufs(&src, &dst, false) == 0)
+			igt_warn("none->tiled blit failed!");
+	}
+
+	fails = compare_bufs(&src, &final, true);
+
+	intel_buf_close(bops, &src);
+	intel_buf_close(bops, &dst);
+	intel_buf_close(bops, &final);
+
+	return fails;
+}
+
+/**
+ * SUBTEST: intel-bb-blit-%s
+ * Description: Run intel-bb blit with %arg[1] tiling
+ * Run type: BAT
+ *
+ * arg[1]:
+ *
+ * @none:				none
+ * @x:					x
+ * @y:					y
+ */
+static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling)
+{
+	int i, fails = 0, xe = buf_ops_get_fd(bops);
+
+	/* We'll fix it for gen2/3 later. */
+	igt_require(intel_gen(intel_get_drm_devid(xe)) > 3);
+
+	for (i = 0; i < loops; i++)
+		fails += __do_intel_bb_blit(bops, tiling);
+
+	igt_assert_f(fails == 0, "intel-bb-blit (tiling: %d) fails: %d\n",
+		     tiling, fails);
+}
+
+/**
+ * SUBTEST: offset-control
+ * Description: check offsets are preserved with the default simple allocator
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void offset_control(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst1, *dst2, *dst3;
+	uint64_t poff_src, poff_dst1, poff_dst2;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
+	dst1 = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
+	dst2 = create_buf(bops, WIDTH, HEIGHT, COLOR_77);
+
+	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
+			    src->addr.offset, 0, false);
+	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
+			    dst1->addr.offset, 0, true);
+	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
+			    dst2->addr.offset, 0, true);
+
+	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
+	intel_bb_ptr_align(ibb, 8);
+
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
+
+	if (buf_info) {
+		print_buf(src, "src ");
+		print_buf(dst1, "dst1");
+		print_buf(dst2, "dst2");
+	}
+
+	poff_src = src->addr.offset;
+	poff_dst1 = dst1->addr.offset;
+	poff_dst2 = dst2->addr.offset;
+	intel_bb_reset(ibb, true);
+
+	dst3 = create_buf(bops, WIDTH, HEIGHT, COLOR_33);
+	intel_bb_add_object(ibb, dst3->handle, intel_buf_bo_size(dst3),
+			    dst3->addr.offset, 0, true);
+	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
+			    src->addr.offset, 0, false);
+	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
+			    dst1->addr.offset, 0, true);
+	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
+			    dst2->addr.offset, 0, true);
+
+	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
+	intel_bb_ptr_align(ibb, 8);
+
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
+	intel_bb_sync(ibb);
+	intel_bb_reset(ibb, true);
+
+	igt_assert(poff_src == src->addr.offset);
+	igt_assert(poff_dst1 == dst1->addr.offset);
+	igt_assert(poff_dst2 == dst2->addr.offset);
+
+	if (buf_info) {
+		print_buf(src, "src ");
+		print_buf(dst1, "dst1");
+		print_buf(dst2, "dst2");
+	}
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst1);
+	intel_buf_destroy(dst2);
+	intel_buf_destroy(dst3);
+	intel_bb_destroy(ibb);
+}
+
+/*
+ * The idea of this test is to verify that the delta is properly
+ * added to the address when emit_reloc() is called.
+ */
+
+/**
+ * SUBTEST: delta-check
+ * Description: check delta is honoured in intel-bb pipelines
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+#define DELTA_BUFFERS 3
+static void delta_check(struct buf_ops *bops)
+{
+	const uint32_t expected = 0x1234abcd;
+	int xe = buf_ops_get_fd(bops);
+	uint32_t *ptr, hi, lo, val;
+	struct intel_buf *buf;
+	struct intel_bb *ibb;
+	uint64_t offset;
+	uint64_t obj_size = xe_get_default_alignment(xe) + 0x2000;
+	uint64_t obj_offset = (1ULL << 32) - xe_get_default_alignment(xe);
+	uint64_t delta = xe_get_default_alignment(xe) + 0x1000;
+
+	ibb = intel_bb_create_with_allocator(xe, 0, NULL, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	buf = create_buf(bops, obj_size, 0x1, COLOR_CC);
+	buf->addr.offset = obj_offset;
+	intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
+			    buf->addr.offset, 0, false);
+
+	intel_bb_out(ibb, MI_STORE_DWORD_IMM_GEN4);
+	intel_bb_emit_reloc(ibb, buf->handle,
+			    I915_GEM_DOMAIN_RENDER,
+			    I915_GEM_DOMAIN_RENDER,
+			    delta, buf->addr.offset);
+	intel_bb_out(ibb, expected);
+
+	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
+	intel_bb_ptr_align(ibb, 8);
+
+	intel_bb_exec(ibb, intel_bb_offset(ibb), I915_EXEC_DEFAULT, false);
+	intel_bb_sync(ibb);
+
+	/* Buffer should be @ obj_offset */
+	offset = intel_bb_get_object_offset(ibb, buf->handle);
+	igt_assert_eq_u64(offset, obj_offset);
+
+	ptr = xe_bo_map(xe, ibb->handle, ibb->size);
+	lo = ptr[1];
+	hi = ptr[2];
+	gem_munmap(ptr, ibb->size);
+
+	ptr = xe_bo_map(xe, buf->handle, intel_buf_size(buf));
+	val = ptr[delta / sizeof(uint32_t)];
+	gem_munmap(ptr, intel_buf_size(buf));
+
+	intel_buf_destroy(buf);
+	intel_bb_destroy(ibb);
+
+	/* Assert after all resources are freed */
+	igt_assert_f(lo == 0x1000 && hi == 0x1,
+		     "intel-bb doesn't properly handle delta in emit relocation\n");
+	igt_assert_f(val == expected,
+		     "Address doesn't contain expected [%x] value [%x]\n",
+		     expected, val);
+}
+
+/**
+ * SUBTEST: full-batch
+ * Description: check that a fully filled bb executes correctly
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static void full_batch(struct buf_ops *bops)
+{
+	int xe = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	int i;
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	for (i = 0; i < PAGE_SIZE / sizeof(uint32_t) - 1; i++)
+		intel_bb_out(ibb, 0);
+	intel_bb_emit_bbe(ibb);
+
+	igt_assert(intel_bb_offset(ibb) == PAGE_SIZE);
+	intel_bb_exec(ibb, intel_bb_offset(ibb),
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+	intel_bb_reset(ibb, false);
+
+	intel_bb_destroy(ibb);
+}
+
+/**
+ * SUBTEST: render
+ * Description: check intel-bb render pipeline
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+static int render(struct buf_ops *bops, uint32_t tiling,
+		  uint32_t width, uint32_t height)
+{
+	struct intel_bb *ibb;
+	struct intel_buf src, dst, final;
+	int xe = buf_ops_get_fd(bops);
+	uint32_t fails = 0;
+	char name[128];
+	uint32_t devid = intel_get_drm_devid(xe);
+	igt_render_copyfunc_t render_copy = NULL;
+
+	igt_debug("%s() gen: %d\n", __func__, intel_gen(devid));
+
+	ibb = intel_bb_create(xe, PAGE_SIZE);
+
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	if (print_base64)
+		intel_bb_set_dump_base64(ibb, true);
+
+	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+	scratch_buf_init(bops, &dst, width, height, tiling,
+			 I915_COMPRESSION_NONE);
+	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+
+	scratch_buf_draw_pattern(bops, &src,
+				 0, 0, width, height,
+				 0, 0, width, height, 0);
+
+	render_copy = igt_get_render_copyfunc(devid);
+	igt_assert(render_copy);
+
+	render_copy(ibb,
+		    &src,
+		    0, 0, width, height,
+		    &dst,
+		    0, 0);
+
+	render_copy(ibb,
+		    &dst,
+		    0, 0, width, height,
+		    &final,
+		    0, 0);
+
+	intel_bb_sync(ibb);
+	intel_bb_destroy(ibb);
+
+	if (write_png) {
+		snprintf(name, sizeof(name) - 1,
+			 "render_dst_tiling_%d.png", tiling);
+		intel_buf_write_to_png(&src, "render_src_tiling_none.png");
+		intel_buf_write_to_png(&dst, name);
+		intel_buf_write_to_png(&final, "render_final_tiling_none.png");
+	}
+
+	/* We'll fail on src <-> final compare so just warn */
+	if (tiling == I915_TILING_NONE) {
+		if (compare_bufs(&src, &dst, false) > 0)
+			igt_warn("%s: none->none failed!\n", __func__);
+	} else {
+		if (compare_bufs(&src, &dst, false) == 0)
+			igt_warn("%s: none->tiled failed!\n", __func__);
+	}
+
+	fails = compare_bufs(&src, &final, true);
+
+	if (fails && print_base64) {
+		dump_base64("src", &src);
+		dump_base64("dst", &dst);
+		dump_base64("final", &final);
+	}
+
+	intel_buf_close(bops, &src);
+	intel_buf_close(bops, &dst);
+	intel_buf_close(bops, &final);
+
+	igt_assert_f(fails == 0, "%s: (tiling: %d) fails: %d\n",
+		     __func__, tiling, fails);
+
+	return fails;
+}
+
+static int opt_handler(int opt, int opt_index, void *data)
+{
+	switch (opt) {
+	case 'd':
+		debug_bb = true;
+		break;
+	case 'p':
+		write_png = true;
+		break;
+	case 'i':
+		buf_info = true;
+		break;
+	case 'b':
+		print_base64 = true;
+		break;
+	default:
+		return IGT_OPT_HANDLER_ERROR;
+	}
+
+	return IGT_OPT_HANDLER_SUCCESS;
+}
+
+const char *help_str =
+	"  -d\tDebug bb\n"
+	"  -p\tWrite surfaces to png\n"
+	"  -i\tPrint buffer info\n"
+	"  -b\tDump to base64 (bb and images)\n"
+	;
+
+igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
+{
+	int xe, i;
+	struct buf_ops *bops;
+	uint32_t width;
+
+	struct test {
+		uint32_t tiling;
+		const char *tiling_name;
+	} tests[] = {
+		{ I915_TILING_NONE, "none" },
+		{ I915_TILING_X, "x" },
+		{ I915_TILING_Y, "y" },
+	};
+
+	igt_fixture {
+		xe = drm_open_driver(DRIVER_XE);
+		bops = buf_ops_create(xe);
+		xe_device_get(xe);
+	}
+
+	igt_describe("Ensure reset is possible on fresh bb");
+	igt_subtest("reset-bb")
+		reset_bb(bops);
+
+	igt_subtest_f("purge-bb")
+		purge_bb(bops);
+
+	igt_subtest("simple-bb")
+		simple_bb(bops, false);
+
+	igt_subtest("simple-bb-ctx")
+		simple_bb(bops, true);
+
+	igt_subtest("bb-with-allocator")
+		bb_with_allocator(bops);
+
+	igt_subtest("lot-of-buffers")
+		lot_of_buffers(bops);
+
+	igt_subtest("add-remove-objects")
+		add_remove_objects(bops);
+
+	igt_subtest("destroy-bb")
+		destroy_bb(bops);
+
+	igt_subtest_with_dynamic("create-in-region") {
+		uint64_t memreg = all_memory_regions(xe), region;
+
+		xe_for_each_mem_region(fd, memreg, region)
+			igt_dynamic_f("region-%s", xe_region_name(region))
+				create_in_region(bops, region);
+	}
+
+	igt_subtest("blit-simple")
+		blit(bops, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-reloc")
+		blit(bops, INTEL_ALLOCATOR_RELOC);
+
+	igt_subtest("intel-bb-blit-none")
+		do_intel_bb_blit(bops, 3, I915_TILING_NONE);
+
+	igt_subtest("intel-bb-blit-x")
+		do_intel_bb_blit(bops, 3, I915_TILING_X);
+
+	igt_subtest("intel-bb-blit-y") {
+		igt_require(intel_gen(intel_get_drm_devid(xe)) >= 6);
+		do_intel_bb_blit(bops, 3, I915_TILING_Y);
+	}
+
+	igt_subtest("offset-control")
+		offset_control(bops);
+
+	igt_subtest("delta-check")
+		delta_check(bops);
+
+	igt_subtest("full-batch")
+		full_batch(bops);
+
+	igt_subtest_with_dynamic("render") {
+		for (i = 0; i < ARRAY_SIZE(tests); i++) {
+			const struct test *t = &tests[i];
+
+			for (width = 512; width <= 1024; width += 512)
+				igt_dynamic_f("render-%s-%u", t->tiling_name, width)
+					render(bops, t->tiling, width, width);
+		}
+	}
+
+	igt_fixture {
+		xe_device_put(xe);
+		buf_ops_destroy(bops);
+		close(xe);
+	}
+}
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [igt-dev] [PATCH i-g-t v8 12/17] tests/xe-fast-feedback: Add xe_intel_bb test to BAT
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (10 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check if intel-bb Xe support correctness Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:59   ` Kamil Konieczny
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 13/17] lib/gpgpu_fill: Use RENDER engine flag to work on Xe Zbigniew Kempczyński
                   ` (7 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Verifies intel-bb integration with xe.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 tests/intel-ci/xe-fast-feedback.testlist | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index 987e72ef4b..3d0603fd94 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -106,6 +106,24 @@ igt@xe_guc_pc@freq_range_idle
 igt@xe_guc_pc@rc6_on_idle
 igt@xe_guc_pc@rc0_on_exec
 igt@xe_huc_copy@huc_copy
+igt@xe_intel_bb@add-remove-objects
+igt@xe_intel_bb@bb-with-allocator
+igt@xe_intel_bb@blit-reloc
+igt@xe_intel_bb@blit-simple
+igt@xe_intel_bb@create-in-region
+igt@xe_intel_bb@delta-check
+igt@xe_intel_bb@destroy-bb
+igt@xe_intel_bb@full-batch
+igt@xe_intel_bb@intel-bb-blit-none
+igt@xe_intel_bb@intel-bb-blit-x
+igt@xe_intel_bb@intel-bb-blit-y
+igt@xe_intel_bb@lot-of-buffers
+igt@xe_intel_bb@offset-control
+igt@xe_intel_bb@purge-bb
+igt@xe_intel_bb@render
+igt@xe_intel_bb@reset-bb
+igt@xe_intel_bb@simple-bb
+igt@xe_intel_bb@simple-bb-ctx
 igt@xe_mmap@system
 igt@xe_mmap@vram
 igt@xe_mmap@vram-system
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 13/17] lib/gpgpu_fill: Use RENDER engine flag to work on Xe
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (11 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 12/17] tests/xe-fast-feedback: Add xe_intel_bb test to BAT Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 14/17] tests/xe_gpgpu_fill: Exercise gpgpu fill " Zbigniew Kempczyński
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Using I915_EXEC_DEFAULT works fine on i915, where the kernel selects an
appropriate command streamer. Unfortunately it cannot be used on Xe,
which requires explicit engine selection. Submitting gpgpu work on the
render engine is fine, so switching to I915_EXEC_RENDER doesn't break
i915 and allows running on a valid engine on Xe.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/gpgpu_fill.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/gpgpu_fill.c b/lib/gpgpu_fill.c
index 4f7bab68f2..4db8775145 100644
--- a/lib/gpgpu_fill.c
+++ b/lib/gpgpu_fill.c
@@ -288,7 +288,7 @@ __gen9_gpgpu_fillfunc(int i915,
 	intel_bb_ptr_align(ibb, 32);
 
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
-		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+		      I915_EXEC_RENDER | I915_EXEC_NO_RELOC, true);
 
 	intel_bb_destroy(ibb);
 }
@@ -329,7 +329,7 @@ __xehp_gpgpu_fillfunc(int i915,
 	intel_bb_ptr_align(ibb, 32);
 
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
-		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+		      I915_EXEC_RENDER | I915_EXEC_NO_RELOC, true);
 
 	intel_bb_destroy(ibb);
 }
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 14/17] tests/xe_gpgpu_fill: Exercise gpgpu fill on Xe
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (12 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 13/17] lib/gpgpu_fill: Use RENDER engine flag to work on Xe Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 15/17] lib/igt_fb: For xe assume vram is used on discrete Zbigniew Kempczyński
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Reuse on Xe the gpgpu fill already exercised on i915.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 tests/intel-ci/xe-fast-feedback.testlist |   1 +
 tests/meson.build                        |   1 +
 tests/xe/xe_gpgpu_fill.c                 | 135 +++++++++++++++++++++++
 3 files changed, 137 insertions(+)
 create mode 100644 tests/xe/xe_gpgpu_fill.c

diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index 3d0603fd94..ecf44ca736 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -100,6 +100,7 @@ igt@xe_exec_threads@threads-mixed-fd-basic
 igt@xe_exec_threads@threads-bal-mixed-basic
 igt@xe_exec_threads@threads-bal-mixed-shared-vm-basic
 igt@xe_exec_threads@threads-bal-mixed-fd-basic
+igt@xe_gpgpu_fill@basic
 igt@xe_guc_pc@freq_basic_api
 igt@xe_guc_pc@freq_fixed_idle
 igt@xe_guc_pc@freq_range_idle
diff --git a/tests/meson.build b/tests/meson.build
index b026fac48b..c15eb3a08c 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -254,6 +254,7 @@ xe_progs = [
 	'xe_exec_fault_mode',
 	'xe_exec_reset',
 	'xe_exec_threads',
+	'xe_gpgpu_fill',
 	'xe_guc_pc',
 	'xe_huc_copy',
 	'xe_intel_bb',
diff --git a/tests/xe/xe_gpgpu_fill.c b/tests/xe/xe_gpgpu_fill.c
new file mode 100644
index 0000000000..5bca19c87e
--- /dev/null
+++ b/tests/xe/xe_gpgpu_fill.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+/**
+ * TEST: Basic tests for gpgpu functionality
+ * Category: Software building block
+ * Sub-category: gpgpu
+ * Test category: functionality test
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <sys/time.h>
+#include <unistd.h>
+
+#include "drm.h"
+#include "i915/gem.h"
+#include "igt.h"
+#include "igt_collection.h"
+#include "intel_bufops.h"
+#include "xe/xe_ioctl.h"
+#include "xe/xe_query.h"
+
+#define WIDTH 64
+#define HEIGHT 64
+#define STRIDE (WIDTH)
+#define SIZE (HEIGHT*STRIDE)
+#define COLOR_C4	0xc4
+#define COLOR_4C	0x4c
+
+typedef struct {
+	int drm_fd;
+	uint32_t devid;
+	struct buf_ops *bops;
+} data_t;
+
+static struct intel_buf *
+create_buf(data_t *data, int width, int height, uint8_t color, uint64_t region)
+{
+	struct intel_buf *buf;
+	uint8_t *ptr;
+	int i;
+
+	buf = calloc(1, sizeof(*buf));
+	igt_assert(buf);
+
+	buf = intel_buf_create(data->bops, width/4, height, 32, 0,
+			       I915_TILING_NONE, 0);
+
+	ptr = xe_bo_map(data->drm_fd, buf->handle, buf->surface[0].size);
+
+	for (i = 0; i < buf->surface[0].size; i++)
+		ptr[i] = color;
+
+	munmap(ptr, buf->surface[0].size);
+
+	return buf;
+}
+
+static void buf_check(uint8_t *ptr, int x, int y, uint8_t color)
+{
+	uint8_t val;
+
+	val = ptr[y * WIDTH + x];
+	igt_assert_f(val == color,
+		     "Expected 0x%02x, found 0x%02x at (%d,%d)\n",
+		     color, val, x, y);
+}
+
+/**
+ * SUBTEST: basic
+ * Description: run gpgpu fill
+ * Run type: FULL
+ * TODO: change ``'Run type' == FULL`` to a better category
+ */
+
+static void gpgpu_fill(data_t *data, igt_fillfunc_t fill, uint32_t region)
+{
+	struct intel_buf *buf;
+	uint8_t *ptr;
+	int i, j;
+
+	buf = create_buf(data, WIDTH, HEIGHT, COLOR_C4, region);
+	ptr = xe_bo_map(data->drm_fd, buf->handle, buf->surface[0].size);
+
+	for (i = 0; i < WIDTH; i++)
+		for (j = 0; j < HEIGHT; j++)
+			buf_check(ptr, i, j, COLOR_C4);
+
+	fill(data->drm_fd, buf, 0, 0, WIDTH / 2, HEIGHT / 2, COLOR_4C);
+
+	for (i = 0; i < WIDTH; i++)
+		for (j = 0; j < HEIGHT; j++)
+			if (i < WIDTH / 2 && j < HEIGHT / 2)
+				buf_check(ptr, i, j, COLOR_4C);
+			else
+				buf_check(ptr, i, j, COLOR_C4);
+
+	munmap(ptr, buf->surface[0].size);
+}
+
+igt_main
+{
+	data_t data = {0, };
+	igt_fillfunc_t fill_fn = NULL;
+
+	igt_fixture {
+		data.drm_fd = drm_open_driver_render(DRIVER_XE);
+		data.devid = intel_get_drm_devid(data.drm_fd);
+		data.bops = buf_ops_create(data.drm_fd);
+
+		fill_fn = igt_get_gpgpu_fillfunc(data.devid);
+		igt_require_f(fill_fn, "no gpgpu-fill function\n");
+
+		xe_device_get(data.drm_fd);
+	}
+
+	igt_subtest("basic") {
+		gpgpu_fill(&data, fill_fn, 0);
+	}
+
+	igt_fixture {
+		xe_device_put(data.drm_fd);
+		buf_ops_destroy(data.bops);
+	}
+}
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 15/17] lib/igt_fb: For xe assume vram is used on discrete
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (13 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 14/17] tests/xe_gpgpu_fill: Exercise gpgpu fill " Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 16/17] lib/igt_draw: Pass region while building intel_buf from flink Zbigniew Kempczyński
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

Assume fb bos were created in vram on discrete devices; otherwise use
system memory.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
---
v2: use -1 for i915 region (Bhanu)
---
 lib/igt_fb.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index 7379b99aa8..df3d7d91a9 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2545,6 +2545,7 @@ igt_fb_create_intel_buf(int fd, struct buf_ops *bops,
 {
 	struct intel_buf *buf;
 	uint32_t bo_name, handle, compression;
+	uint64_t region;
 	int num_surfaces;
 	int i;
 
@@ -2571,12 +2572,16 @@ igt_fb_create_intel_buf(int fd, struct buf_ops *bops,
 	bo_name = gem_flink(fd, fb->gem_handle);
 	handle = gem_open(fd, bo_name);
 
-	buf = intel_buf_create_using_handle_and_size(bops, handle,
-						     fb->width, fb->height,
-						     fb->plane_bpp[0], 0,
-						     igt_fb_mod_to_tiling(fb->modifier),
-						     compression, fb->size,
-						     fb->strides[0]);
+	/* For i915 the region doesn't matter, for xe it does */
+	region = buf_ops_get_driver(bops) == INTEL_DRIVER_XE ?
+				vram_if_possible(fd, 0) : -1;
+	buf = intel_buf_create_full(bops, handle,
+				    fb->width, fb->height,
+				    fb->plane_bpp[0], 0,
+				    igt_fb_mod_to_tiling(fb->modifier),
+				    compression, fb->size,
+				    fb->strides[0],
+				    region);
 	intel_buf_set_name(buf, name);
 
 	/* Make sure we close handle on destroy path */
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 16/17] lib/igt_draw: Pass region while building intel_buf from flink
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (14 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 15/17] lib/igt_fb: For xe assume vram is used on discrete Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 17/17] tests/kms_big_fb: Deduce region for xe framebuffer Zbigniew Kempczyński
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

For i915 the region doesn't matter, but for xe we need to be stricter
and region aware due to size and alignment constraints. As we have no
information about the flink buffer's origin, assume the region is vram
if possible; otherwise choose the system region.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
---
 lib/igt_draw.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/lib/igt_draw.c b/lib/igt_draw.c
index ac512fac5a..c5f2fdfe2b 100644
--- a/lib/igt_draw.c
+++ b/lib/igt_draw.c
@@ -37,6 +37,7 @@
 #include "i915/gem_create.h"
 #include "i915/gem_mman.h"
 #include "i915/intel_mocs.h"
+#include "xe/xe_query.h"
 
 #ifndef PAGE_ALIGN
 #ifndef PAGE_SIZE
@@ -634,17 +635,24 @@ static struct intel_buf *create_buf(int fd, struct buf_ops *bops,
 				    struct buf_data *from, uint32_t tiling)
 {
 	struct intel_buf *buf;
+	enum intel_driver driver = buf_ops_get_driver(bops);
 	uint32_t handle, name, width, height;
+	uint64_t region = driver == INTEL_DRIVER_XE ? vram_if_possible(fd, 0) : -1;
+	uint64_t size = from->size;
 
 	width = from->stride / (from->bpp / 8);
 	height = from->size / from->stride;
+	if (driver == INTEL_DRIVER_XE)
+		size = ALIGN(size, xe_get_default_alignment(fd));
 
 	name = gem_flink(fd, from->handle);
 	handle = gem_open(fd, name);
 
-	buf = intel_buf_create_using_handle(bops, handle,
-					    width, height, from->bpp, 0,
-					    tiling, 0);
+	buf = intel_buf_create_full(bops, handle,
+				    width, height, from->bpp, 0,
+				    tiling, 0,
+				    size, 0,
+				    region);
 
 	/* Make sure we close handle on destroy path */
 	intel_buf_set_ownership(buf, true);
-- 
2.34.1


* [igt-dev] [PATCH i-g-t v8 17/17] tests/kms_big_fb: Deduce region for xe framebuffer
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (15 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 16/17] lib/igt_draw: Pass region while building intel_buf from flink Zbigniew Kempczyński
@ 2023-04-28  6:22 ` Zbigniew Kempczyński
  2023-04-28  7:48 ` [igt-dev] ✓ Fi.CI.BAT: success for Integrate intel-bb with Xe (rev11) Patchwork
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  6:22 UTC (permalink / raw)
  To: igt-dev

On discrete, framebuffers reside in vram/device memory; on integrated,
in system memory. Due to xe's size and alignment requirements on
different memory regions, pass the deduced memory region during
intel_buf creation.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reviewed-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
---
 tests/i915/kms_big_fb.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tests/i915/kms_big_fb.c b/tests/i915/kms_big_fb.c
index a29a05a282..a0d5ef7301 100644
--- a/tests/i915/kms_big_fb.c
+++ b/tests/i915/kms_big_fb.c
@@ -28,6 +28,7 @@
 #include <string.h>
 
 #include "i915/gem_create.h"
+#include "xe/xe_query.h"
 
 IGT_TEST_DESCRIPTION("Test big framebuffers");
 
@@ -62,7 +63,10 @@ static struct intel_buf *init_buf(data_t *data,
 				  const char *buf_name)
 {
 	struct intel_buf *buf;
+	enum intel_driver driver = buf_ops_get_driver(data->bops);
 	uint32_t name, handle, tiling, stride, width, height, bpp, size;
+	uint64_t region = driver == INTEL_DRIVER_XE ?
+				vram_if_possible(data->drm_fd, 0) : -1;
 
 	igt_assert_eq(fb->offsets[0], 0);
 
@@ -75,8 +79,10 @@ static struct intel_buf *init_buf(data_t *data,
 
 	name = gem_flink(data->drm_fd, fb->gem_handle);
 	handle = gem_open(data->drm_fd, name);
-	buf = intel_buf_create_using_handle(data->bops, handle, width, height,
-					    bpp, 0, tiling, 0);
+	buf = intel_buf_create_full(data->bops, handle, width, height,
+				    bpp, 0, tiling, 0, size, 0,
+				    region);
+
 	intel_buf_set_name(buf, buf_name);
 	intel_buf_set_ownership(buf, true);
 
-- 
2.34.1


* [igt-dev] ✓ Fi.CI.BAT: success for Integrate intel-bb with Xe (rev11)
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (16 preceding siblings ...)
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 17/17] tests/kms_big_fb: Deduce region for xe framebuffer Zbigniew Kempczyński
@ 2023-04-28  7:48 ` Patchwork
  2023-04-28 10:05 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  2023-04-28 12:52 ` [igt-dev] ✓ Fi.CI.IGT: success " Patchwork
  19 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2023-04-28  7:48 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


== Series Details ==

Series: Integrate intel-bb with Xe (rev11)
URL   : https://patchwork.freedesktop.org/series/116578/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_13071 -> IGTPW_8881
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html

Participating hosts (38 -> 37)
------------------------------

  Additional (1): fi-kbl-soraka 
  Missing    (2): bat-mtlp-8 fi-snb-2520m 

Known issues
------------

  Here are the changes found in IGTPW_8881 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_huc_copy@huc-copy:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][1] ([fdo#109271] / [i915#2190])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-kbl-soraka/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@basic:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][2] ([fdo#109271] / [i915#4613]) +3 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-kbl-soraka/igt@gem_lmem_swapping@basic.html

  * igt@i915_selftest@live@gt_heartbeat:
    - fi-kbl-soraka:      NOTRUN -> [FAIL][3] ([i915#7916])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-kbl-soraka/igt@i915_selftest@live@gt_heartbeat.html

  * igt@i915_selftest@live@gt_pm:
    - fi-kbl-soraka:      NOTRUN -> [FAIL][4] ([i915#7913] / [i915#8383])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-kbl-soraka/igt@i915_selftest@live@gt_pm.html

  * igt@kms_chamelium_frames@hdmi-crc-fast:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][5] ([fdo#109271]) +16 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-kbl-soraka/igt@kms_chamelium_frames@hdmi-crc-fast.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@gt_heartbeat:
    - fi-glk-j4005:       [FAIL][6] ([i915#7916]) -> [PASS][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/fi-glk-j4005/igt@i915_selftest@live@gt_heartbeat.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-glk-j4005/igt@i915_selftest@live@gt_heartbeat.html

  * igt@i915_selftest@live@requests:
    - {bat-mtlp-6}:       [ABORT][8] ([i915#7920]) -> [PASS][9]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/bat-mtlp-6/igt@i915_selftest@live@requests.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/bat-mtlp-6/igt@i915_selftest@live@requests.html

  
#### Warnings ####

  * igt@core_hotunplug@unbind-rebind:
    - fi-kbl-8809g:       [ABORT][10] ([i915#8299]) -> [ABORT][11] ([i915#8299] / [i915#8397])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/fi-kbl-8809g/igt@core_hotunplug@unbind-rebind.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/fi-kbl-8809g/igt@core_hotunplug@unbind-rebind.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#6645]: https://gitlab.freedesktop.org/drm/intel/issues/6645
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7913]: https://gitlab.freedesktop.org/drm/intel/issues/7913
  [i915#7916]: https://gitlab.freedesktop.org/drm/intel/issues/7916
  [i915#7920]: https://gitlab.freedesktop.org/drm/intel/issues/7920
  [i915#8299]: https://gitlab.freedesktop.org/drm/intel/issues/8299
  [i915#8383]: https://gitlab.freedesktop.org/drm/intel/issues/8383
  [i915#8397]: https://gitlab.freedesktop.org/drm/intel/issues/8397


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7273 -> IGTPW_8881

  CI-20190529: 20190529
  CI_DRM_13071: b9458e7075652669ec0e04abe039a5ed001701fe @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_8881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
  IGT_7273: f40ef4b058466219968b7792d22ff0648b82396b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git


Testlist changes
----------------

+igt@kms_plane_scaling@i915-max-src-size
-igt@kms_plane_scaling@intel-max-src-size

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html



* Re: [igt-dev] [PATCH i-g-t v8 06/17] lib/intel_bufops: Add Xe support in bufops
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 06/17] lib/intel_bufops: Add Xe support in bufops Zbigniew Kempczyński
@ 2023-04-28  7:49   ` Kamil Konieczny
  0 siblings, 0 replies; 34+ messages in thread
From: Kamil Konieczny @ 2023-04-28  7:49 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On 2023-04-28 at 08:22:13 +0200, Zbigniew Kempczyński wrote:
> Extend bufops to support Xe:
>  - change region to 64bit region mask,
>  - add initialization helper (full) which allows passing handle,
>    size and region,
>  - mapping functions (read + write) selects driver specific mapping
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
> Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
> v5: add intel_buf_create_full() for future lib/igt_fb changes
>     regarding region
> v6: - fix igt_assert_f()
>     - add buf_ops_get_driver() helper
> v7: - alphabetical includes (Kamil)
>     - assert if region is not valid for xe before bo create (Kamil)
>     - ensure bops pointer is valid on public call (Christoph)
> ---
>  lib/intel_bufops.c | 123 +++++++++++++++++++++++++++++++++++++++++----
>  lib/intel_bufops.h |  24 ++++++++-
>  2 files changed, 136 insertions(+), 11 deletions(-)
> 
> diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
> index cdc7a1698b..46fd981f09 100644
> --- a/lib/intel_bufops.c
> +++ b/lib/intel_bufops.c
> @@ -29,6 +29,8 @@
>  #include "igt.h"
>  #include "igt_x86.h"
>  #include "intel_bufops.h"
> +#include "xe/xe_ioctl.h"
> +#include "xe/xe_query.h"
>  
>  /**
>   * SECTION:intel_bufops
> @@ -106,6 +108,7 @@ typedef void (*bo_copy)(struct buf_ops *, struct intel_buf *, uint32_t *);
>  
>  struct buf_ops {
>  	int fd;
> +	enum intel_driver driver;
>  	int gen_start;
>  	int gen_end;
>  	unsigned int intel_gen;
> @@ -488,6 +491,9 @@ static void *mmap_write(int fd, struct intel_buf *buf)
>  {
>  	void *map = NULL;
>  
> +	if (buf->bops->driver == INTEL_DRIVER_XE)
> +		return xe_bo_map(fd, buf->handle, buf->surface[0].size);
> +
>  	if (gem_has_lmem(fd)) {
>  		/*
>  		 * set/get_caching and set_domain are no longer supported on
> @@ -530,6 +536,9 @@ static void *mmap_read(int fd, struct intel_buf *buf)
>  {
>  	void *map = NULL;
>  
> +	if (buf->bops->driver == INTEL_DRIVER_XE)
> +		return xe_bo_map(fd, buf->handle, buf->surface[0].size);
> +
>  	if (gem_has_lmem(fd)) {
>  		/*
>  		 * set/get_caching and set_domain are no longer supported on
> @@ -809,7 +818,7 @@ static void __intel_buf_init(struct buf_ops *bops,
>  			     int width, int height, int bpp, int alignment,
>  			     uint32_t req_tiling, uint32_t compression,
>  			     uint64_t bo_size, int bo_stride,
> -			     uint32_t region)
> +			     uint64_t region)
>  {
>  	uint32_t tiling = req_tiling;
>  	uint64_t size;
> @@ -899,9 +908,20 @@ static void __intel_buf_init(struct buf_ops *bops,
>  	buf->size = size;
>  	buf->handle = handle;
>  
> -	if (!handle)
> -		if (__gem_create_in_memory_regions(bops->fd, &buf->handle, &size, region))
> -			igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
> +	if (bops->driver == INTEL_DRIVER_XE)
> +		igt_assert_f(region != -1, "Xe requires region awareness, "
> +					   "use api which passes valid region\n");
> +	buf->region = region;
> +
> +	if (!handle) {
> +		if (bops->driver == INTEL_DRIVER_I915) {
> +			if (__gem_create_in_memory_regions(bops->fd, &buf->handle, &size, region))
> +				igt_assert_eq(__gem_create(bops->fd, &size, &buf->handle), 0);
> +		} else {
> +			size = ALIGN(size, xe_get_default_alignment(bops->fd));
> +			buf->handle = xe_bo_create_flags(bops->fd, 0, size, region);
> +		}
> +	}
>  
>  	/* Store gem bo size */
>  	buf->bo_size = size;
> @@ -930,8 +950,12 @@ void intel_buf_init(struct buf_ops *bops,
>  		    int width, int height, int bpp, int alignment,
>  		    uint32_t tiling, uint32_t compression)
>  {
> +	uint64_t region;
> +
> +	region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY :
> +						     system_memory(bops->fd);
>  	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
> -			 tiling, compression, 0, 0, I915_SYSTEM_MEMORY);
> +			 tiling, compression, 0, 0, region);
>  
>  	intel_buf_set_ownership(buf, true);
>  }
> @@ -945,7 +969,7 @@ void intel_buf_init_in_region(struct buf_ops *bops,
>  			      struct intel_buf *buf,
>  			      int width, int height, int bpp, int alignment,
>  			      uint32_t tiling, uint32_t compression,
> -			      uint32_t region)
> +			      uint64_t region)
>  {
>  	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
>  			 tiling, compression, 0, 0, region);
> @@ -1010,6 +1034,43 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
>  			 req_tiling, compression, 0, 0, -1);
>  }
>  
> +/**
> + * intel_buf_init_full
> + * @bops: pointer to buf_ops
> + * @handle: BO handle created by the caller
> + * @buf: pointer to intel_buf structure to be filled
> + * @width: surface width
> + * @height: surface height
> + * @bpp: bits-per-pixel (8 / 16 / 32 / 64)
> + * @alignment: alignment of the stride for linear surfaces
> + * @req_tiling: surface tiling
> + * @compression: surface compression type
> + * @size: real bo size
> + * @stride: bo stride
> + * @region: region
> + *
> + * Function configures BO handle within intel_buf structure passed by the caller
> + * (with all its metadata - width, height, ...). Useful if BO was created
> + * outside. Allows passing real size which caller is aware of.
> + *
> + * Note: intel_buf_close() can be used because intel_buf is aware it is not
> + * buffer owner so it won't close it underneath.
> + */
> +void intel_buf_init_full(struct buf_ops *bops,
> +			 uint32_t handle,
> +			 struct intel_buf *buf,
> +			 int width, int height,
> +			 int bpp, int alignment,
> +			 uint32_t req_tiling,
> +			 uint32_t compression,
> +			 uint64_t size,
> +			 int stride,
> +			 uint64_t region)
> +{
> +	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
> +			 req_tiling, compression, size, stride, region);
> +}
> +
>  /**
>   * intel_buf_create
>   * @bops: pointer to buf_ops
> @@ -1084,6 +1145,20 @@ struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
>  							 uint32_t compression,
>  							 uint64_t size,
>  							 int stride)
> +{
> +	return intel_buf_create_full(bops, handle, width, height, bpp, alignment,
> +				     req_tiling, compression, size, stride, -1);
> +}
> +
> +struct intel_buf *intel_buf_create_full(struct buf_ops *bops,
> +					uint32_t handle,
> +					int width, int height,
> +					int bpp, int alignment,
> +					uint32_t req_tiling,
> +					uint32_t compression,
> +					uint64_t size,
> +					int stride,
> +					uint64_t region)
>  {
>  	struct intel_buf *buf;
>  
> @@ -1093,12 +1168,11 @@ struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
>  	igt_assert(buf);
>  
>  	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
> -			 req_tiling, compression, size, stride, -1);
> +			 req_tiling, compression, size, stride, region);
>  
>  	return buf;
>  }
>  
> -
>  /**
>   * intel_buf_destroy
>   * @buf: intel_buf
> @@ -1420,8 +1494,24 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency)
>  
>  	bops->fd = fd;
>  	bops->intel_gen = generation;
> -	igt_debug("generation: %d, supported tiles: 0x%02x\n",
> -		  bops->intel_gen, bops->supported_tiles);
> +	bops->driver = is_i915_device(fd) ? INTEL_DRIVER_I915 :
> +					    is_xe_device(fd) ? INTEL_DRIVER_XE : 0;
> +	igt_assert(bops->driver);
> +	igt_debug("generation: %d, supported tiles: 0x%02x, driver: %s\n",
> +		  bops->intel_gen, bops->supported_tiles,
> +		  bops->driver == INTEL_DRIVER_I915 ? "i915" : "xe");
> +
> +	/* No tiling support in XE. */
> +	if (bops->driver == INTEL_DRIVER_XE) {
> +		bops->supported_hw_tiles = TILE_NONE;
> +
> +		bops->linear_to_x = copy_linear_to_x;
> +		bops->x_to_linear = copy_x_to_linear;
> +		bops->linear_to_y = copy_linear_to_y;
> +		bops->y_to_linear = copy_y_to_linear;
> +
> +		return bops;
> +	}
>  
>  	/*
>  	 * Warning!
> @@ -1569,6 +1659,19 @@ int buf_ops_get_fd(struct buf_ops *bops)
>  	return bops->fd;
>  }
>  
> +/**
> + * buf_ops_get_driver
> + * @bops: pointer to buf_ops
> + *
> + * Returns: intel driver enum value
> + */
> +enum intel_driver buf_ops_get_driver(struct buf_ops *bops)
> +{
> +	igt_assert(bops);
> +
> +	return bops->driver;
> +}
> +
>  /**
>   * buf_ops_set_software_tiling
>   * @bops: pointer to buf_ops
> diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
> index 25b4307399..0037548a3b 100644
> --- a/lib/intel_bufops.h
> +++ b/lib/intel_bufops.h
> @@ -43,6 +43,7 @@ struct intel_buf {
>  	} addr;
>  
>  	uint64_t bo_size;
> +	uint64_t region;
>  
>  	/* Tracking */
>  	struct intel_bb *ibb;
> @@ -109,6 +110,7 @@ struct buf_ops *buf_ops_create(int fd);
>  struct buf_ops *buf_ops_create_with_selftest(int fd);
>  void buf_ops_destroy(struct buf_ops *bops);
>  int buf_ops_get_fd(struct buf_ops *bops);
> +enum intel_driver buf_ops_get_driver(struct buf_ops *bops);
>  
>  bool buf_ops_set_software_tiling(struct buf_ops *bops,
>  				 uint32_t tiling,
> @@ -135,7 +137,7 @@ void intel_buf_init_in_region(struct buf_ops *bops,
>  			      struct intel_buf *buf,
>  			      int width, int height, int bpp, int alignment,
>  			      uint32_t tiling, uint32_t compression,
> -			      uint32_t region);
> +			      uint64_t region);
>  void intel_buf_close(struct buf_ops *bops, struct intel_buf *buf);
>  
>  void intel_buf_init_using_handle(struct buf_ops *bops,
> @@ -143,6 +145,16 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
>  				 struct intel_buf *buf,
>  				 int width, int height, int bpp, int alignment,
>  				 uint32_t req_tiling, uint32_t compression);
> +void intel_buf_init_full(struct buf_ops *bops,
> +			 uint32_t handle,
> +			 struct intel_buf *buf,
> +			 int width, int height,
> +			 int bpp, int alignment,
> +			 uint32_t req_tiling,
> +			 uint32_t compression,
> +			 uint64_t size,
> +			 int stride,
> +			 uint64_t region);
>  
>  struct intel_buf *intel_buf_create(struct buf_ops *bops,
>  				   int width, int height,
> @@ -164,6 +176,16 @@ struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
>  							 uint32_t compression,
>  							 uint64_t size,
>  							 int stride);
> +
> +struct intel_buf *intel_buf_create_full(struct buf_ops *bops,
> +					uint32_t handle,
> +					int width, int height,
> +					int bpp, int alignment,
> +					uint32_t req_tiling,
> +					uint32_t compression,
> +					uint64_t size,
> +					int stride,
> +					uint64_t region);
>  void intel_buf_destroy(struct intel_buf *buf);
>  
>  static inline void intel_buf_set_pxp(struct intel_buf *buf, bool new_pxp_state)
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path Zbigniew Kempczyński
@ 2023-04-28  7:50   ` Kamil Konieczny
  2023-04-28  8:44   ` Manszewski, Christoph
  1 sibling, 0 replies; 34+ messages in thread
From: Kamil Konieczny @ 2023-04-28  7:50 UTC (permalink / raw)
  To: igt-dev

On 2023-04-28 at 08:22:15 +0200, Zbigniew Kempczyński wrote:
> On the reset path we recreate the bo backing the batch (to avoid stalls),
> so we should reacquire its offset too. At the moment the simple allocator
> returns the same offset (so unfortunately we will stall), but the reloc
> allocator gives us a new one (so we avoid the stall).
> 
> I noticed this was missing while running the xe_intel_bb test, where in
> reloc mode I got an unexpected result (a direct consequence of reusing the
> same offset, which pointed to the old batch instead of the new one).
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  lib/intel_batchbuffer.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 7dbd6dd582..99b0b61585 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -1280,8 +1280,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>  	gem_close(ibb->fd, ibb->handle);
>  	ibb->handle = gem_create(ibb->fd, ibb->size);
>  
> -	/* Keep address for bb in reloc mode and RANDOM allocator */
> -	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
> +	/* Reacquire offset for RELOC and SIMPLE */
> +	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> +	    ibb->allocator_type == INTEL_ALLOCATOR_RELOC)
>  		ibb->batch_offset = __intel_bb_get_offset(ibb,
>  							  ibb->handle,
>  							  ibb->size,
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs Zbigniew Kempczyński
@ 2023-04-28  7:51   ` Kamil Konieczny
  2023-04-28  8:51   ` Manszewski, Christoph
  1 sibling, 0 replies; 34+ messages in thread
From: Kamil Konieczny @ 2023-04-28  7:51 UTC (permalink / raw)
  To: igt-dev

On 2023-04-28 at 08:22:16 +0200, Zbigniew Kempczyński wrote:
> After the RANDOM pseudo-allocator was removed and the RELOC allocator
> became stateful, the docs were left untouched and still describe the old
> code. Fix this before adding xe code.
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  lib/intel_batchbuffer.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 99b0b61585..306b7650e9 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -836,7 +836,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>   * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
>   *
>   * intel-bb assumes it will work in one of two modes - with relocations or
> - * with using allocator (currently RANDOM and SIMPLE are implemented).
> + * with using allocator (currently RELOC and SIMPLE are implemented).
>   * Some description is required to describe how they maintain the addresses.
>   *
>   * Before entering into each scenarios generic rule is intel-bb keeps objects
> @@ -854,10 +854,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>   *
>   * This mode is valid only for ppgtt. Addresses are acquired from allocator
>   * and softpinned. intel-bb cache must be then coherent with allocator
> - * (simple is coherent, random is not due to fact we don't keep its state).
> + * (simple is coherent, reloc partially [doesn't support address reservation]).
>   * When we do intel-bb reset with purging cache it has to reacquire addresses
>   * from allocator (allocator should return same address - what is true for
> - * simple allocator and false for random as mentioned before).
> + * simple and reloc allocators).
>   *
>   * If we do reset without purging caches we use addresses from intel-bb cache
>   * during execbuf objects construction.
> @@ -967,7 +967,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>   * @size: size of the batchbuffer
>   * @start: allocator vm start address
>   * @end: allocator vm start address
> - * @allocator_type: allocator type, SIMPLE, RANDOM, ...
> + * @allocator_type: allocator type, SIMPLE, RELOC, ...
>   * @strategy: allocation strategy
>   *
>   * Creates bb with context passed in @ctx, size in @size and allocator type
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb Zbigniew Kempczyński
@ 2023-04-28  7:53   ` Kamil Konieczny
  2023-04-28  8:40   ` Manszewski, Christoph
  1 sibling, 0 replies; 34+ messages in thread
From: Kamil Konieczny @ 2023-04-28  7:53 UTC (permalink / raw)
  To: igt-dev

On 2023-04-28 at 08:22:17 +0200, Zbigniew Kempczyński wrote:
> The intention behind creating intel-bb was to replace libdrm for i915.
> Since a lot of code relies on it (kms for example), the most rational way
> forward is to extend it with an Xe path.
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  lib/intel_batchbuffer.c | 336 ++++++++++++++++++++++++++++++++--------
>  lib/intel_batchbuffer.h |   6 +
>  2 files changed, 281 insertions(+), 61 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 306b7650e9..38ad792e55 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -28,18 +28,22 @@
>  #include <search.h>
>  #include <glib.h>
>  
> +#include "gpgpu_fill.h"
> +#include "huc_copy.h"
>  #include "i915/gem_create.h"
> +#include "i915/gem_mman.h"
> +#include "i915/i915_blt.h"
> +#include "igt_aux.h"
> +#include "igt_syncobj.h"
>  #include "intel_batchbuffer.h"
>  #include "intel_bufops.h"
>  #include "intel_chipset.h"
>  #include "media_fill.h"
>  #include "media_spin.h"
> -#include "i915/gem_mman.h"
> -#include "veboxcopy.h"
>  #include "sw_sync.h"
> -#include "gpgpu_fill.h"
> -#include "huc_copy.h"
> -#include "i915/i915_blt.h"
> +#include "veboxcopy.h"
> +#include "xe/xe_ioctl.h"
> +#include "xe/xe_query.h"
>  
>  #define BCS_SWCTRL 0x22200
>  #define BCS_SRC_Y (1 << 0)
> @@ -828,9 +832,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>  
>  /**
>   * __intel_bb_create:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>   * @ctx: context id
> - * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> + * @cfg: for i915 intel_ctx configuration, NULL for default context or legacy mode,
> + *       unused for xe
>   * @size: size of the batchbuffer
>   * @do_relocs: use relocations or allocator
>   * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
> @@ -842,7 +847,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>   * Before entering into each scenarios generic rule is intel-bb keeps objects
>   * and their offsets in the internal cache and reuses in subsequent execs.
>   *
> - * 1. intel-bb with relocations
> + * 1. intel-bb with relocations (i915 only)
>   *
>   * Creating new intel-bb adds handle to cache implicitly and sets its address
>   * to 0. Objects added to intel-bb later also have address 0 set for first run.
> @@ -850,11 +855,12 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>   * works in reloc mode addresses are only suggestion to the driver and we
>   * cannot be sure they won't change at next exec.
>   *
> - * 2. with allocator
> + * 2. with allocator (i915 or xe)
>   *
>   * This mode is valid only for ppgtt. Addresses are acquired from allocator
> - * and softpinned. intel-bb cache must be then coherent with allocator
> - * (simple is coherent, reloc partially [doesn't support address reservation]).
> + * and softpinned (i915) or vm-binded (xe). intel-bb cache must be then
> + * coherent with allocator (simple is coherent, reloc partially [doesn't
> + * support address reservation]).
>   * When we do intel-bb reset with purging cache it has to reacquire addresses
>   * from allocator (allocator should return same address - what is true for
>   * simple and reloc allocators).
> @@ -883,48 +889,75 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>  
>  	igt_assert(ibb);
>  
> -	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
>  	ibb->devid = intel_get_drm_devid(fd);
>  	ibb->gen = intel_gen(ibb->devid);
> +	ibb->ctx = ctx;
> +
> +	ibb->fd = fd;
> +	ibb->driver = is_i915_device(fd) ? INTEL_DRIVER_I915 :
> +					   is_xe_device(fd) ? INTEL_DRIVER_XE : 0;
> +	igt_assert(ibb->driver);
>  
>  	/*
>  	 * If we don't have full ppgtt driver can change our addresses
>  	 * so allocator is useless in this case. Just enforce relocations
>  	 * for such gens and don't use allocator at all.
>  	 */
> -	if (!ibb->uses_full_ppgtt)
> -		do_relocs = true;
> +	if (ibb->driver == INTEL_DRIVER_I915) {
> +		ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
> +		ibb->alignment = gem_detect_safe_alignment(fd);
> +		ibb->gtt_size = gem_aperture_size(fd);
> +		ibb->handle = gem_create(fd, size);
>  
> -	/*
> -	 * For softpin mode allocator has full control over offsets allocation
> -	 * so we want kernel to not interfere with this.
> -	 */
> -	if (do_relocs)
> -		ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
> +		if (!ibb->uses_full_ppgtt)
> +			do_relocs = true;
> +
> +		/*
> +		 * For softpin mode allocator has full control over offsets allocation
> +		 * so we want kernel to not interfere with this.
> +		 */
> +		if (do_relocs) {
> +			ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
> +			allocator_type = INTEL_ALLOCATOR_NONE;
> +		} else {
> +			/* Use safe start offset instead assuming 0x0 is safe */
> +			start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
> +
> +			/* if relocs are set we won't use an allocator */
> +			ibb->allocator_handle =
> +				intel_allocator_open_full(fd, ctx, start, end,
> +							  allocator_type,
> +							  strategy, 0);
> +		}
>  
> -	/* Use safe start offset instead assuming 0x0 is safe */
> -	start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
> +		ibb->vm_id = 0;
> +	} else {
> +		igt_assert(!do_relocs);
> +
> +		ibb->alignment = xe_get_default_alignment(fd);
> +		size = ALIGN(size, ibb->alignment);
> +		ibb->handle = xe_bo_create_flags(fd, 0, size, vram_if_possible(fd, 0));
> +		ibb->gtt_size = 1ull << xe_va_bits(fd);
> +
> +		if (!ctx)
> +			ctx = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> +
> +		ibb->uses_full_ppgtt = true;
> +		ibb->allocator_handle =
> +			intel_allocator_open_full(fd, ctx, start, end,
> +						  allocator_type, strategy,
> +						  ibb->alignment);
> +		ibb->vm_id = ctx;
> +		ibb->last_engine = ~0U;
> +	}
>  
> -	/* if relocs are set we won't use an allocator */
> -	if (do_relocs)
> -		allocator_type = INTEL_ALLOCATOR_NONE;
> -	else
> -		ibb->allocator_handle = intel_allocator_open_full(fd, ctx,
> -								  start, end,
> -								  allocator_type,
> -								  strategy, 0);
>  	ibb->allocator_type = allocator_type;
>  	ibb->allocator_strategy = strategy;
>  	ibb->allocator_start = start;
>  	ibb->allocator_end = end;
> -
> -	ibb->fd = fd;
>  	ibb->enforce_relocs = do_relocs;
> -	ibb->handle = gem_create(fd, size);
> +
>  	ibb->size = size;
> -	ibb->alignment = gem_detect_safe_alignment(fd);
> -	ibb->ctx = ctx;
> -	ibb->vm_id = 0;
>  	ibb->batch = calloc(1, size);
>  	igt_assert(ibb->batch);
>  	ibb->ptr = ibb->batch;
> @@ -937,7 +970,6 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>  		memcpy(ibb->cfg, cfg, sizeof(*cfg));
>  	}
>  
> -	ibb->gtt_size = gem_aperture_size(fd);
>  	if ((ibb->gtt_size - 1) >> 32)
>  		ibb->supports_48b_address = true;
>  
> @@ -961,7 +993,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>  
>  /**
>   * intel_bb_create_full:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>   * @ctx: context
>   * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>   * @size: size of the batchbuffer
> @@ -992,7 +1024,7 @@ struct intel_bb *intel_bb_create_full(int fd, uint32_t ctx,
>  
>  /**
>   * intel_bb_create_with_allocator:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>   * @ctx: context
>   * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>   * @size: size of the batchbuffer
> @@ -1027,7 +1059,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
>  
>  /**
>   * intel_bb_create:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>   * @size: size of the batchbuffer
>   *
>   * Creates bb with default context.
> @@ -1047,7 +1079,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
>   */
>  struct intel_bb *intel_bb_create(int fd, uint32_t size)
>  {
> -	bool relocs = gem_has_relocations(fd);
> +	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
>  
>  	return __intel_bb_create(fd, 0, NULL, size,
>  				 relocs && !aux_needs_softpin(fd), 0, 0,
> @@ -1057,7 +1089,7 @@ struct intel_bb *intel_bb_create(int fd, uint32_t size)
>  
>  /**
>   * intel_bb_create_with_context:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>   * @ctx: context id
>   * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>   * @size: size of the batchbuffer
> @@ -1073,7 +1105,7 @@ struct intel_bb *
>  intel_bb_create_with_context(int fd, uint32_t ctx,
>  			     const intel_ctx_cfg_t *cfg, uint32_t size)
>  {
> -	bool relocs = gem_has_relocations(fd);
> +	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
>  
>  	return __intel_bb_create(fd, ctx, cfg, size,
>  				 relocs && !aux_needs_softpin(fd), 0, 0,
> @@ -1083,7 +1115,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
>  
>  /**
>   * intel_bb_create_with_relocs:
> - * @fd: drm fd
> + * @fd: drm fd - i915
>   * @size: size of the batchbuffer
>   *
>   * Creates bb which will disable passing addresses.
> @@ -1095,7 +1127,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
>   */
>  struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
>  {
> -	igt_require(gem_has_relocations(fd));
> +	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
>  
>  	return __intel_bb_create(fd, 0, NULL, size, true, 0, 0,
>  				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
> @@ -1103,7 +1135,7 @@ struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
>  
>  /**
>   * intel_bb_create_with_relocs_and_context:
> - * @fd: drm fd
> + * @fd: drm fd - i915
>   * @ctx: context
>   * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>   * @size: size of the batchbuffer
> @@ -1120,7 +1152,7 @@ intel_bb_create_with_relocs_and_context(int fd, uint32_t ctx,
>  					const intel_ctx_cfg_t *cfg,
>  					uint32_t size)
>  {
> -	igt_require(gem_has_relocations(fd));
> +	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
>  
>  	return __intel_bb_create(fd, ctx, cfg, size, true, 0, 0,
>  				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
> @@ -1221,12 +1253,76 @@ void intel_bb_destroy(struct intel_bb *ibb)
>  
>  	if (ibb->fence >= 0)
>  		close(ibb->fence);
> +	if (ibb->engine_syncobj)
> +		syncobj_destroy(ibb->fd, ibb->engine_syncobj);
> +	if (ibb->vm_id && !ibb->ctx)
> +		xe_vm_destroy(ibb->fd, ibb->vm_id);
>  
>  	free(ibb->batch);
>  	free(ibb->cfg);
>  	free(ibb);
>  }
>  
> +static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
> +						   uint32_t op, uint32_t region)
> +{
> +	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
> +	struct drm_xe_vm_bind_op *bind_ops, *ops;
> +	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
> +
> +	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
> +	igt_assert(bind_ops);
> +
> +	igt_debug("bind_ops: %s\n", set_obj ? "MAP" : "UNMAP");
> +	for (int i = 0; i < ibb->num_objects; i++) {
> +		ops = &bind_ops[i];
> +
> +		if (set_obj)
> +			ops->obj = objects[i]->handle;
> +
> +		ops->op = op;
> +		ops->obj_offset = 0;
> +		ops->addr = objects[i]->offset;
> +		ops->range = objects[i]->rsvd1;
> +		ops->region = region;
> +
> +		igt_debug("  [%d]: handle: %u, offset: %llx, size: %llx\n",
> +			  i, ops->obj, (long long)ops->addr, (long long)ops->range);
> +	}
> +
> +	return bind_ops;
> +}
> +
> +static void __unbind_xe_objects(struct intel_bb *ibb)
> +{
> +	struct drm_xe_sync syncs[2] = {
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ },
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +	};
> +	int ret;
> +
> +	syncs[0].handle = ibb->engine_syncobj;
> +	syncs[1].handle = syncobj_create(ibb->fd, 0);
> +
> +	if (ibb->num_objects > 1) {
> +		struct drm_xe_vm_bind_op *bind_ops;
> +		uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
> +
> +		bind_ops = xe_alloc_bind_ops(ibb, op, 0);
> +		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> +				 ibb->num_objects, syncs, 2);
> +		free(bind_ops);
> +	} else {
> +		xe_vm_unbind_async(ibb->fd, ibb->vm_id, 0, 0,
> +				   ibb->batch_offset, ibb->size, syncs, 2);
> +	}
> +	ret = syncobj_wait_err(ibb->fd, &syncs[1].handle, 1, INT64_MAX, 0);
> +	igt_assert_eq(ret, 0);
> +	syncobj_destroy(ibb->fd, syncs[1].handle);
> +
> +	ibb->xe_bound = false;
> +}
> +
>  /*
>   * intel_bb_reset:
>   * @ibb: pointer to intel_bb
> @@ -1258,6 +1354,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>  	for (i = 0; i < ibb->num_objects; i++)
>  		ibb->objects[i]->flags &= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
>  
> +	if (is_xe_device(ibb->fd) && ibb->xe_bound)
> +		__unbind_xe_objects(ibb);
> +
>  	__intel_bb_destroy_relocations(ibb);
>  	__intel_bb_destroy_objects(ibb);
>  	__reallocate_objects(ibb);
> @@ -1278,7 +1377,11 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>  				       ibb->size);
>  
>  	gem_close(ibb->fd, ibb->handle);
> -	ibb->handle = gem_create(ibb->fd, ibb->size);
> +	if (ibb->driver == INTEL_DRIVER_I915)
> +		ibb->handle = gem_create(ibb->fd, ibb->size);
> +	else
> +		ibb->handle = xe_bo_create_flags(ibb->fd, 0, ibb->size,
> +						 vram_if_possible(ibb->fd, 0));
>  
>  	/* Reacquire offset for RELOC and SIMPLE */
>  	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> @@ -1305,13 +1408,19 @@ int intel_bb_sync(struct intel_bb *ibb)
>  {
>  	int ret;
>  
> -	if (ibb->fence < 0)
> +	if (ibb->fence < 0 && !ibb->engine_syncobj)
>  		return 0;
>  
> -	ret = sync_fence_wait(ibb->fence, -1);
> -	if (ret == 0) {
> -		close(ibb->fence);
> -		ibb->fence = -1;
> +	if (ibb->fence >= 0) {
> +		ret = sync_fence_wait(ibb->fence, -1);
> +		if (ret == 0) {
> +			close(ibb->fence);
> +			ibb->fence = -1;
> +		}
> +	} else {
> +		igt_assert_neq(ibb->engine_syncobj, 0);
> +		ret = syncobj_wait_err(ibb->fd, &ibb->engine_syncobj,
> +				       1, INT64_MAX, 0);
>  	}
>  
>  	return ret;
> @@ -1502,7 +1611,7 @@ static void __remove_from_objects(struct intel_bb *ibb,
>  }
>  
>  /**
> - * intel_bb_add_object:
> + * __intel_bb_add_object:
>   * @ibb: pointer to intel_bb
>   * @handle: which handle to add to objects array
>   * @size: object size
> @@ -1514,9 +1623,9 @@ static void __remove_from_objects(struct intel_bb *ibb,
>   * in the object tree. When object is a render target it has to
>   * be marked with EXEC_OBJECT_WRITE flag.
>   */
> -struct drm_i915_gem_exec_object2 *
> -intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> -		    uint64_t offset, uint64_t alignment, bool write)
> +static struct drm_i915_gem_exec_object2 *
> +__intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> +		      uint64_t offset, uint64_t alignment, bool write)
>  {
>  	struct drm_i915_gem_exec_object2 *object;
>  
> @@ -1524,8 +1633,12 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
>  		   || ALIGN(offset, alignment) == offset);
>  	igt_assert(is_power_of_two(alignment));
>  
> +	if (ibb->driver == INTEL_DRIVER_I915)
> +		alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
> +	else
> +		alignment = max_t(uint64_t, ibb->alignment, alignment);
> +
>  	object = __add_to_cache(ibb, handle);
> -	alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
>  	__add_to_objects(ibb, object);
>  
>  	/*
> @@ -1585,9 +1698,27 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
>  	if (ibb->allows_obj_alignment)
>  		object->alignment = alignment;
>  
> +	if (ibb->driver == INTEL_DRIVER_XE) {
> +		object->alignment = alignment;
> +		object->rsvd1 = size;
> +	}
> +
>  	return object;
>  }
>  
> +struct drm_i915_gem_exec_object2 *
> +intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> +		    uint64_t offset, uint64_t alignment, bool write)
> +{
> +	struct drm_i915_gem_exec_object2 *obj = NULL;
> +
> +	obj = __intel_bb_add_object(ibb, handle, size, offset,
> +				    alignment, write);
> +	igt_assert(obj);
> +
> +	return obj;
> +}
> +
>  bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
>  			    uint64_t offset, uint64_t size)
>  {
> @@ -2136,6 +2267,82 @@ static void update_offsets(struct intel_bb *ibb,
>  }
>  
>  #define LINELEN 76
> +
> +static int
> +__xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
> +{
> +	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
> +	uint32_t engine_id;
> +	struct drm_xe_sync syncs[2] = {
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +	};
> +	struct drm_xe_vm_bind_op *bind_ops;
> +	void *map;
> +
> +	igt_assert_eq(ibb->num_relocs, 0);
> +	igt_assert_eq(ibb->xe_bound, false);
> +
> +	if (ibb->last_engine != engine) {
> +		struct drm_xe_engine_class_instance inst = { };
> +
> +		inst.engine_instance =
> +			(flags & I915_EXEC_BSD_MASK) >> I915_EXEC_BSD_SHIFT;
> +
> +		switch (flags & I915_EXEC_RING_MASK) {
> +		case I915_EXEC_DEFAULT:
> +		case I915_EXEC_BLT:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_COPY;
> +			break;
> +		case I915_EXEC_BSD:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_DECODE;
> +			break;
> +		case I915_EXEC_RENDER:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_RENDER;
> +			break;
> +		case I915_EXEC_VEBOX:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE;
> +			break;
> +		default:
> +			igt_assert_f(false, "Unknown engine: %x", (uint32_t) flags);
> +		}
> +		igt_debug("Run on %s\n", xe_engine_class_string(inst.engine_class));
> +
> +		ibb->engine_id = engine_id =
> +			xe_engine_create(ibb->fd, ibb->vm_id, &inst, 0);
> +	} else {
> +		engine_id = ibb->engine_id;
> +	}
> +	ibb->last_engine = engine;
> +
> +	map = xe_bo_map(ibb->fd, ibb->handle, ibb->size);
> +	memcpy(map, ibb->batch, ibb->size);
> +	gem_munmap(map, ibb->size);
> +
> +	syncs[0].handle = syncobj_create(ibb->fd, 0);
> +	if (ibb->num_objects > 1) {
> +		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
> +		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> +				 ibb->num_objects, syncs, 1);
> +		free(bind_ops);
> +	} else {
> +		xe_vm_bind_async(ibb->fd, ibb->vm_id, 0, ibb->handle, 0,
> +				 ibb->batch_offset, ibb->size, syncs, 1);
> +	}
> +	ibb->xe_bound = true;
> +
> +	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
> +	syncs[1].handle = ibb->engine_syncobj;
> +
> +	xe_exec_sync(ibb->fd, engine_id, ibb->batch_offset, syncs, 2);
> +
> +	if (sync)
> +		intel_bb_sync(ibb);
> +
> +	return 0;
> +}
> +
>  /*
>   * __intel_bb_exec:
>   * @ibb: pointer to intel_bb
> @@ -2221,7 +2428,7 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
>  /**
>   * intel_bb_exec:
>   * @ibb: pointer to intel_bb
> - * @end_offset: offset of the last instruction in the bb
> + * @end_offset: offset of the last instruction in the bb (for i915)
>   * @flags: flags passed directly to execbuf
>   * @sync: if true wait for execbuf completion, otherwise caller is responsible
>   * to wait for completion
> @@ -2231,7 +2438,13 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
>  void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
>  		   uint64_t flags, bool sync)
>  {
> -	igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
> +	if (ibb->dump_base64)
> +		intel_bb_dump_base64(ibb, LINELEN);
> +
> +	if (ibb->driver == INTEL_DRIVER_I915)
> +		igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
> +	else
> +		igt_assert_eq(__xe_bb_exec(ibb, flags, sync), 0);
>  }
>  
>  /**
> @@ -2636,7 +2849,8 @@ static void __intel_bb_reinit_alloc(struct intel_bb *ibb)
>  							  ibb->allocator_start, ibb->allocator_end,
>  							  ibb->allocator_type,
>  							  ibb->allocator_strategy,
> -							  0);
> +							  ibb->alignment);
> +
>  	intel_bb_reset(ibb, true);
>  }
>  
> diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
> index 4978b6fb29..9a58fb7809 100644
> --- a/lib/intel_batchbuffer.h
> +++ b/lib/intel_batchbuffer.h
> @@ -246,6 +246,7 @@ struct intel_bb {
>  	uint8_t allocator_type;
>  	enum allocator_strategy allocator_strategy;
>  
> +	enum intel_driver driver;
>  	int fd;
>  	unsigned int gen;
>  	bool debug;
> @@ -268,6 +269,11 @@ struct intel_bb {
>  	uint32_t ctx;
>  	uint32_t vm_id;
>  
> +	bool xe_bound;
> +	uint32_t engine_syncobj;
> +	uint32_t engine_id;
> +	uint32_t last_engine;
> +
>  	/* Context configuration */
>  	intel_ctx_cfg_t *cfg;
>  
> -- 
> 2.34.1
> 


* Re: [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check if intel-bb Xe support correctness
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check if intel-bb Xe support correctness Zbigniew Kempczyński
@ 2023-04-28  7:58   ` Kamil Konieczny
  2023-04-28  8:18     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 34+ messages in thread
From: Kamil Konieczny @ 2023-04-28  7:58 UTC (permalink / raw)
  To: igt-dev

Hi,

On 2023-04-28 at 08:22:18 +0200, Zbigniew Kempczyński wrote:
> As we're reusing intel-bb for Xe, we need to check that it behaves correctly
> for buffer handling and submission.
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

imho it will be clearer to state that you added a new test here,
so instead of:

tests/xe_intel_bb: Check if intel-bb Xe support correctness

this will be better:

tests/xe_intel_bb: add new test for intel-bb on Xe platform

Then you can describe the rationale in the commit message. It is up to you if you
would like to change this, so with or without it:

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

--
Kamil

> 
> ---
> v5: to run test quick use system memory instead of vram (mapping
>     on system is wb)
> ---
>  tests/meson.build      |    1 +
>  tests/xe/xe_intel_bb.c | 1185 ++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 1186 insertions(+)
>  create mode 100644 tests/xe/xe_intel_bb.c
> 
> diff --git a/tests/meson.build b/tests/meson.build
> index 8909cfa8fd..b026fac48b 100644
> --- a/tests/meson.build
> +++ b/tests/meson.build
> @@ -256,6 +256,7 @@ xe_progs = [
>  	'xe_exec_threads',
>  	'xe_guc_pc',
>  	'xe_huc_copy',
> +	'xe_intel_bb',
>  	'xe_mmap',
>  	'xe_mmio',
>  	'xe_module_load',
> diff --git a/tests/xe/xe_intel_bb.c b/tests/xe/xe_intel_bb.c
> new file mode 100644
> index 0000000000..35d61608e1
> --- /dev/null
> +++ b/tests/xe/xe_intel_bb.c
> @@ -0,0 +1,1185 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include <cairo.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <glib.h>
> +#include <inttypes.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/ioctl.h>
> +#include <sys/stat.h>
> +#include <unistd.h>
> +#include <zlib.h>
> +
> +#include "igt.h"
> +#include "igt_crc.h"
> +#include "intel_bufops.h"
> +#include "xe/xe_ioctl.h"
> +#include "xe/xe_query.h"
> +
> +/**
> + * TEST: Basic tests for intel-bb xe functionality
> + * Category: Software building block
> + * Sub-category: xe
> + * Test category: functionality test
> + */
> +
> +#define PAGE_SIZE 4096
> +
> +#define WIDTH	64
> +#define HEIGHT	64
> +#define STRIDE	(WIDTH * 4)
> +#define SIZE	(HEIGHT * STRIDE)
> +
> +#define COLOR_00	0x00
> +#define COLOR_33	0x33
> +#define COLOR_77	0x77
> +#define COLOR_CC	0xcc
> +
> +IGT_TEST_DESCRIPTION("xe_intel_bb API check.");
> +
> +static bool debug_bb;
> +static bool write_png;
> +static bool buf_info;
> +static bool print_base64;
> +
> +static void *alloc_aligned(uint64_t size)
> +{
> +	void *p;
> +
> +	igt_assert_eq(posix_memalign(&p, 16, size), 0);
> +
> +	return p;
> +}
> +
> +static void fill_buf(struct intel_buf *buf, uint8_t color)
> +{
> +	uint8_t *ptr;
> +	int xe = buf_ops_get_fd(buf->bops);
> +	int i;
> +
> +	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
> +
> +	for (i = 0; i < buf->surface[0].size; i++)
> +		ptr[i] = color;
> +
> +	munmap(ptr, buf->surface[0].size);
> +}
> +
> +static void check_buf(struct intel_buf *buf, uint8_t color)
> +{
> +	uint8_t *ptr;
> +	int xe = buf_ops_get_fd(buf->bops);
> +	int i;
> +
> +	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
> +
> +	for (i = 0; i < buf->surface[0].size; i++)
> +		igt_assert(ptr[i] == color);
> +
> +	munmap(ptr, buf->surface[0].size);
> +}
> +
> +static struct intel_buf *
> +create_buf(struct buf_ops *bops, int width, int height, uint8_t color)
> +{
> +	struct intel_buf *buf;
> +
> +	buf = calloc(1, sizeof(*buf));
> +	igt_assert(buf);
> +
> +	intel_buf_init(bops, buf, width/4, height, 32, 0, I915_TILING_NONE, 0);
> +	fill_buf(buf, color);
> +
> +	return buf;
> +}
> +
> +static void print_buf(struct intel_buf *buf, const char *name)
> +{
> +	uint8_t *ptr;
> +	int xe = buf_ops_get_fd(buf->bops);
> +
> +	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
> +
> +	igt_debug("[%s] Buf handle: %d, size: %" PRIu64
> +		  ", v: 0x%02x, presumed_addr: %p\n",
> +		  name, buf->handle, buf->surface[0].size, ptr[0],
> +		  from_user_pointer(buf->addr.offset));
> +	munmap(ptr, buf->surface[0].size);
> +}
> +
> +/**
> + * SUBTEST: reset-bb
> + * Description: check bb reset
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void reset_bb(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	intel_bb_reset(ibb, false);
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: purge-bb
> + * Description: check bb reset == full (purge)
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void purge_bb(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_buf *buf;
> +	struct intel_bb *ibb;
> +	uint64_t offset0, offset1;
> +
> +	buf = intel_buf_create(bops, 512, 512, 32, 0, I915_TILING_NONE,
> +			       I915_COMPRESSION_NONE);
> +	ibb = intel_bb_create(xe, 4096);
> +	intel_bb_set_debug(ibb, true);
> +
> +	intel_bb_add_intel_buf(ibb, buf, false);
> +	offset0 = buf->addr.offset;
> +
> +	intel_bb_reset(ibb, true);
> +	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
> +
> +	intel_bb_add_intel_buf(ibb, buf, false);
> +	offset1 = buf->addr.offset;
> +
> +	igt_assert(offset0 == offset1);
> +
> +	intel_buf_destroy(buf);
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: simple-%s
> + * Description: Run simple bb xe %arg[1] test
> + * Run type: BAT
> + *
> + * arg[1]:
> + *
> + * @bb:     bb
> + * @bb-ctx: bb-ctx
> + */
> +static void simple_bb(struct buf_ops *bops, bool new_context)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	uint32_t ctx = 0;
> +
> +	ibb = intel_bb_create_with_allocator(xe, ctx, NULL, PAGE_SIZE,
> +					     INTEL_ALLOCATOR_SIMPLE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> +	intel_bb_ptr_align(ibb, 8);
> +
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +
> +	/* Check we're safe with reset and no double-free will occur */
> +	intel_bb_reset(ibb, true);
> +	intel_bb_reset(ibb, false);
> +	intel_bb_reset(ibb, true);
> +
> +	if (new_context) {
> +		ctx = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> +		intel_bb_destroy(ibb);
> +		ibb = intel_bb_create_with_context(xe, ctx, NULL, PAGE_SIZE);
> +		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> +		intel_bb_ptr_align(ibb, 8);
> +		intel_bb_exec(ibb, intel_bb_offset(ibb),
> +			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC,
> +			      true);
> +		xe_vm_destroy(xe, ctx);
> +	}
> +
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: bb-with-allocator
> + * Description: check bb with passed allocator
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void bb_with_allocator(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf *src, *dst;
> +	uint32_t ctx = 0;
> +
> +	ibb = intel_bb_create_with_allocator(xe, ctx, NULL, PAGE_SIZE,
> +					     INTEL_ALLOCATOR_SIMPLE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
> +			       I915_COMPRESSION_NONE);
> +	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
> +			       I915_COMPRESSION_NONE);
> +
> +	intel_bb_add_intel_buf(ibb, src, false);
> +	intel_bb_add_intel_buf(ibb, dst, true);
> +	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
> +	intel_bb_remove_intel_buf(ibb, src);
> +	intel_bb_remove_intel_buf(ibb, dst);
> +
> +	intel_buf_destroy(src);
> +	intel_buf_destroy(dst);
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: lot-of-buffers
> + * Description: check running bb with many buffers
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +#define NUM_BUFS 500
> +static void lot_of_buffers(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf *buf[NUM_BUFS];
> +	int i;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> +	intel_bb_ptr_align(ibb, 8);
> +
> +	for (i = 0; i < NUM_BUFS; i++) {
> +		buf[i] = intel_buf_create(bops, 4096, 1, 8, 0, I915_TILING_NONE,
> +					  I915_COMPRESSION_NONE);
> +		if (i % 2)
> +			intel_bb_add_intel_buf(ibb, buf[i], false);
> +		else
> +			intel_bb_add_intel_buf_with_alignment(ibb, buf[i],
> +							      0x4000, false);
> +	}
> +
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +
> +	for (i = 0; i < NUM_BUFS; i++)
> +		intel_buf_destroy(buf[i]);
> +
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: add-remove-objects
> + * Description: check bb object manipulation (add + remove)
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void add_remove_objects(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf *src, *mid, *dst;
> +	uint32_t offset;
> +	const uint32_t width = 512;
> +	const uint32_t height = 512;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	src = intel_buf_create(bops, width, height, 32, 0,
> +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> +	mid = intel_buf_create(bops, width, height, 32, 0,
> +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> +	dst = intel_buf_create(bops, width, height, 32, 0,
> +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> +
> +	intel_bb_add_intel_buf(ibb, src, false);
> +	intel_bb_add_intel_buf(ibb, mid, true);
> +	intel_bb_remove_intel_buf(ibb, mid);
> +	intel_bb_remove_intel_buf(ibb, mid);
> +	intel_bb_remove_intel_buf(ibb, mid);
> +	intel_bb_add_intel_buf(ibb, dst, true);
> +
> +	offset = intel_bb_emit_bbe(ibb);
> +	intel_bb_exec(ibb, offset,
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +
> +	intel_buf_destroy(src);
> +	intel_buf_destroy(mid);
> +	intel_buf_destroy(dst);
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: destroy-bb
> + * Description: check bb destroy/create
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void destroy_bb(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf *src, *mid, *dst;
> +	uint32_t offset;
> +	const uint32_t width = 512;
> +	const uint32_t height = 512;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	src = intel_buf_create(bops, width, height, 32, 0,
> +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> +	mid = intel_buf_create(bops, width, height, 32, 0,
> +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> +	dst = intel_buf_create(bops, width, height, 32, 0,
> +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> +
> +	intel_bb_add_intel_buf(ibb, src, false);
> +	intel_bb_add_intel_buf(ibb, mid, true);
> +	intel_bb_add_intel_buf(ibb, dst, true);
> +
> +	offset = intel_bb_emit_bbe(ibb);
> +	intel_bb_exec(ibb, offset,
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +
> +	/* Check destroy will detach intel_bufs */
> +	intel_bb_destroy(ibb);
> +	igt_assert(src->addr.offset == INTEL_BUF_INVALID_ADDRESS);
> +	igt_assert(src->ibb == NULL);
> +	igt_assert(mid->addr.offset == INTEL_BUF_INVALID_ADDRESS);
> +	igt_assert(mid->ibb == NULL);
> +	igt_assert(dst->addr.offset == INTEL_BUF_INVALID_ADDRESS);
> +	igt_assert(dst->ibb == NULL);
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	intel_bb_add_intel_buf(ibb, src, false);
> +	offset = intel_bb_emit_bbe(ibb);
> +	intel_bb_exec(ibb, offset,
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +
> +	intel_bb_destroy(ibb);
> +	intel_buf_destroy(src);
> +	intel_buf_destroy(mid);
> +	intel_buf_destroy(dst);
> +}
> +
> +/**
> + * SUBTEST: create-in-region
> + * Description: check size validation on available regions
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void create_in_region(struct buf_ops *bops, uint64_t region)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf buf = {};
> +	uint32_t handle, offset;
> +	uint64_t size;
> +	int width = 64;
> +	int height = 64;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	size = xe_min_page_size(xe, system_memory(xe));
> +	handle = xe_bo_create_flags(xe, 0, size, system_memory(xe));
> +	intel_buf_init_full(bops, handle, &buf,
> +			    width/4, height, 32, 0,
> +			    I915_TILING_NONE, 0,
> +			    size, 0, region);
> +	intel_buf_set_ownership(&buf, true);
> +
> +	intel_bb_add_intel_buf(ibb, &buf, false);
> +	offset = intel_bb_emit_bbe(ibb);
> +	intel_bb_exec(ibb, offset,
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +
> +	intel_buf_close(bops, &buf);
> +	intel_bb_destroy(ibb);
> +}
> +
> +static void __emit_blit(struct intel_bb *ibb,
> +			 struct intel_buf *src, struct intel_buf *dst)
> +{
> +	intel_bb_emit_blt_copy(ibb,
> +			       src, 0, 0, src->surface[0].stride,
> +			       dst, 0, 0, dst->surface[0].stride,
> +			       intel_buf_width(dst),
> +			       intel_buf_height(dst),
> +			       dst->bpp);
> +}
> +
> +/**
> + * SUBTEST: blit-%s
> + * Description: Run blit on %arg[1] allocator
> + * Run type: BAT
> + *
> + * arg[1]:
> + *
> + * @simple:				simple
> + * @reloc:				reloc
> + */
> +static void blit(struct buf_ops *bops, uint8_t allocator_type)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf *src, *dst;
> +	uint64_t poff_src, poff_dst;
> +	uint64_t flags = 0;
> +
> +	ibb = intel_bb_create_with_allocator(xe, 0, NULL, PAGE_SIZE,
> +					     allocator_type);
> +	flags |= I915_EXEC_NO_RELOC;
> +
> +	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
> +	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
> +
> +	if (buf_info) {
> +		print_buf(src, "src");
> +		print_buf(dst, "dst");
> +	}
> +
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	__emit_blit(ibb, src, dst);
> +	intel_bb_emit_bbe(ibb);
> +	intel_bb_flush_blit(ibb);
> +	intel_bb_sync(ibb);
> +	intel_bb_reset(ibb, false);
> +	check_buf(dst, COLOR_CC);
> +
> +	poff_src = intel_bb_get_object_offset(ibb, src->handle);
> +	poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
> +
> +	/* Add buffers again */
> +	intel_bb_add_intel_buf(ibb, src, false);
> +	intel_bb_add_intel_buf(ibb, dst, true);
> +
> +	igt_assert_f(poff_src == src->addr.offset,
> +		     "prev src addr: %" PRIx64 " <> src addr %" PRIx64 "\n",
> +		     poff_src, src->addr.offset);
> +	igt_assert_f(poff_dst == dst->addr.offset,
> +		     "prev dst addr: %" PRIx64 " <> dst addr %" PRIx64 "\n",
> +		     poff_dst, dst->addr.offset);
> +
> +	fill_buf(src, COLOR_77);
> +	fill_buf(dst, COLOR_00);
> +
> +	__emit_blit(ibb, src, dst);
> +	intel_bb_emit_bbe(ibb);
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +	check_buf(dst, COLOR_77);
> +
> +	intel_bb_emit_bbe(ibb);
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +	check_buf(dst, COLOR_77);
> +
> +	intel_buf_destroy(src);
> +	intel_buf_destroy(dst);
> +	intel_bb_destroy(ibb);
> +}
> +
> +static void scratch_buf_init(struct buf_ops *bops,
> +			     struct intel_buf *buf,
> +			     int width, int height,
> +			     uint32_t req_tiling,
> +			     enum i915_compression compression)
> +{
> +	int fd = buf_ops_get_fd(bops);
> +	int bpp = 32;
> +
> +	/*
> +	 * We use system memory even if vram is possible because wc mapping
> +	 * is extremely slow.
> +	 */
> +	intel_buf_init_in_region(bops, buf, width, height, bpp, 0,
> +				 req_tiling, compression,
> +				 system_memory(fd));
> +
> +	igt_assert(intel_buf_width(buf) == width);
> +	igt_assert(intel_buf_height(buf) == height);
> +}
> +
> +static void scratch_buf_draw_pattern(struct buf_ops *bops,
> +				     struct intel_buf *buf,
> +				     int x, int y, int w, int h,
> +				     int cx, int cy, int cw, int ch,
> +				     bool use_alternate_colors)
> +{
> +	cairo_surface_t *surface;
> +	cairo_pattern_t *pat;
> +	cairo_t *cr;
> +	void *linear;
> +
> +	linear = alloc_aligned(buf->surface[0].size);
> +
> +	surface = cairo_image_surface_create_for_data(linear,
> +						      CAIRO_FORMAT_RGB24,
> +						      intel_buf_width(buf),
> +						      intel_buf_height(buf),
> +						      buf->surface[0].stride);
> +
> +	cr = cairo_create(surface);
> +
> +	cairo_rectangle(cr, cx, cy, cw, ch);
> +	cairo_clip(cr);
> +
> +	pat = cairo_pattern_create_mesh();
> +	cairo_mesh_pattern_begin_patch(pat);
> +	cairo_mesh_pattern_move_to(pat, x,   y);
> +	cairo_mesh_pattern_line_to(pat, x+w, y);
> +	cairo_mesh_pattern_line_to(pat, x+w, y+h);
> +	cairo_mesh_pattern_line_to(pat, x,   y+h);
> +	if (use_alternate_colors) {
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 0, 0.0, 1.0, 1.0);
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 1, 1.0, 0.0, 1.0);
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 2, 1.0, 1.0, 0.0);
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 3, 0.0, 0.0, 0.0);
> +	} else {
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 0, 1.0, 0.0, 0.0);
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 1, 0.0, 1.0, 0.0);
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 2, 0.0, 0.0, 1.0);
> +		cairo_mesh_pattern_set_corner_color_rgb(pat, 3, 1.0, 1.0, 1.0);
> +	}
> +	cairo_mesh_pattern_end_patch(pat);
> +
> +	cairo_rectangle(cr, x, y, w, h);
> +	cairo_set_source(cr, pat);
> +	cairo_fill(cr);
> +	cairo_pattern_destroy(pat);
> +
> +	cairo_destroy(cr);
> +
> +	cairo_surface_destroy(surface);
> +
> +	linear_to_intel_buf(bops, buf, linear);
> +
> +	free(linear);
> +}
> +
> +#define GROUP_SIZE 4096
> +static int compare_detail(const uint32_t *ptr1, uint32_t *ptr2,
> +			  uint32_t size)
> +{
> +	int i, ok = 0, fail = 0;
> +	int groups = size / GROUP_SIZE;
> +	int *hist = calloc(groups, sizeof(int));
> +
> +	igt_debug("size: %d, group_size: %d, groups: %d\n",
> +		  size, GROUP_SIZE, groups);
> +
> +	for (i = 0; i < size / sizeof(uint32_t); i++) {
> +		if (ptr1[i] == ptr2[i]) {
> +			ok++;
> +		} else {
> +			fail++;
> +			hist[i * sizeof(uint32_t) / GROUP_SIZE]++;
> +		}
> +	}
> +
> +	for (i = 0; i < groups; i++) {
> +		if (hist[i])
> +			igt_debug("[group %4x]: %d\n", i, hist[i]);
> +	}
> +	free(hist);
> +
> +	igt_debug("ok: %d, fail: %d\n", ok, fail);
> +
> +	return fail;
> +}
> +
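The comparison loop in `compare_detail()` above buckets mismatching dwords by the 4 KiB group they fall in, so a failure report can point at which pages diverged rather than just a total. Its core logic, extracted into a standalone sketch (the `count_mismatches` helper name is hypothetical; `hist` must hold `size / GROUP_SIZE` entries):

```c
#include <stdint.h>

#define GROUP_SIZE 4096

/*
 * Count mismatching dwords between two equally sized buffers and
 * histogram each mismatch into the 4 KiB group its byte offset
 * falls in. Returns the total number of mismatches.
 */
static int count_mismatches(const uint32_t *a, const uint32_t *b,
			    uint32_t size, int *hist)
{
	int fail = 0;

	for (uint32_t i = 0; i < size / sizeof(uint32_t); i++) {
		if (a[i] != b[i]) {
			fail++;
			/* Convert dword index back to a byte offset, then group. */
			hist[i * sizeof(uint32_t) / GROUP_SIZE]++;
		}
	}

	return fail;
}
```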
> +static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2,
> +			 bool detail_compare)
> +{
> +	void *ptr1, *ptr2;
> +	int fd1, fd2, ret;
> +
> +	igt_assert(buf1->surface[0].size == buf2->surface[0].size);
> +
> +	fd1 = buf_ops_get_fd(buf1->bops);
> +	fd2 = buf_ops_get_fd(buf2->bops);
> +
> +	ptr1 = xe_bo_map(fd1, buf1->handle, buf1->surface[0].size);
> +	ptr2 = xe_bo_map(fd2, buf2->handle, buf2->surface[0].size);
> +	ret = memcmp(ptr1, ptr2, buf1->surface[0].size);
> +	if (detail_compare)
> +		ret = compare_detail(ptr1, ptr2, buf1->surface[0].size);
> +
> +	munmap(ptr1, buf1->surface[0].size);
> +	munmap(ptr2, buf2->surface[0].size);
> +
> +	return ret;
> +}
> +
> +#define LINELEN 76ul
> +static int dump_base64(const char *name, struct intel_buf *buf)
> +{
> +	void *ptr;
> +	int fd, ret;
> +	uLongf outsize = buf->surface[0].size * 3 / 2;
> +	Bytef *destbuf = malloc(outsize);
> +	gchar *str, *pos;
> +
> +	fd = buf_ops_get_fd(buf->bops);
> +
> +	ptr = gem_mmap__device_coherent(fd, buf->handle, 0,
> +					buf->surface[0].size, PROT_READ);
> +
> +	ret = compress2(destbuf, &outsize, ptr, buf->surface[0].size,
> +			Z_BEST_COMPRESSION);
> +	if (ret != Z_OK) {
> +		igt_warn("error compressing, ret: %d\n", ret);
> +	} else {
> +		igt_info("compressed %" PRIu64 " -> %lu\n",
> +			 buf->surface[0].size, outsize);
> +
> +		igt_info("--- %s ---\n", name);
> +		pos = str = g_base64_encode(destbuf, outsize);
> +		outsize = strlen(str);
> +		while (pos) {
> +			char line[LINELEN + 1];
> +			int to_copy = min(LINELEN, outsize);
> +
> +			memcpy(line, pos, to_copy);
> +			line[to_copy] = 0;
> +			igt_info("%s\n", line);
> +			pos += LINELEN;
> +			outsize -= to_copy;
> +
> +			if (outsize == 0)
> +				break;
> +		}
> +		free(str);
> +	}
> +
> +	munmap(ptr, buf->surface[0].size);
> +	free(destbuf);
> +
> +	return ret;
> +}
> +
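The wrapping loop at the end of `dump_base64()` above splits the encoded string into `LINELEN`-sized chunks for logging. The same logic as a standalone sketch (the `wrap_lines` helper and its callback are hypothetical names, not IGT API):

```c
#include <stddef.h>
#include <string.h>

#define LINELEN 76ul

/*
 * Split a NUL-terminated string into LINELEN-sized chunks (the last
 * one may be shorter), handing each chunk to cb. Returns the number
 * of lines emitted.
 */
static int wrap_lines(const char *str, void (*cb)(const char *line))
{
	size_t left = strlen(str);
	const char *pos = str;
	int lines = 0;

	while (left) {
		char line[LINELEN + 1];
		size_t to_copy = left < LINELEN ? left : LINELEN;

		memcpy(line, pos, to_copy);
		line[to_copy] = '\0';
		if (cb)
			cb(line);
		pos += to_copy;
		left -= to_copy;
		lines++;
	}

	return lines;
}
```

76 characters per line keeps the base64 output within the conventional MIME line-length limit, which is presumably why the patch picked that value.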
> +static int __do_intel_bb_blit(struct buf_ops *bops, uint32_t tiling)
> +{
> +	struct intel_bb *ibb;
> +	const int width = 1024;
> +	const int height = 1024;
> +	struct intel_buf src, dst, final;
> +	char name[128];
> +	int xe = buf_ops_get_fd(bops), fails;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
> +			 I915_COMPRESSION_NONE);
> +	scratch_buf_init(bops, &dst, width, height, tiling,
> +			 I915_COMPRESSION_NONE);
> +	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
> +			 I915_COMPRESSION_NONE);
> +
> +	if (buf_info) {
> +		intel_buf_print(&src);
> +		intel_buf_print(&dst);
> +	}
> +
> +	scratch_buf_draw_pattern(bops, &src,
> +				 0, 0, width, height,
> +				 0, 0, width, height, 0);
> +
> +	intel_bb_blt_copy(ibb,
> +			  &src, 0, 0, src.surface[0].stride,
> +			  &dst, 0, 0, dst.surface[0].stride,
> +			  intel_buf_width(&dst),
> +			  intel_buf_height(&dst),
> +			  dst.bpp);
> +
> +	intel_bb_blt_copy(ibb,
> +			  &dst, 0, 0, dst.surface[0].stride,
> +			  &final, 0, 0, final.surface[0].stride,
> +			  intel_buf_width(&dst),
> +			  intel_buf_height(&dst),
> +			  dst.bpp);
> +
> +	igt_assert(intel_bb_sync(ibb) == 0);
> +	intel_bb_destroy(ibb);
> +
> +	if (write_png) {
> +		snprintf(name, sizeof(name) - 1,
> +			 "bb_blit_dst_tiling_%d.png", tiling);
> +		intel_buf_write_to_png(&src, "bb_blit_src_tiling_none.png");
> +		intel_buf_write_to_png(&dst, name);
> +		intel_buf_write_to_png(&final, "bb_blit_final_tiling_none.png");
> +	}
> +
> +	/* We'll fail on src <-> final compare so just warn */
> +	if (tiling == I915_TILING_NONE) {
> +		if (compare_bufs(&src, &dst, false) > 0)
> +			igt_warn("none->none blit failed!");
> +	} else {
> +		if (compare_bufs(&src, &dst, false) == 0)
> +			igt_warn("none->tiled blit failed!");
> +	}
> +
> +	fails = compare_bufs(&src, &final, true);
> +
> +	intel_buf_close(bops, &src);
> +	intel_buf_close(bops, &dst);
> +	intel_buf_close(bops, &final);
> +
> +	return fails;
> +}
> +
> +/**
> + * SUBTEST: intel-bb-blit-%s
> + * Description: Run intel-bb blit with %arg[1] tiling
> + * Run type: BAT
> + *
> + * arg[1]:
> + *
> + * @none:				none
> + * @x:					x
> + * @y:					y
> + */
> +static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling)
> +{
> +	int i, fails = 0, xe = buf_ops_get_fd(bops);
> +
> +	/* We'll fix it for gen2/3 later. */
> +	igt_require(intel_gen(intel_get_drm_devid(xe)) > 3);
> +
> +	for (i = 0; i < loops; i++)
> +		fails += __do_intel_bb_blit(bops, tiling);
> +
> +	igt_assert_f(fails == 0, "intel-bb-blit (tiling: %d) fails: %d\n",
> +		     tiling, fails);
> +}
> +
> +/**
> + * SUBTEST: offset-control
> + * Description: check offset is kept on default simple allocator
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void offset_control(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	struct intel_buf *src, *dst1, *dst2, *dst3;
> +	uint64_t poff_src, poff_dst1, poff_dst2;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
> +	dst1 = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
> +	dst2 = create_buf(bops, WIDTH, HEIGHT, COLOR_77);
> +
> +	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
> +			    src->addr.offset, 0, false);
> +	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
> +			    dst1->addr.offset, 0, true);
> +	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
> +			    dst2->addr.offset, 0, true);
> +
> +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> +	intel_bb_ptr_align(ibb, 8);
> +
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
> +
> +	if (buf_info) {
> +		print_buf(src, "src ");
> +		print_buf(dst1, "dst1");
> +		print_buf(dst2, "dst2");
> +	}
> +
> +	poff_src = src->addr.offset;
> +	poff_dst1 = dst1->addr.offset;
> +	poff_dst2 = dst2->addr.offset;
> +	intel_bb_reset(ibb, true);
> +
> +	dst3 = create_buf(bops, WIDTH, HEIGHT, COLOR_33);
> +	intel_bb_add_object(ibb, dst3->handle, intel_buf_bo_size(dst3),
> +			    dst3->addr.offset, 0, true);
> +	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
> +			    src->addr.offset, 0, false);
> +	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
> +			    dst1->addr.offset, 0, true);
> +	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
> +			    dst2->addr.offset, 0, true);
> +
> +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> +	intel_bb_ptr_align(ibb, 8);
> +
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
> +	intel_bb_sync(ibb);
> +	intel_bb_reset(ibb, true);
> +
> +	igt_assert(poff_src == src->addr.offset);
> +	igt_assert(poff_dst1 == dst1->addr.offset);
> +	igt_assert(poff_dst2 == dst2->addr.offset);
> +
> +	if (buf_info) {
> +		print_buf(src, "src ");
> +		print_buf(dst1, "dst1");
> +		print_buf(dst2, "dst2");
> +	}
> +
> +	intel_buf_destroy(src);
> +	intel_buf_destroy(dst1);
> +	intel_buf_destroy(dst2);
> +	intel_buf_destroy(dst3);
> +	intel_bb_destroy(ibb);
> +}
> +
> +/*
> + * Idea of the test is to verify delta is properly added to address
> + * when emit_reloc() is called.
> + */
> +
> +/**
> + * SUBTEST: delta-check
> + * Description: check delta is honoured in intel-bb pipelines
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +#define DELTA_BUFFERS 3
> +static void delta_check(struct buf_ops *bops)
> +{
> +	const uint32_t expected = 0x1234abcd;
> +	int xe = buf_ops_get_fd(bops);
> +	uint32_t *ptr, hi, lo, val;
> +	struct intel_buf *buf;
> +	struct intel_bb *ibb;
> +	uint64_t offset;
> +	uint64_t obj_size = xe_get_default_alignment(xe) + 0x2000;
> +	uint64_t obj_offset = (1ULL << 32) - xe_get_default_alignment(xe);
> +	uint64_t delta = xe_get_default_alignment(xe) + 0x1000;
> +
> +	ibb = intel_bb_create_with_allocator(xe, 0, NULL, PAGE_SIZE,
> +					     INTEL_ALLOCATOR_SIMPLE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	buf = create_buf(bops, obj_size, 0x1, COLOR_CC);
> +	buf->addr.offset = obj_offset;
> +	intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
> +			    buf->addr.offset, 0, false);
> +
> +	intel_bb_out(ibb, MI_STORE_DWORD_IMM_GEN4);
> +	intel_bb_emit_reloc(ibb, buf->handle,
> +			    I915_GEM_DOMAIN_RENDER,
> +			    I915_GEM_DOMAIN_RENDER,
> +			    delta, buf->addr.offset);
> +	intel_bb_out(ibb, expected);
> +
> +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> +	intel_bb_ptr_align(ibb, 8);
> +
> +	intel_bb_exec(ibb, intel_bb_offset(ibb), I915_EXEC_DEFAULT, false);
> +	intel_bb_sync(ibb);
> +
> +	/* Buffer should be @ obj_offset */
> +	offset = intel_bb_get_object_offset(ibb, buf->handle);
> +	igt_assert_eq_u64(offset, obj_offset);
> +
> +	ptr = xe_bo_map(xe, ibb->handle, ibb->size);
> +	lo = ptr[1];
> +	hi = ptr[2];
> +	gem_munmap(ptr, ibb->size);
> +
> +	ptr = xe_bo_map(xe, buf->handle, intel_buf_size(buf));
> +	val = ptr[delta / sizeof(uint32_t)];
> +	gem_munmap(ptr, intel_buf_size(buf));
> +
> +	intel_buf_destroy(buf);
> +	intel_bb_destroy(ibb);
> +
> +	/* Assert after all resources are freed */
> +	igt_assert_f(lo == 0x1000 && hi == 0x1,
> +		     "intel-bb doesn't properly handle delta in emit relocation\n");
> +	igt_assert_f(val == expected,
> +		     "Address doesn't contain expected [%x] value [%x]\n",
> +		     expected, val);
> +}
> +
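The delta-check subtest above verifies that `emit_reloc()` really adds the delta to the object's base address: `MI_STORE_DWORD_IMM` takes the 64-bit target as two dwords (low, then high), and with the object placed just under the 4 GiB boundary the delta pushes the address past it, so a dropped delta shows up immediately in the high dword. A sketch of the address split being asserted (assuming a 64 KiB default alignment for the example; `split_address` is an illustrative helper):

```c
#include <stdint.h>

/*
 * Split a 64-bit GPU address into the (low, high) dword pair that
 * MI_STORE_DWORD_IMM encodes in the batch.
 */
static void split_address(uint64_t addr, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)addr;
	*hi = (uint32_t)(addr >> 32);
}
```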
> +/**
> + * SUBTEST: full-batch
> + * Description: check that a completely filled bb executes correctly
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static void full_batch(struct buf_ops *bops)
> +{
> +	int xe = buf_ops_get_fd(bops);
> +	struct intel_bb *ibb;
> +	int i;
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	for (i = 0; i < PAGE_SIZE / sizeof(uint32_t) - 1; i++)
> +		intel_bb_out(ibb, 0);
> +	intel_bb_emit_bbe(ibb);
> +
> +	igt_assert(intel_bb_offset(ibb) == PAGE_SIZE);
> +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> +	intel_bb_reset(ibb, false);
> +
> +	intel_bb_destroy(ibb);
> +}
> +
> +/**
> + * SUBTEST: render
> + * Description: check intel-bb render pipeline
> + * Run type: FULL
> + * TODO: change ``'Run type' == FULL`` to a better category
> + */
> +static int render(struct buf_ops *bops, uint32_t tiling,
> +		  uint32_t width, uint32_t height)
> +{
> +	struct intel_bb *ibb;
> +	struct intel_buf src, dst, final;
> +	int xe = buf_ops_get_fd(bops);
> +	uint32_t fails = 0;
> +	char name[128];
> +	uint32_t devid = intel_get_drm_devid(xe);
> +	igt_render_copyfunc_t render_copy = NULL;
> +
> +	igt_debug("%s() gen: %d\n", __func__, intel_gen(devid));
> +
> +	ibb = intel_bb_create(xe, PAGE_SIZE);
> +
> +	if (debug_bb)
> +		intel_bb_set_debug(ibb, true);
> +
> +	if (print_base64)
> +		intel_bb_set_dump_base64(ibb, true);
> +
> +	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
> +			 I915_COMPRESSION_NONE);
> +	scratch_buf_init(bops, &dst, width, height, tiling,
> +			 I915_COMPRESSION_NONE);
> +	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
> +			 I915_COMPRESSION_NONE);
> +
> +	scratch_buf_draw_pattern(bops, &src,
> +				 0, 0, width, height,
> +				 0, 0, width, height, 0);
> +
> +	render_copy = igt_get_render_copyfunc(devid);
> +	igt_assert(render_copy);
> +
> +	render_copy(ibb,
> +		    &src,
> +		    0, 0, width, height,
> +		    &dst,
> +		    0, 0);
> +
> +	render_copy(ibb,
> +		    &dst,
> +		    0, 0, width, height,
> +		    &final,
> +		    0, 0);
> +
> +	intel_bb_sync(ibb);
> +	intel_bb_destroy(ibb);
> +
> +	if (write_png) {
> +		snprintf(name, sizeof(name) - 1,
> +			 "render_dst_tiling_%d.png", tiling);
> +		intel_buf_write_to_png(&src, "render_src_tiling_none.png");
> +		intel_buf_write_to_png(&dst, name);
> +		intel_buf_write_to_png(&final, "render_final_tiling_none.png");
> +	}
> +
> +	/* We'll fail on src <-> final compare so just warn */
> +	if (tiling == I915_TILING_NONE) {
> +		if (compare_bufs(&src, &dst, false) > 0)
> +			igt_warn("%s: none->none failed!\n", __func__);
> +	} else {
> +		if (compare_bufs(&src, &dst, false) == 0)
> +			igt_warn("%s: none->tiled failed!\n", __func__);
> +	}
> +
> +	fails = compare_bufs(&src, &final, true);
> +
> +	if (fails && print_base64) {
> +		dump_base64("src", &src);
> +		dump_base64("dst", &dst);
> +		dump_base64("final", &final);
> +	}
> +
> +	intel_buf_close(bops, &src);
> +	intel_buf_close(bops, &dst);
> +	intel_buf_close(bops, &final);
> +
> +	igt_assert_f(fails == 0, "%s: (tiling: %d) fails: %d\n",
> +		     __func__, tiling, fails);
> +
> +	return fails;
> +}
> +
> +static int opt_handler(int opt, int opt_index, void *data)
> +{
> +	switch (opt) {
> +	case 'd':
> +		debug_bb = true;
> +		break;
> +	case 'p':
> +		write_png = true;
> +		break;
> +	case 'i':
> +		buf_info = true;
> +		break;
> +	case 'b':
> +		print_base64 = true;
> +		break;
> +	default:
> +		return IGT_OPT_HANDLER_ERROR;
> +	}
> +
> +	return IGT_OPT_HANDLER_SUCCESS;
> +}
> +
> +const char *help_str =
> +	"  -d\tDebug bb\n"
> +	"  -p\tWrite surfaces to png\n"
> +	"  -i\tPrint buffer info\n"
> +	"  -b\tDump to base64 (bb and images)\n"
> +	;
> +
> +igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
> +{
> +	int xe, i;
> +	struct buf_ops *bops;
> +	uint32_t width;
> +
> +	struct test {
> +		uint32_t tiling;
> +		const char *tiling_name;
> +	} tests[] = {
> +		{ I915_TILING_NONE, "none" },
> +		{ I915_TILING_X, "x" },
> +		{ I915_TILING_Y, "y" },
> +	};
> +
> +	igt_fixture {
> +		xe = drm_open_driver(DRIVER_XE);
> +		bops = buf_ops_create(xe);
> +		xe_device_get(xe);
> +	}
> +
> +	igt_describe("Ensure reset is possible on fresh bb");
> +	igt_subtest("reset-bb")
> +		reset_bb(bops);
> +
> +	igt_subtest_f("purge-bb")
> +		purge_bb(bops);
> +
> +	igt_subtest("simple-bb")
> +		simple_bb(bops, false);
> +
> +	igt_subtest("simple-bb-ctx")
> +		simple_bb(bops, true);
> +
> +	igt_subtest("bb-with-allocator")
> +		bb_with_allocator(bops);
> +
> +	igt_subtest("lot-of-buffers")
> +		lot_of_buffers(bops);
> +
> +	igt_subtest("add-remove-objects")
> +		add_remove_objects(bops);
> +
> +	igt_subtest("destroy-bb")
> +		destroy_bb(bops);
> +
> +	igt_subtest_with_dynamic("create-in-region") {
> +		uint64_t memreg = all_memory_regions(xe), region;
> +
> +		xe_for_each_mem_region(xe, memreg, region)
> +			igt_dynamic_f("region-%s", xe_region_name(region))
> +				create_in_region(bops, region);
> +	}
> +
> +	igt_subtest("blit-simple")
> +		blit(bops, INTEL_ALLOCATOR_SIMPLE);
> +
> +	igt_subtest("blit-reloc")
> +		blit(bops, INTEL_ALLOCATOR_RELOC);
> +
> +	igt_subtest("intel-bb-blit-none")
> +		do_intel_bb_blit(bops, 3, I915_TILING_NONE);
> +
> +	igt_subtest("intel-bb-blit-x")
> +		do_intel_bb_blit(bops, 3, I915_TILING_X);
> +
> +	igt_subtest("intel-bb-blit-y") {
> +		igt_require(intel_gen(intel_get_drm_devid(xe)) >= 6);
> +		do_intel_bb_blit(bops, 3, I915_TILING_Y);
> +	}
> +
> +	igt_subtest("offset-control")
> +		offset_control(bops);
> +
> +	igt_subtest("delta-check")
> +		delta_check(bops);
> +
> +	igt_subtest("full-batch")
> +		full_batch(bops);
> +
> +	igt_subtest_with_dynamic("render") {
> +		for (i = 0; i < ARRAY_SIZE(tests); i++) {
> +			const struct test *t = &tests[i];
> +
> +			for (width = 512; width <= 1024; width += 512)
> +				igt_dynamic_f("render-%s-%u", t->tiling_name, width)
> +					render(bops, t->tiling, width, width);
> +		}
> +	}
> +
> +	igt_fixture {
> +		xe_device_put(xe);
> +		buf_ops_destroy(bops);
> +		close(xe);
> +	}
> +}
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v8 12/17] tests/xe-fast-feedback: Add xe_intel_bb test to BAT
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 12/17] tests/xe-fast-feedback: Add xe_intel_bb test to BAT Zbigniew Kempczyński
@ 2023-04-28  7:59   ` Kamil Konieczny
  0 siblings, 0 replies; 34+ messages in thread
From: Kamil Konieczny @ 2023-04-28  7:59 UTC (permalink / raw)
  To: igt-dev

On 2023-04-28 at 08:22:19 +0200, Zbigniew Kempczyński wrote:
> Verifies intel-bb integration with xe.
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com>
> Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

Acked-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  tests/intel-ci/xe-fast-feedback.testlist | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
> index 987e72ef4b..3d0603fd94 100644
> --- a/tests/intel-ci/xe-fast-feedback.testlist
> +++ b/tests/intel-ci/xe-fast-feedback.testlist
> @@ -106,6 +106,24 @@ igt@xe_guc_pc@freq_range_idle
>  igt@xe_guc_pc@rc6_on_idle
>  igt@xe_guc_pc@rc0_on_exec
>  igt@xe_huc_copy@huc_copy
> +igt@xe_intel_bb@add-remove-objects
> +igt@xe_intel_bb@bb-with-allocator
> +igt@xe_intel_bb@blit-reloc
> +igt@xe_intel_bb@blit-simple
> +igt@xe_intel_bb@create-in-region
> +igt@xe_intel_bb@delta-check
> +igt@xe_intel_bb@destroy-bb
> +igt@xe_intel_bb@full-batch
> +igt@xe_intel_bb@intel-bb-blit-none
> +igt@xe_intel_bb@intel-bb-blit-x
> +igt@xe_intel_bb@intel-bb-blit-y
> +igt@xe_intel_bb@lot-of-buffers
> +igt@xe_intel_bb@offset-control
> +igt@xe_intel_bb@purge-bb
> +igt@xe_intel_bb@render
> +igt@xe_intel_bb@reset-bb
> +igt@xe_intel_bb@simple-bb
> +igt@xe_intel_bb@simple-bb-ctx
>  igt@xe_mmap@system
>  igt@xe_mmap@vram
>  igt@xe_mmap@vram-system
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check if intel-bb Xe support correctness
  2023-04-28  7:58   ` Kamil Konieczny
@ 2023-04-28  8:18     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  8:18 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Christoph Manszewski

On Fri, Apr 28, 2023 at 09:58:59AM +0200, Kamil Konieczny wrote:
> Hi,
> 
> On 2023-04-28 at 08:22:18 +0200, Zbigniew Kempczyński wrote:
> > As we're reusing intel-bb for Xe, we need to check that it behaves
> > correctly for buffer handling and submission.
> > 
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>
> 
> imho it will be clearer to state that you added a new test here,
> so instead of:
> 
> tests/xe_intel_bb: Check if intel-bb Xe support correctness
> 
> this will be better:
> 
> tests/xe_intel_bb: add new test for intel-bb on Xe platform
> 
> Then you describe rationale in message. It is up to you if you
> would like to change this, so with or without it:
> 
> Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
> 
> --
> Kamil
> 

Thank you for the review. I'm going to rename the subject before merging,
once I get the CI results.

--
Zbigniew

> > 
> > ---
> > v5: to run test quick use system memory instead of vram (mapping
> >     on system is wb)
> > ---
> >  tests/meson.build      |    1 +
> >  tests/xe/xe_intel_bb.c | 1185 ++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 1186 insertions(+)
> >  create mode 100644 tests/xe/xe_intel_bb.c
> > 
> > diff --git a/tests/meson.build b/tests/meson.build
> > index 8909cfa8fd..b026fac48b 100644
> > --- a/tests/meson.build
> > +++ b/tests/meson.build
> > @@ -256,6 +256,7 @@ xe_progs = [
> >  	'xe_exec_threads',
> >  	'xe_guc_pc',
> >  	'xe_huc_copy',
> > +	'xe_intel_bb',
> >  	'xe_mmap',
> >  	'xe_mmio',
> >  	'xe_module_load',
> > diff --git a/tests/xe/xe_intel_bb.c b/tests/xe/xe_intel_bb.c
> > new file mode 100644
> > index 0000000000..35d61608e1
> > --- /dev/null
> > +++ b/tests/xe/xe_intel_bb.c
> > @@ -0,0 +1,1185 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#include <cairo.h>
> > +#include <errno.h>
> > +#include <fcntl.h>
> > +#include <glib.h>
> > +#include <inttypes.h>
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <string.h>
> > +#include <sys/ioctl.h>
> > +#include <sys/stat.h>
> > +#include <unistd.h>
> > +#include <zlib.h>
> > +
> > +#include "igt.h"
> > +#include "igt_crc.h"
> > +#include "intel_bufops.h"
> > +#include "xe/xe_ioctl.h"
> > +#include "xe/xe_query.h"
> > +
> > +/**
> > + * TEST: Basic tests for intel-bb xe functionality
> > + * Category: Software building block
> > + * Sub-category: xe
> > + * Test category: functionality test
> > + */
> > +
> > +#define PAGE_SIZE 4096
> > +
> > +#define WIDTH	64
> > +#define HEIGHT	64
> > +#define STRIDE	(WIDTH * 4)
> > +#define SIZE	(HEIGHT * STRIDE)
> > +
> > +#define COLOR_00	0x00
> > +#define COLOR_33	0x33
> > +#define COLOR_77	0x77
> > +#define COLOR_CC	0xcc
> > +
> > +IGT_TEST_DESCRIPTION("xe_intel_bb API check.");
> > +
> > +static bool debug_bb;
> > +static bool write_png;
> > +static bool buf_info;
> > +static bool print_base64;
> > +
> > +static void *alloc_aligned(uint64_t size)
> > +{
> > +	void *p;
> > +
> > +	igt_assert_eq(posix_memalign(&p, 16, size), 0);
> > +
> > +	return p;
> > +}
> > +
> > +static void fill_buf(struct intel_buf *buf, uint8_t color)
> > +{
> > +	uint8_t *ptr;
> > +	int xe = buf_ops_get_fd(buf->bops);
> > +	int i;
> > +
> > +	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
> > +
> > +	for (i = 0; i < buf->surface[0].size; i++)
> > +		ptr[i] = color;
> > +
> > +	munmap(ptr, buf->surface[0].size);
> > +}
> > +
> > +static void check_buf(struct intel_buf *buf, uint8_t color)
> > +{
> > +	uint8_t *ptr;
> > +	int xe = buf_ops_get_fd(buf->bops);
> > +	int i;
> > +
> > +	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
> > +
> > +	for (i = 0; i < buf->surface[0].size; i++)
> > +		igt_assert(ptr[i] == color);
> > +
> > +	munmap(ptr, buf->surface[0].size);
> > +}
> > +
> > +static struct intel_buf *
> > +create_buf(struct buf_ops *bops, int width, int height, uint8_t color)
> > +{
> > +	struct intel_buf *buf;
> > +
> > +	buf = calloc(1, sizeof(*buf));
> > +	igt_assert(buf);
> > +
> > +	intel_buf_init(bops, buf, width/4, height, 32, 0, I915_TILING_NONE, 0);
> > +	fill_buf(buf, color);
> > +
> > +	return buf;
> > +}
> > +
> > +static void print_buf(struct intel_buf *buf, const char *name)
> > +{
> > +	uint8_t *ptr;
> > +	int xe = buf_ops_get_fd(buf->bops);
> > +
> > +	ptr = xe_bo_map(xe, buf->handle, buf->surface[0].size);
> > +
> > +	igt_debug("[%s] Buf handle: %d, size: %" PRIu64
> > +		  ", v: 0x%02x, presumed_addr: %p\n",
> > +		  name, buf->handle, buf->surface[0].size, ptr[0],
> > +		  from_user_pointer(buf->addr.offset));
> > +	munmap(ptr, buf->surface[0].size);
> > +}
> > +
> > +/**
> > + * SUBTEST: reset-bb
> > + * Description: check bb reset
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void reset_bb(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	intel_bb_reset(ibb, false);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: purge-bb
> > + * Description: check bb reset == full (purge)
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void purge_bb(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_buf *buf;
> > +	struct intel_bb *ibb;
> > +	uint64_t offset0, offset1;
> > +
> > +	buf = intel_buf_create(bops, 512, 512, 32, 0, I915_TILING_NONE,
> > +			       I915_COMPRESSION_NONE);
> > +	ibb = intel_bb_create(xe, 4096);
> > +	intel_bb_set_debug(ibb, true);
> > +
> > +	intel_bb_add_intel_buf(ibb, buf, false);
> > +	offset0 = buf->addr.offset;
> > +
> > +	intel_bb_reset(ibb, true);
> > +	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
> > +
> > +	intel_bb_add_intel_buf(ibb, buf, false);
> > +	offset1 = buf->addr.offset;
> > +
> > +	igt_assert(offset0 == offset1);
> > +
> > +	intel_buf_destroy(buf);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: simple-%s
> > + * Description: Run simple bb xe %arg[1] test
> > + * Run type: BAT
> > + *
> > + * arg[1]:
> > + *
> > + * @bb:     bb
> > + * @bb-ctx: bb-ctx
> > + */
> > +static void simple_bb(struct buf_ops *bops, bool new_context)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	uint32_t ctx = 0;
> > +
> > +	ibb = intel_bb_create_with_allocator(xe, ctx, NULL, PAGE_SIZE,
> > +					     INTEL_ALLOCATOR_SIMPLE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> > +	intel_bb_ptr_align(ibb, 8);
> > +
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +
> > +	/* Check we're safe with reset and no double-free will occur */
> > +	intel_bb_reset(ibb, true);
> > +	intel_bb_reset(ibb, false);
> > +	intel_bb_reset(ibb, true);
> > +
> > +	if (new_context) {
> > +		ctx = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > +		intel_bb_destroy(ibb);
> > +		ibb = intel_bb_create_with_context(xe, ctx, NULL, PAGE_SIZE);
> > +		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> > +		intel_bb_ptr_align(ibb, 8);
> > +		intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC,
> > +			      true);
> > +		xe_vm_destroy(xe, ctx);
> > +	}
> > +
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: bb-with-allocator
> > + * Description: check bb with passed allocator
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void bb_with_allocator(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf *src, *dst;
> > +	uint32_t ctx = 0;
> > +
> > +	ibb = intel_bb_create_with_allocator(xe, ctx, NULL, PAGE_SIZE,
> > +					     INTEL_ALLOCATOR_SIMPLE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
> > +			       I915_COMPRESSION_NONE);
> > +	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
> > +			       I915_COMPRESSION_NONE);
> > +
> > +	intel_bb_add_intel_buf(ibb, src, false);
> > +	intel_bb_add_intel_buf(ibb, dst, true);
> > +	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
> > +	intel_bb_remove_intel_buf(ibb, src);
> > +	intel_bb_remove_intel_buf(ibb, dst);
> > +
> > +	intel_buf_destroy(src);
> > +	intel_buf_destroy(dst);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: lot-of-buffers
> > + * Description: check running bb with many buffers
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +#define NUM_BUFS 500
> > +static void lot_of_buffers(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf *buf[NUM_BUFS];
> > +	int i;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> > +	intel_bb_ptr_align(ibb, 8);
> > +
> > +	for (i = 0; i < NUM_BUFS; i++) {
> > +		buf[i] = intel_buf_create(bops, 4096, 1, 8, 0, I915_TILING_NONE,
> > +					  I915_COMPRESSION_NONE);
> > +		if (i % 2)
> > +			intel_bb_add_intel_buf(ibb, buf[i], false);
> > +		else
> > +			intel_bb_add_intel_buf_with_alignment(ibb, buf[i],
> > +							      0x4000, false);
> > +	}
> > +
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +
> > +	for (i = 0; i < NUM_BUFS; i++)
> > +		intel_buf_destroy(buf[i]);
> > +
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: add-remove-objects
> > + * Description: check bb object manipulation (add + remove)
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void add_remove_objects(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf *src, *mid, *dst;
> > +	uint32_t offset;
> > +	const uint32_t width = 512;
> > +	const uint32_t height = 512;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	src = intel_buf_create(bops, width, height, 32, 0,
> > +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> > +	mid = intel_buf_create(bops, width, height, 32, 0,
> > +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> > +	dst = intel_buf_create(bops, width, height, 32, 0,
> > +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> > +
> > +	intel_bb_add_intel_buf(ibb, src, false);
> > +	intel_bb_add_intel_buf(ibb, mid, true);
> > +	intel_bb_remove_intel_buf(ibb, mid);
> > +	intel_bb_remove_intel_buf(ibb, mid);
> > +	intel_bb_remove_intel_buf(ibb, mid);
> > +	intel_bb_add_intel_buf(ibb, dst, true);
> > +
> > +	offset = intel_bb_emit_bbe(ibb);
> > +	intel_bb_exec(ibb, offset,
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +
> > +	intel_buf_destroy(src);
> > +	intel_buf_destroy(mid);
> > +	intel_buf_destroy(dst);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: destroy-bb
> > + * Description: check bb destroy/create
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void destroy_bb(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf *src, *mid, *dst;
> > +	uint32_t offset;
> > +	const uint32_t width = 512;
> > +	const uint32_t height = 512;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	src = intel_buf_create(bops, width, height, 32, 0,
> > +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> > +	mid = intel_buf_create(bops, width, height, 32, 0,
> > +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> > +	dst = intel_buf_create(bops, width, height, 32, 0,
> > +			       I915_TILING_NONE, I915_COMPRESSION_NONE);
> > +
> > +	intel_bb_add_intel_buf(ibb, src, false);
> > +	intel_bb_add_intel_buf(ibb, mid, true);
> > +	intel_bb_add_intel_buf(ibb, dst, true);
> > +
> > +	offset = intel_bb_emit_bbe(ibb);
> > +	intel_bb_exec(ibb, offset,
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +
> > +	/* Check destroy will detach intel_bufs */
> > +	intel_bb_destroy(ibb);
> > +	igt_assert(src->addr.offset == INTEL_BUF_INVALID_ADDRESS);
> > +	igt_assert(src->ibb == NULL);
> > +	igt_assert(mid->addr.offset == INTEL_BUF_INVALID_ADDRESS);
> > +	igt_assert(mid->ibb == NULL);
> > +	igt_assert(dst->addr.offset == INTEL_BUF_INVALID_ADDRESS);
> > +	igt_assert(dst->ibb == NULL);
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	intel_bb_add_intel_buf(ibb, src, false);
> > +	offset = intel_bb_emit_bbe(ibb);
> > +	intel_bb_exec(ibb, offset,
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +
> > +	intel_bb_destroy(ibb);
> > +	intel_buf_destroy(src);
> > +	intel_buf_destroy(mid);
> > +	intel_buf_destroy(dst);
> > +}
> > +
> > +/**
> > + * SUBTEST: create-in-region
> > + * Description: check size validation on available regions
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void create_in_region(struct buf_ops *bops, uint64_t region)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf buf = {};
> > +	uint32_t handle, offset;
> > +	uint64_t size;
> > +	int width = 64;
> > +	int height = 64;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	size = xe_min_page_size(xe, system_memory(xe));
> > +	handle = xe_bo_create_flags(xe, 0, size, system_memory(xe));
> > +	intel_buf_init_full(bops, handle, &buf,
> > +			    width/4, height, 32, 0,
> > +			    I915_TILING_NONE, 0,
> > +			    size, 0, region);
> > +	intel_buf_set_ownership(&buf, true);
> > +
> > +	intel_bb_add_intel_buf(ibb, &buf, false);
> > +	offset = intel_bb_emit_bbe(ibb);
> > +	intel_bb_exec(ibb, offset,
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +
> > +	intel_buf_close(bops, &buf);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +static void __emit_blit(struct intel_bb *ibb,
> > +			 struct intel_buf *src, struct intel_buf *dst)
> > +{
> > +	intel_bb_emit_blt_copy(ibb,
> > +			       src, 0, 0, src->surface[0].stride,
> > +			       dst, 0, 0, dst->surface[0].stride,
> > +			       intel_buf_width(dst),
> > +			       intel_buf_height(dst),
> > +			       dst->bpp);
> > +}
> > +
> > +/**
> > + * SUBTEST: blit-%s
> > + * Description: Run blit on %arg[1] allocator
> > + * Run type: BAT
> > + *
> > + * arg[1]:
> > + *
> > + * @simple:				simple
> > + * @reloc:				reloc
> > + */
> > +static void blit(struct buf_ops *bops, uint8_t allocator_type)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf *src, *dst;
> > +	uint64_t poff_src, poff_dst;
> > +	uint64_t flags = 0;
> > +
> > +	ibb = intel_bb_create_with_allocator(xe, 0, NULL, PAGE_SIZE,
> > +					     allocator_type);
> > +	flags |= I915_EXEC_NO_RELOC;
> > +
> > +	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
> > +	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
> > +
> > +	if (buf_info) {
> > +		print_buf(src, "src");
> > +		print_buf(dst, "dst");
> > +	}
> > +
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	__emit_blit(ibb, src, dst);
> > +	intel_bb_emit_bbe(ibb);
> > +	intel_bb_flush_blit(ibb);
> > +	intel_bb_sync(ibb);
> > +	intel_bb_reset(ibb, false);
> > +	check_buf(dst, COLOR_CC);
> > +
> > +	poff_src = intel_bb_get_object_offset(ibb, src->handle);
> > +	poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
> > +
> > +	/* Add buffers again */
> > +	intel_bb_add_intel_buf(ibb, src, false);
> > +	intel_bb_add_intel_buf(ibb, dst, true);
> > +
> > +	igt_assert_f(poff_src == src->addr.offset,
> > +		     "prev src addr: %" PRIx64 " <> src addr %" PRIx64 "\n",
> > +		     poff_src, src->addr.offset);
> > +	igt_assert_f(poff_dst == dst->addr.offset,
> > +		     "prev dst addr: %" PRIx64 " <> dst addr %" PRIx64 "\n",
> > +		     poff_dst, dst->addr.offset);
> > +
> > +	fill_buf(src, COLOR_77);
> > +	fill_buf(dst, COLOR_00);
> > +
> > +	__emit_blit(ibb, src, dst);
> > +	intel_bb_emit_bbe(ibb);
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +	check_buf(dst, COLOR_77);
> > +
> > +	intel_bb_emit_bbe(ibb);
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +	check_buf(dst, COLOR_77);
> > +
> > +	intel_buf_destroy(src);
> > +	intel_buf_destroy(dst);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +static void scratch_buf_init(struct buf_ops *bops,
> > +			     struct intel_buf *buf,
> > +			     int width, int height,
> > +			     uint32_t req_tiling,
> > +			     enum i915_compression compression)
> > +{
> > +	int fd = buf_ops_get_fd(bops);
> > +	int bpp = 32;
> > +
> > +	/*
> > +	 * We use system memory even if vram is possible because wc mapping
> > +	 * is extremely slow.
> > +	 */
> > +	intel_buf_init_in_region(bops, buf, width, height, bpp, 0,
> > +				 req_tiling, compression,
> > +				 system_memory(fd));
> > +
> > +	igt_assert(intel_buf_width(buf) == width);
> > +	igt_assert(intel_buf_height(buf) == height);
> > +}
> > +
> > +static void scratch_buf_draw_pattern(struct buf_ops *bops,
> > +				     struct intel_buf *buf,
> > +				     int x, int y, int w, int h,
> > +				     int cx, int cy, int cw, int ch,
> > +				     bool use_alternate_colors)
> > +{
> > +	cairo_surface_t *surface;
> > +	cairo_pattern_t *pat;
> > +	cairo_t *cr;
> > +	void *linear;
> > +
> > +	linear = alloc_aligned(buf->surface[0].size);
> > +
> > +	surface = cairo_image_surface_create_for_data(linear,
> > +						      CAIRO_FORMAT_RGB24,
> > +						      intel_buf_width(buf),
> > +						      intel_buf_height(buf),
> > +						      buf->surface[0].stride);
> > +
> > +	cr = cairo_create(surface);
> > +
> > +	cairo_rectangle(cr, cx, cy, cw, ch);
> > +	cairo_clip(cr);
> > +
> > +	pat = cairo_pattern_create_mesh();
> > +	cairo_mesh_pattern_begin_patch(pat);
> > +	cairo_mesh_pattern_move_to(pat, x,   y);
> > +	cairo_mesh_pattern_line_to(pat, x+w, y);
> > +	cairo_mesh_pattern_line_to(pat, x+w, y+h);
> > +	cairo_mesh_pattern_line_to(pat, x,   y+h);
> > +	if (use_alternate_colors) {
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 0, 0.0, 1.0, 1.0);
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 1, 1.0, 0.0, 1.0);
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 2, 1.0, 1.0, 0.0);
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 3, 0.0, 0.0, 0.0);
> > +	} else {
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 0, 1.0, 0.0, 0.0);
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 1, 0.0, 1.0, 0.0);
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 2, 0.0, 0.0, 1.0);
> > +		cairo_mesh_pattern_set_corner_color_rgb(pat, 3, 1.0, 1.0, 1.0);
> > +	}
> > +	cairo_mesh_pattern_end_patch(pat);
> > +
> > +	cairo_rectangle(cr, x, y, w, h);
> > +	cairo_set_source(cr, pat);
> > +	cairo_fill(cr);
> > +	cairo_pattern_destroy(pat);
> > +
> > +	cairo_destroy(cr);
> > +
> > +	cairo_surface_destroy(surface);
> > +
> > +	linear_to_intel_buf(bops, buf, linear);
> > +
> > +	free(linear);
> > +}
> > +
> > +#define GROUP_SIZE 4096
> > +static int compare_detail(const uint32_t *ptr1, uint32_t *ptr2,
> > +			  uint32_t size)
> > +{
> > +	int i, ok = 0, fail = 0;
> > +	int groups = size / GROUP_SIZE;
> > +	int *hist = calloc(groups, sizeof(*hist));
> > +
> > +	igt_debug("size: %d, group_size: %d, groups: %d\n",
> > +		  size, GROUP_SIZE, groups);
> > +
> > +	for (i = 0; i < size / sizeof(uint32_t); i++) {
> > +		if (ptr1[i] == ptr2[i]) {
> > +			ok++;
> > +		} else {
> > +			fail++;
> > +			hist[i * sizeof(uint32_t) / GROUP_SIZE]++;
> > +		}
> > +	}
> > +
> > +	for (i = 0; i < groups; i++) {
> > +		if (hist[i])
> > +			igt_debug("[group %4x]: %d\n", i, hist[i]);
> > +	}
> > +	free(hist);
> > +
> > +	igt_debug("ok: %d, fail: %d\n", ok, fail);
> > +
> > +	return fail;
> > +}
> > +
> > +static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2,
> > +			 bool detail_compare)
> > +{
> > +	void *ptr1, *ptr2;
> > +	int fd1, fd2, ret;
> > +
> > +	igt_assert(buf1->surface[0].size == buf2->surface[0].size);
> > +
> > +	fd1 = buf_ops_get_fd(buf1->bops);
> > +	fd2 = buf_ops_get_fd(buf2->bops);
> > +
> > +	ptr1 = xe_bo_map(fd1, buf1->handle, buf1->surface[0].size);
> > +	ptr2 = xe_bo_map(fd2, buf2->handle, buf2->surface[0].size);
> > +	ret = memcmp(ptr1, ptr2, buf1->surface[0].size);
> > +	if (detail_compare)
> > +		ret = compare_detail(ptr1, ptr2, buf1->surface[0].size);
> > +
> > +	munmap(ptr1, buf1->surface[0].size);
> > +	munmap(ptr2, buf2->surface[0].size);
> > +
> > +	return ret;
> > +}
> > +
> > +#define LINELEN 76ul
> > +static int dump_base64(const char *name, struct intel_buf *buf)
> > +{
> > +	void *ptr;
> > +	int fd, ret;
> > +	uLongf outsize = buf->surface[0].size * 3 / 2;
> > +	Bytef *destbuf = malloc(outsize);
> > +	gchar *str, *pos;
> > +
> > +	fd = buf_ops_get_fd(buf->bops);
> > +
> > +	ptr = gem_mmap__device_coherent(fd, buf->handle, 0,
> > +					buf->surface[0].size, PROT_READ);
> > +
> > +	ret = compress2(destbuf, &outsize, ptr, buf->surface[0].size,
> > +			Z_BEST_COMPRESSION);
> > +	if (ret != Z_OK) {
> > +		igt_warn("error compressing, ret: %d\n", ret);
> > +	} else {
> > +		igt_info("compressed %" PRIu64 " -> %lu\n",
> > +			 buf->surface[0].size, outsize);
> > +
> > +		igt_info("--- %s ---\n", name);
> > +		pos = str = g_base64_encode(destbuf, outsize);
> > +		outsize = strlen(str);
> > +		while (pos) {
> > +			char line[LINELEN + 1];
> > +			int to_copy = min(LINELEN, outsize);
> > +
> > +			memcpy(line, pos, to_copy);
> > +			line[to_copy] = 0;
> > +			igt_info("%s\n", line);
> > +			pos += LINELEN;
> > +			outsize -= to_copy;
> > +
> > +			if (outsize == 0)
> > +				break;
> > +		}
> > +		free(str);
> > +	}
> > +
> > +	munmap(ptr, buf->surface[0].size);
> > +	free(destbuf);
> > +
> > +	return ret;
> > +}
> > +
> > +static int __do_intel_bb_blit(struct buf_ops *bops, uint32_t tiling)
> > +{
> > +	struct intel_bb *ibb;
> > +	const int width = 1024;
> > +	const int height = 1024;
> > +	struct intel_buf src, dst, final;
> > +	char name[128];
> > +	int xe = buf_ops_get_fd(bops), fails;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
> > +			 I915_COMPRESSION_NONE);
> > +	scratch_buf_init(bops, &dst, width, height, tiling,
> > +			 I915_COMPRESSION_NONE);
> > +	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
> > +			 I915_COMPRESSION_NONE);
> > +
> > +	if (buf_info) {
> > +		intel_buf_print(&src);
> > +		intel_buf_print(&dst);
> > +	}
> > +
> > +	scratch_buf_draw_pattern(bops, &src,
> > +				 0, 0, width, height,
> > +				 0, 0, width, height, 0);
> > +
> > +	intel_bb_blt_copy(ibb,
> > +			  &src, 0, 0, src.surface[0].stride,
> > +			  &dst, 0, 0, dst.surface[0].stride,
> > +			  intel_buf_width(&dst),
> > +			  intel_buf_height(&dst),
> > +			  dst.bpp);
> > +
> > +	intel_bb_blt_copy(ibb,
> > +			  &dst, 0, 0, dst.surface[0].stride,
> > +			  &final, 0, 0, final.surface[0].stride,
> > +			  intel_buf_width(&dst),
> > +			  intel_buf_height(&dst),
> > +			  dst.bpp);
> > +
> > +	igt_assert(intel_bb_sync(ibb) == 0);
> > +	intel_bb_destroy(ibb);
> > +
> > +	if (write_png) {
> > +		snprintf(name, sizeof(name) - 1,
> > +			 "bb_blit_dst_tiling_%d.png", tiling);
> > +		intel_buf_write_to_png(&src, "bb_blit_src_tiling_none.png");
> > +		intel_buf_write_to_png(&dst, name);
> > +		intel_buf_write_to_png(&final, "bb_blit_final_tiling_none.png");
> > +	}
> > +
> > +	/* We'll fail on src <-> final compare so just warn */
> > +	if (tiling == I915_TILING_NONE) {
> > +		if (compare_bufs(&src, &dst, false) > 0)
> > +			igt_warn("none->none blit failed!\n");
> > +	} else {
> > +		if (compare_bufs(&src, &dst, false) == 0)
> > +			igt_warn("none->tiled blit failed!\n");
> > +	}
> > +
> > +	fails = compare_bufs(&src, &final, true);
> > +
> > +	intel_buf_close(bops, &src);
> > +	intel_buf_close(bops, &dst);
> > +	intel_buf_close(bops, &final);
> > +
> > +	return fails;
> > +}
> > +
> > +/**
> > + * SUBTEST: intel-bb-blit-%s
> > + * Description: Run a simple xe blit with %arg[1] tiling
> > + * Run type: BAT
> > + *
> > + * arg[1]:
> > + *
> > + * @none:				none
> > + * @x:					x
> > + * @y:					y
> > + */
> > +static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling)
> > +{
> > +	int i, fails = 0, xe = buf_ops_get_fd(bops);
> > +
> > +	/* We'll fix it for gen2/3 later. */
> > +	igt_require(intel_gen(intel_get_drm_devid(xe)) > 3);
> > +
> > +	for (i = 0; i < loops; i++)
> > +		fails += __do_intel_bb_blit(bops, tiling);
> > +
> > +	igt_assert_f(fails == 0, "intel-bb-blit (tiling: %d) fails: %d\n",
> > +		     tiling, fails);
> > +}
> > +
> > +/**
> > + * SUBTEST: offset-control
> > + * Description: check offsets are preserved by the default simple allocator
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void offset_control(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	struct intel_buf *src, *dst1, *dst2, *dst3;
> > +	uint64_t poff_src, poff_dst1, poff_dst2;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
> > +	dst1 = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
> > +	dst2 = create_buf(bops, WIDTH, HEIGHT, COLOR_77);
> > +
> > +	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
> > +			    src->addr.offset, 0, false);
> > +	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
> > +			    dst1->addr.offset, 0, true);
> > +	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
> > +			    dst2->addr.offset, 0, true);
> > +
> > +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> > +	intel_bb_ptr_align(ibb, 8);
> > +
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
> > +
> > +	if (buf_info) {
> > +		print_buf(src, "src ");
> > +		print_buf(dst1, "dst1");
> > +		print_buf(dst2, "dst2");
> > +	}
> > +
> > +	poff_src = src->addr.offset;
> > +	poff_dst1 = dst1->addr.offset;
> > +	poff_dst2 = dst2->addr.offset;
> > +	intel_bb_reset(ibb, true);
> > +
> > +	dst3 = create_buf(bops, WIDTH, HEIGHT, COLOR_33);
> > +	intel_bb_add_object(ibb, dst3->handle, intel_buf_bo_size(dst3),
> > +			    dst3->addr.offset, 0, true);
> > +	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
> > +			    src->addr.offset, 0, false);
> > +	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
> > +			    dst1->addr.offset, 0, true);
> > +	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
> > +			    dst2->addr.offset, 0, true);
> > +
> > +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> > +	intel_bb_ptr_align(ibb, 8);
> > +
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
> > +	intel_bb_sync(ibb);
> > +	intel_bb_reset(ibb, true);
> > +
> > +	igt_assert(poff_src == src->addr.offset);
> > +	igt_assert(poff_dst1 == dst1->addr.offset);
> > +	igt_assert(poff_dst2 == dst2->addr.offset);
> > +
> > +	if (buf_info) {
> > +		print_buf(src, "src ");
> > +		print_buf(dst1, "dst1");
> > +		print_buf(dst2, "dst2");
> > +	}
> > +
> > +	intel_buf_destroy(src);
> > +	intel_buf_destroy(dst1);
> > +	intel_buf_destroy(dst2);
> > +	intel_buf_destroy(dst3);
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/*
> > + * The idea of this test is to verify that delta is properly added to
> > + * the address when emit_reloc() is called.
> > + */
> > +
> > +/**
> > + * SUBTEST: delta-check
> > + * Description: check delta is honoured in intel-bb pipelines
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +#define DELTA_BUFFERS 3
> > +static void delta_check(struct buf_ops *bops)
> > +{
> > +	const uint32_t expected = 0x1234abcd;
> > +	int xe = buf_ops_get_fd(bops);
> > +	uint32_t *ptr, hi, lo, val;
> > +	struct intel_buf *buf;
> > +	struct intel_bb *ibb;
> > +	uint64_t offset;
> > +	uint64_t obj_size = xe_get_default_alignment(xe) + 0x2000;
> > +	uint64_t obj_offset = (1ULL << 32) - xe_get_default_alignment(xe);
> > +	uint64_t delta = xe_get_default_alignment(xe) + 0x1000;
> > +
> > +	ibb = intel_bb_create_with_allocator(xe, 0, NULL, PAGE_SIZE,
> > +					     INTEL_ALLOCATOR_SIMPLE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	buf = create_buf(bops, obj_size, 0x1, COLOR_CC);
> > +	buf->addr.offset = obj_offset;
> > +	intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
> > +			    buf->addr.offset, 0, false);
> > +
> > +	intel_bb_out(ibb, MI_STORE_DWORD_IMM_GEN4);
> > +	intel_bb_emit_reloc(ibb, buf->handle,
> > +			    I915_GEM_DOMAIN_RENDER,
> > +			    I915_GEM_DOMAIN_RENDER,
> > +			    delta, buf->addr.offset);
> > +	intel_bb_out(ibb, expected);
> > +
> > +	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
> > +	intel_bb_ptr_align(ibb, 8);
> > +
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb), I915_EXEC_DEFAULT, false);
> > +	intel_bb_sync(ibb);
> > +
> > +	/* Buffer should be @ obj_offset */
> > +	offset = intel_bb_get_object_offset(ibb, buf->handle);
> > +	igt_assert_eq_u64(offset, obj_offset);
> > +
> > +	ptr = xe_bo_map(xe, ibb->handle, ibb->size);
> > +	lo = ptr[1];
> > +	hi = ptr[2];
> > +	gem_munmap(ptr, ibb->size);
> > +
> > +	ptr = xe_bo_map(xe, buf->handle, intel_buf_size(buf));
> > +	val = ptr[delta / sizeof(uint32_t)];
> > +	gem_munmap(ptr, intel_buf_size(buf));
> > +
> > +	intel_buf_destroy(buf);
> > +	intel_bb_destroy(ibb);
> > +
> > +	/* Assert after all resources are freed */
> > +	igt_assert_f(lo == 0x1000 && hi == 0x1,
> > +		     "intel-bb doesn't properly handle delta in emit relocation\n");
> > +	igt_assert_f(val == expected,
> > +		     "Address doesn't contain expected [%x] value [%x]\n",
> > +		     expected, val);
> > +}
> > +
> > +/**
> > + * SUBTEST: full-batch
> > + * Description: check a completely filled bb executes correctly
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static void full_batch(struct buf_ops *bops)
> > +{
> > +	int xe = buf_ops_get_fd(bops);
> > +	struct intel_bb *ibb;
> > +	int i;
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	for (i = 0; i < PAGE_SIZE / sizeof(uint32_t) - 1; i++)
> > +		intel_bb_out(ibb, 0);
> > +	intel_bb_emit_bbe(ibb);
> > +
> > +	igt_assert(intel_bb_offset(ibb) == PAGE_SIZE);
> > +	intel_bb_exec(ibb, intel_bb_offset(ibb),
> > +		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
> > +	intel_bb_reset(ibb, false);
> > +
> > +	intel_bb_destroy(ibb);
> > +}
> > +
> > +/**
> > + * SUBTEST: render
> > + * Description: check intel-bb render pipeline
> > + * Run type: FULL
> > + * TODO: change ``'Run type' == FULL`` to a better category
> > + */
> > +static int render(struct buf_ops *bops, uint32_t tiling,
> > +		  uint32_t width, uint32_t height)
> > +{
> > +	struct intel_bb *ibb;
> > +	struct intel_buf src, dst, final;
> > +	int xe = buf_ops_get_fd(bops);
> > +	uint32_t fails = 0;
> > +	char name[128];
> > +	uint32_t devid = intel_get_drm_devid(xe);
> > +	igt_render_copyfunc_t render_copy = NULL;
> > +
> > +	igt_debug("%s() gen: %d\n", __func__, intel_gen(devid));
> > +
> > +	ibb = intel_bb_create(xe, PAGE_SIZE);
> > +
> > +	if (debug_bb)
> > +		intel_bb_set_debug(ibb, true);
> > +
> > +	if (print_base64)
> > +		intel_bb_set_dump_base64(ibb, true);
> > +
> > +	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
> > +			 I915_COMPRESSION_NONE);
> > +	scratch_buf_init(bops, &dst, width, height, tiling,
> > +			 I915_COMPRESSION_NONE);
> > +	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
> > +			 I915_COMPRESSION_NONE);
> > +
> > +	scratch_buf_draw_pattern(bops, &src,
> > +				 0, 0, width, height,
> > +				 0, 0, width, height, 0);
> > +
> > +	render_copy = igt_get_render_copyfunc(devid);
> > +	igt_assert(render_copy);
> > +
> > +	render_copy(ibb,
> > +		    &src,
> > +		    0, 0, width, height,
> > +		    &dst,
> > +		    0, 0);
> > +
> > +	render_copy(ibb,
> > +		    &dst,
> > +		    0, 0, width, height,
> > +		    &final,
> > +		    0, 0);
> > +
> > +	intel_bb_sync(ibb);
> > +	intel_bb_destroy(ibb);
> > +
> > +	if (write_png) {
> > +		snprintf(name, sizeof(name) - 1,
> > +			 "render_dst_tiling_%d.png", tiling);
> > +		intel_buf_write_to_png(&src, "render_src_tiling_none.png");
> > +		intel_buf_write_to_png(&dst, name);
> > +		intel_buf_write_to_png(&final, "render_final_tiling_none.png");
> > +	}
> > +
> > +	/* We'll fail on src <-> final compare so just warn */
> > +	if (tiling == I915_TILING_NONE) {
> > +		if (compare_bufs(&src, &dst, false) > 0)
> > +			igt_warn("%s: none->none failed!\n", __func__);
> > +	} else {
> > +		if (compare_bufs(&src, &dst, false) == 0)
> > +			igt_warn("%s: none->tiled failed!\n", __func__);
> > +	}
> > +
> > +	fails = compare_bufs(&src, &final, true);
> > +
> > +	if (fails && print_base64) {
> > +		dump_base64("src", &src);
> > +		dump_base64("dst", &dst);
> > +		dump_base64("final", &final);
> > +	}
> > +
> > +	intel_buf_close(bops, &src);
> > +	intel_buf_close(bops, &dst);
> > +	intel_buf_close(bops, &final);
> > +
> > +	igt_assert_f(fails == 0, "%s: (tiling: %d) fails: %d\n",
> > +		     __func__, tiling, fails);
> > +
> > +	return fails;
> > +}
> > +
> > +static int opt_handler(int opt, int opt_index, void *data)
> > +{
> > +	switch (opt) {
> > +	case 'd':
> > +		debug_bb = true;
> > +		break;
> > +	case 'p':
> > +		write_png = true;
> > +		break;
> > +	case 'i':
> > +		buf_info = true;
> > +		break;
> > +	case 'b':
> > +		print_base64 = true;
> > +		break;
> > +	default:
> > +		return IGT_OPT_HANDLER_ERROR;
> > +	}
> > +
> > +	return IGT_OPT_HANDLER_SUCCESS;
> > +}
> > +
> > +const char *help_str =
> > +	"  -d\tDebug bb\n"
> > +	"  -p\tWrite surfaces to png\n"
> > +	"  -i\tPrint buffer info\n"
> > +	"  -b\tDump to base64 (bb and images)\n"
> > +	;
> > +
> > +igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
> > +{
> > +	int xe, i;
> > +	struct buf_ops *bops;
> > +	uint32_t width;
> > +
> > +	struct test {
> > +		uint32_t tiling;
> > +		const char *tiling_name;
> > +	} tests[] = {
> > +		{ I915_TILING_NONE, "none" },
> > +		{ I915_TILING_X, "x" },
> > +		{ I915_TILING_Y, "y" },
> > +	};
> > +
> > +	igt_fixture {
> > +		xe = drm_open_driver(DRIVER_XE);
> > +		bops = buf_ops_create(xe);
> > +		xe_device_get(xe);
> > +	}
> > +
> > +	igt_describe("Ensure reset is possible on fresh bb");
> > +	igt_subtest("reset-bb")
> > +		reset_bb(bops);
> > +
> > +	igt_subtest("purge-bb")
> > +		purge_bb(bops);
> > +
> > +	igt_subtest("simple-bb")
> > +		simple_bb(bops, false);
> > +
> > +	igt_subtest("simple-bb-ctx")
> > +		simple_bb(bops, true);
> > +
> > +	igt_subtest("bb-with-allocator")
> > +		bb_with_allocator(bops);
> > +
> > +	igt_subtest("lot-of-buffers")
> > +		lot_of_buffers(bops);
> > +
> > +	igt_subtest("add-remove-objects")
> > +		add_remove_objects(bops);
> > +
> > +	igt_subtest("destroy-bb")
> > +		destroy_bb(bops);
> > +
> > +	igt_subtest_with_dynamic("create-in-region") {
> > +		uint64_t memreg = all_memory_regions(xe), region;
> > +
> > +		xe_for_each_mem_region(xe, memreg, region)
> > +			igt_dynamic_f("region-%s", xe_region_name(region))
> > +				create_in_region(bops, region);
> > +	}
> > +
> > +	igt_subtest("blit-simple")
> > +		blit(bops, INTEL_ALLOCATOR_SIMPLE);
> > +
> > +	igt_subtest("blit-reloc")
> > +		blit(bops, INTEL_ALLOCATOR_RELOC);
> > +
> > +	igt_subtest("intel-bb-blit-none")
> > +		do_intel_bb_blit(bops, 3, I915_TILING_NONE);
> > +
> > +	igt_subtest("intel-bb-blit-x")
> > +		do_intel_bb_blit(bops, 3, I915_TILING_X);
> > +
> > +	igt_subtest("intel-bb-blit-y") {
> > +		igt_require(intel_gen(intel_get_drm_devid(xe)) >= 6);
> > +		do_intel_bb_blit(bops, 3, I915_TILING_Y);
> > +	}
> > +
> > +	igt_subtest("offset-control")
> > +		offset_control(bops);
> > +
> > +	igt_subtest("delta-check")
> > +		delta_check(bops);
> > +
> > +	igt_subtest("full-batch")
> > +		full_batch(bops);
> > +
> > +	igt_subtest_with_dynamic("render") {
> > +		for (i = 0; i < ARRAY_SIZE(tests); i++) {
> > +			const struct test *t = &tests[i];
> > +
> > +			for (width = 512; width <= 1024; width += 512)
> > +				igt_dynamic_f("render-%s-%u", t->tiling_name, width)
> > +					render(bops, t->tiling, width, width);
> > +		}
> > +	}
> > +
> > +	igt_fixture {
> > +		xe_device_put(xe);
> > +		buf_ops_destroy(bops);
> > +		close(xe);
> > +	}
> > +}
> > -- 
> > 2.34.1
> > 


* Re: [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb Zbigniew Kempczyński
  2023-04-28  7:53   ` Kamil Konieczny
@ 2023-04-28  8:40   ` Manszewski, Christoph
  2023-04-28  9:20     ` Zbigniew Kempczyński
  1 sibling, 1 reply; 34+ messages in thread
From: Manszewski, Christoph @ 2023-04-28  8:40 UTC (permalink / raw)
  To: Zbigniew Kempczyński, igt-dev

Hi Zbigniew,

On 28.04.2023 08:22, Zbigniew Kempczyński wrote:
> The intention behind creating intel-bb was to replace libdrm for i915.
> Because a lot of code relies on it (kms for example), the most rational
> way is to extend it and add an Xe path.
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
> ---
>   lib/intel_batchbuffer.c | 336 ++++++++++++++++++++++++++++++++--------
>   lib/intel_batchbuffer.h |   6 +
>   2 files changed, 281 insertions(+), 61 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 306b7650e9..38ad792e55 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -28,18 +28,22 @@
>   #include <search.h>
>   #include <glib.h>
>   
> +#include "gpgpu_fill.h"
> +#include "huc_copy.h"
>   #include "i915/gem_create.h"
> +#include "i915/gem_mman.h"
> +#include "i915/i915_blt.h"
> +#include "igt_aux.h"
> +#include "igt_syncobj.h"
>   #include "intel_batchbuffer.h"
>   #include "intel_bufops.h"
>   #include "intel_chipset.h"
>   #include "media_fill.h"
>   #include "media_spin.h"
> -#include "i915/gem_mman.h"
> -#include "veboxcopy.h"
>   #include "sw_sync.h"
> -#include "gpgpu_fill.h"
> -#include "huc_copy.h"
> -#include "i915/i915_blt.h"
> +#include "veboxcopy.h"
> +#include "xe/xe_ioctl.h"
> +#include "xe/xe_query.h"
>   
>   #define BCS_SWCTRL 0x22200
>   #define BCS_SRC_Y (1 << 0)
> @@ -828,9 +832,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>   
>   /**
>    * __intel_bb_create:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>    * @ctx: context id
> - * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> + * @cfg: for i915 intel_ctx configuration, NULL for default context or legacy mode,
> + *       unused for xe
>    * @size: size of the batchbuffer
>    * @do_relocs: use relocations or allocator
>    * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
> @@ -842,7 +847,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>    * Before entering into each scenarios generic rule is intel-bb keeps objects
>    * and their offsets in the internal cache and reuses in subsequent execs.
>    *
> - * 1. intel-bb with relocations
> + * 1. intel-bb with relocations (i915 only)
>    *
>    * Creating new intel-bb adds handle to cache implicitly and sets its address
>    * to 0. Objects added to intel-bb later also have address 0 set for first run.
> @@ -850,11 +855,12 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>    * works in reloc mode addresses are only suggestion to the driver and we
>    * cannot be sure they won't change at next exec.
>    *
> - * 2. with allocator
> + * 2. with allocator (i915 or xe)
>    *
>    * This mode is valid only for ppgtt. Addresses are acquired from allocator
> - * and softpinned. intel-bb cache must be then coherent with allocator
> - * (simple is coherent, reloc partially [doesn't support address reservation]).
> + * and softpinned (i915) or vm-binded (xe). intel-bb cache must be then
> + * coherent with allocator (simple is coherent, reloc partially [doesn't
> + * support address reservation]).
>    * When we do intel-bb reset with purging cache it has to reacquire addresses
>    * from allocator (allocator should return same address - what is true for
>    * simple and reloc allocators).
> @@ -883,48 +889,75 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>   
>   	igt_assert(ibb);
>   
> -	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
>   	ibb->devid = intel_get_drm_devid(fd);
>   	ibb->gen = intel_gen(ibb->devid);
> +	ibb->ctx = ctx;
> +
> +	ibb->fd = fd;
> +	ibb->driver = is_i915_device(fd) ? INTEL_DRIVER_I915 :
> +					   is_xe_device(fd) ? INTEL_DRIVER_XE : 0;
> +	igt_assert(ibb->driver);
>   
>   	/*
>   	 * If we don't have full ppgtt driver can change our addresses
>   	 * so allocator is useless in this case. Just enforce relocations
>   	 * for such gens and don't use allocator at all.
>   	 */
> -	if (!ibb->uses_full_ppgtt)
> -		do_relocs = true;
> +	if (ibb->driver == INTEL_DRIVER_I915) {
> +		ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
> +		ibb->alignment = gem_detect_safe_alignment(fd);
> +		ibb->gtt_size = gem_aperture_size(fd);
> +		ibb->handle = gem_create(fd, size);
>   
> -	/*
> -	 * For softpin mode allocator has full control over offsets allocation
> -	 * so we want kernel to not interfere with this.
> -	 */
> -	if (do_relocs)
> -		ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
> +		if (!ibb->uses_full_ppgtt)
> +			do_relocs = true;
> +
> +		/*
> +		 * For softpin mode allocator has full control over offsets allocation
> +		 * so we want kernel to not interfere with this.
> +		 */
> +		if (do_relocs) {
> +			ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
> +			allocator_type = INTEL_ALLOCATOR_NONE;
> +		} else {
> +			/* Use safe start offset instead assuming 0x0 is safe */
> +			start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
> +
> +			/* if relocs are set we won't use an allocator */
> +			ibb->allocator_handle =
> +				intel_allocator_open_full(fd, ctx, start, end,
> +							  allocator_type,
> +							  strategy, 0);
> +		}
>   
> -	/* Use safe start offset instead assuming 0x0 is safe */
> -	start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
> +		ibb->vm_id = 0;
> +	} else {
> +		igt_assert(!do_relocs);
> +
> +		ibb->alignment = xe_get_default_alignment(fd);
> +		size = ALIGN(size, ibb->alignment);
> +		ibb->handle = xe_bo_create_flags(fd, 0, size, vram_if_possible(fd, 0));
> +		ibb->gtt_size = 1ull << xe_va_bits(fd);
> +
> +		if (!ctx)
> +			ctx = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> +
> +		ibb->uses_full_ppgtt = true;
> +		ibb->allocator_handle =
> +			intel_allocator_open_full(fd, ctx, start, end,
> +						  allocator_type, strategy,
> +						  ibb->alignment);
> +		ibb->vm_id = ctx;
> +		ibb->last_engine = ~0U;
> +	}
>   
> -	/* if relocs are set we won't use an allocator */
> -	if (do_relocs)
> -		allocator_type = INTEL_ALLOCATOR_NONE;
> -	else
> -		ibb->allocator_handle = intel_allocator_open_full(fd, ctx,
> -								  start, end,
> -								  allocator_type,
> -								  strategy, 0);
>   	ibb->allocator_type = allocator_type;
>   	ibb->allocator_strategy = strategy;
>   	ibb->allocator_start = start;
>   	ibb->allocator_end = end;
> -
> -	ibb->fd = fd;
>   	ibb->enforce_relocs = do_relocs;
> -	ibb->handle = gem_create(fd, size);
> +
>   	ibb->size = size;
> -	ibb->alignment = gem_detect_safe_alignment(fd);
> -	ibb->ctx = ctx;
> -	ibb->vm_id = 0;
>   	ibb->batch = calloc(1, size);
>   	igt_assert(ibb->batch);
>   	ibb->ptr = ibb->batch;
> @@ -937,7 +970,6 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>   		memcpy(ibb->cfg, cfg, sizeof(*cfg));
>   	}
>   
> -	ibb->gtt_size = gem_aperture_size(fd);
>   	if ((ibb->gtt_size - 1) >> 32)
>   		ibb->supports_48b_address = true;
>   
> @@ -961,7 +993,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>   
>   /**
>    * intel_bb_create_full:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>    * @ctx: context
>    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>    * @size: size of the batchbuffer
> @@ -992,7 +1024,7 @@ struct intel_bb *intel_bb_create_full(int fd, uint32_t ctx,
>   
>   /**
>    * intel_bb_create_with_allocator:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>    * @ctx: context
>    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>    * @size: size of the batchbuffer
> @@ -1027,7 +1059,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
>   
>   /**
>    * intel_bb_create:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>    * @size: size of the batchbuffer
>    *
>    * Creates bb with default context.
> @@ -1047,7 +1079,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
>    */
>   struct intel_bb *intel_bb_create(int fd, uint32_t size)
>   {
> -	bool relocs = gem_has_relocations(fd);
> +	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
>   
>   	return __intel_bb_create(fd, 0, NULL, size,
>   				 relocs && !aux_needs_softpin(fd), 0, 0,
> @@ -1057,7 +1089,7 @@ struct intel_bb *intel_bb_create(int fd, uint32_t size)
>   
>   /**
>    * intel_bb_create_with_context:
> - * @fd: drm fd
> + * @fd: drm fd - i915 or xe
>    * @ctx: context id
>    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>    * @size: size of the batchbuffer
> @@ -1073,7 +1105,7 @@ struct intel_bb *
>   intel_bb_create_with_context(int fd, uint32_t ctx,
>   			     const intel_ctx_cfg_t *cfg, uint32_t size)
>   {
> -	bool relocs = gem_has_relocations(fd);
> +	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
>   
>   	return __intel_bb_create(fd, ctx, cfg, size,
>   				 relocs && !aux_needs_softpin(fd), 0, 0,
> @@ -1083,7 +1115,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
>   
>   /**
>    * intel_bb_create_with_relocs:
> - * @fd: drm fd
> + * @fd: drm fd - i915
>    * @size: size of the batchbuffer
>    *
>    * Creates bb which will disable passing addresses.
> @@ -1095,7 +1127,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
>    */
>   struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
>   {
> -	igt_require(gem_has_relocations(fd));
> +	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
>   
>   	return __intel_bb_create(fd, 0, NULL, size, true, 0, 0,
>   				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
> @@ -1103,7 +1135,7 @@ struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
>   
>   /**
>    * intel_bb_create_with_relocs_and_context:
> - * @fd: drm fd
> + * @fd: drm fd - i915
>    * @ctx: context
>    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
>    * @size: size of the batchbuffer
> @@ -1120,7 +1152,7 @@ intel_bb_create_with_relocs_and_context(int fd, uint32_t ctx,
>   					const intel_ctx_cfg_t *cfg,
>   					uint32_t size)
>   {
> -	igt_require(gem_has_relocations(fd));
> +	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
>   
>   	return __intel_bb_create(fd, ctx, cfg, size, true, 0, 0,
>   				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
> @@ -1221,12 +1253,76 @@ void intel_bb_destroy(struct intel_bb *ibb)
>   
>   	if (ibb->fence >= 0)
>   		close(ibb->fence);
> +	if (ibb->engine_syncobj)
> +		syncobj_destroy(ibb->fd, ibb->engine_syncobj);
> +	if (ibb->vm_id && !ibb->ctx)
> +		xe_vm_destroy(ibb->fd, ibb->vm_id);
>   
>   	free(ibb->batch);
>   	free(ibb->cfg);
>   	free(ibb);
>   }
>   
> +static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
> +						   uint32_t op, uint32_t region)
> +{
> +	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
> +	struct drm_xe_vm_bind_op *bind_ops, *ops;
> +	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
> +
> +	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
> +	igt_assert(bind_ops);
> +
> +	igt_debug("bind_ops: %s\n", set_obj ? "MAP" : "UNMAP");
> +	for (int i = 0; i < ibb->num_objects; i++) {
> +		ops = &bind_ops[i];
> +
> +		if (set_obj)
> +			ops->obj = objects[i]->handle;
> +
> +		ops->op = op;
> +		ops->obj_offset = 0;
> +		ops->addr = objects[i]->offset;
> +		ops->range = objects[i]->rsvd1;
> +		ops->region = region;
> +
> +		igt_debug("  [%d]: handle: %u, offset: %llx, size: %llx\n",
> +			  i, ops->obj, (long long)ops->addr, (long long)ops->range);
> +	}
> +
> +	return bind_ops;
> +}
> +
> +static void __unbind_xe_objects(struct intel_bb *ibb)
> +{
> +	struct drm_xe_sync syncs[2] = {
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ },
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +	};
> +	int ret;
> +
> +	syncs[0].handle = ibb->engine_syncobj;
> +	syncs[1].handle = syncobj_create(ibb->fd, 0);
> +
> +	if (ibb->num_objects > 1) {
> +		struct drm_xe_vm_bind_op *bind_ops;
> +		uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
> +
> +		bind_ops = xe_alloc_bind_ops(ibb, op, 0);
> +		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> +				 ibb->num_objects, syncs, 2);
> +		free(bind_ops);
> +	} else {
> +		xe_vm_unbind_async(ibb->fd, ibb->vm_id, 0, 0,
> +				   ibb->batch_offset, ibb->size, syncs, 2);
> +	}
> +	ret = syncobj_wait_err(ibb->fd, &syncs[1].handle, 1, INT64_MAX, 0);
> +	igt_assert_eq(ret, 0);
> +	syncobj_destroy(ibb->fd, syncs[1].handle);
> +
> +	ibb->xe_bound = false;
> +}
> +
>   /*
>    * intel_bb_reset:
>    * @ibb: pointer to intel_bb
> @@ -1258,6 +1354,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>   	for (i = 0; i < ibb->num_objects; i++)
>   		ibb->objects[i]->flags &= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
>   
> +	if (is_xe_device(ibb->fd) && ibb->xe_bound)

Maybe: 'ibb->driver == INTEL_DRIVER_XE'. Sorry for noticing just now.

Anyway, I uphold:
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

Christoph


> +		__unbind_xe_objects(ibb);
> +
>   	__intel_bb_destroy_relocations(ibb);
>   	__intel_bb_destroy_objects(ibb);
>   	__reallocate_objects(ibb);
> @@ -1278,7 +1377,11 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>   				       ibb->size);
>   
>   	gem_close(ibb->fd, ibb->handle);
> -	ibb->handle = gem_create(ibb->fd, ibb->size);
> +	if (ibb->driver == INTEL_DRIVER_I915)
> +		ibb->handle = gem_create(ibb->fd, ibb->size);
> +	else
> +		ibb->handle = xe_bo_create_flags(ibb->fd, 0, ibb->size,
> +						 vram_if_possible(ibb->fd, 0));
>   
>   	/* Reacquire offset for RELOC and SIMPLE */
>   	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> @@ -1305,13 +1408,19 @@ int intel_bb_sync(struct intel_bb *ibb)
>   {
>   	int ret;
>   
> -	if (ibb->fence < 0)
> +	if (ibb->fence < 0 && !ibb->engine_syncobj)
>   		return 0;
>   
> -	ret = sync_fence_wait(ibb->fence, -1);
> -	if (ret == 0) {
> -		close(ibb->fence);
> -		ibb->fence = -1;
> +	if (ibb->fence >= 0) {
> +		ret = sync_fence_wait(ibb->fence, -1);
> +		if (ret == 0) {
> +			close(ibb->fence);
> +			ibb->fence = -1;
> +		}
> +	} else {
> +		igt_assert_neq(ibb->engine_syncobj, 0);
> +		ret = syncobj_wait_err(ibb->fd, &ibb->engine_syncobj,
> +				       1, INT64_MAX, 0);
>   	}
>   
>   	return ret;
> @@ -1502,7 +1611,7 @@ static void __remove_from_objects(struct intel_bb *ibb,
>   }
>   
>   /**
> - * intel_bb_add_object:
> + * __intel_bb_add_object:
>    * @ibb: pointer to intel_bb
>    * @handle: which handle to add to objects array
>    * @size: object size
> @@ -1514,9 +1623,9 @@ static void __remove_from_objects(struct intel_bb *ibb,
>    * in the object tree. When object is a render target it has to
>    * be marked with EXEC_OBJECT_WRITE flag.
>    */
> -struct drm_i915_gem_exec_object2 *
> -intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> -		    uint64_t offset, uint64_t alignment, bool write)
> +static struct drm_i915_gem_exec_object2 *
> +__intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> +		      uint64_t offset, uint64_t alignment, bool write)
>   {
>   	struct drm_i915_gem_exec_object2 *object;
>   
> @@ -1524,8 +1633,12 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
>   		   || ALIGN(offset, alignment) == offset);
>   	igt_assert(is_power_of_two(alignment));
>   
> +	if (ibb->driver == INTEL_DRIVER_I915)
> +		alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
> +	else
> +		alignment = max_t(uint64_t, ibb->alignment, alignment);
> +
>   	object = __add_to_cache(ibb, handle);
> -	alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
>   	__add_to_objects(ibb, object);
>   
>   	/*
> @@ -1585,9 +1698,27 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
>   	if (ibb->allows_obj_alignment)
>   		object->alignment = alignment;
>   
> +	if (ibb->driver == INTEL_DRIVER_XE) {
> +		object->alignment = alignment;
> +		object->rsvd1 = size;
> +	}
> +
>   	return object;
>   }
>   
> +struct drm_i915_gem_exec_object2 *
> +intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> +		    uint64_t offset, uint64_t alignment, bool write)
> +{
> +	struct drm_i915_gem_exec_object2 *obj = NULL;
> +
> +	obj = __intel_bb_add_object(ibb, handle, size, offset,
> +				    alignment, write);
> +	igt_assert(obj);
> +
> +	return obj;
> +}
> +
>   bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
>   			    uint64_t offset, uint64_t size)
>   {
> @@ -2136,6 +2267,82 @@ static void update_offsets(struct intel_bb *ibb,
>   }
>   
>   #define LINELEN 76
> +
> +static int
> +__xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
> +{
> +	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
> +	uint32_t engine_id;
> +	struct drm_xe_sync syncs[2] = {
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +	};
> +	struct drm_xe_vm_bind_op *bind_ops;
> +	void *map;
> +
> +	igt_assert_eq(ibb->num_relocs, 0);
> +	igt_assert_eq(ibb->xe_bound, false);
> +
> +	if (ibb->last_engine != engine) {
> +		struct drm_xe_engine_class_instance inst = { };
> +
> +		inst.engine_instance =
> +			(flags & I915_EXEC_BSD_MASK) >> I915_EXEC_BSD_SHIFT;
> +
> +		switch (flags & I915_EXEC_RING_MASK) {
> +		case I915_EXEC_DEFAULT:
> +		case I915_EXEC_BLT:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_COPY;
> +			break;
> +		case I915_EXEC_BSD:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_DECODE;
> +			break;
> +		case I915_EXEC_RENDER:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_RENDER;
> +			break;
> +		case I915_EXEC_VEBOX:
> +			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE;
> +			break;
> +		default:
> > +			igt_assert_f(false, "Unknown engine: %x\n", (uint32_t) flags);
> +		}
> +		igt_debug("Run on %s\n", xe_engine_class_string(inst.engine_class));
> +
> +		ibb->engine_id = engine_id =
> +			xe_engine_create(ibb->fd, ibb->vm_id, &inst, 0);
> +	} else {
> +		engine_id = ibb->engine_id;
> +	}
> +	ibb->last_engine = engine;
> +
> +	map = xe_bo_map(ibb->fd, ibb->handle, ibb->size);
> +	memcpy(map, ibb->batch, ibb->size);
> +	gem_munmap(map, ibb->size);
> +
> +	syncs[0].handle = syncobj_create(ibb->fd, 0);
> +	if (ibb->num_objects > 1) {
> +		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
> +		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> +				 ibb->num_objects, syncs, 1);
> +		free(bind_ops);
> +	} else {
> +		xe_vm_bind_async(ibb->fd, ibb->vm_id, 0, ibb->handle, 0,
> +				 ibb->batch_offset, ibb->size, syncs, 1);
> +	}
> +	ibb->xe_bound = true;
> +
> +	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
> +	syncs[1].handle = ibb->engine_syncobj;
> +
> +	xe_exec_sync(ibb->fd, engine_id, ibb->batch_offset, syncs, 2);
> +
> +	if (sync)
> +		intel_bb_sync(ibb);
> +
> +	return 0;
> +}
> +
>   /*
>    * __intel_bb_exec:
>    * @ibb: pointer to intel_bb
> @@ -2221,7 +2428,7 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
>   /**
>    * intel_bb_exec:
>    * @ibb: pointer to intel_bb
> - * @end_offset: offset of the last instruction in the bb
> + * @end_offset: offset of the last instruction in the bb (for i915)
>    * @flags: flags passed directly to execbuf
>    * @sync: if true wait for execbuf completion, otherwise caller is responsible
>    * to wait for completion
> @@ -2231,7 +2438,13 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
>   void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
>   		   uint64_t flags, bool sync)
>   {
> -	igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
> +	if (ibb->dump_base64)
> +		intel_bb_dump_base64(ibb, LINELEN);
> +
> +	if (ibb->driver == INTEL_DRIVER_I915)
> +		igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
> +	else
> +		igt_assert_eq(__xe_bb_exec(ibb, flags, sync), 0);
>   }
>   
>   /**
> @@ -2636,7 +2849,8 @@ static void __intel_bb_reinit_alloc(struct intel_bb *ibb)
>   							  ibb->allocator_start, ibb->allocator_end,
>   							  ibb->allocator_type,
>   							  ibb->allocator_strategy,
> -							  0);
> +							  ibb->alignment);
> +
>   	intel_bb_reset(ibb, true);
>   }
>   
> diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
> index 4978b6fb29..9a58fb7809 100644
> --- a/lib/intel_batchbuffer.h
> +++ b/lib/intel_batchbuffer.h
> @@ -246,6 +246,7 @@ struct intel_bb {
>   	uint8_t allocator_type;
>   	enum allocator_strategy allocator_strategy;
>   
> +	enum intel_driver driver;
>   	int fd;
>   	unsigned int gen;
>   	bool debug;
> @@ -268,6 +269,11 @@ struct intel_bb {
>   	uint32_t ctx;
>   	uint32_t vm_id;
>   
> +	bool xe_bound;
> +	uint32_t engine_syncobj;
> +	uint32_t engine_id;
> +	uint32_t last_engine;
> +
>   	/* Context configuration */
>   	intel_ctx_cfg_t *cfg;
>   

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path Zbigniew Kempczyński
  2023-04-28  7:50   ` Kamil Konieczny
@ 2023-04-28  8:44   ` Manszewski, Christoph
  1 sibling, 0 replies; 34+ messages in thread
From: Manszewski, Christoph @ 2023-04-28  8:44 UTC (permalink / raw)
  To: Zbigniew Kempczyński, igt-dev

On 28.04.2023 08:22, Zbigniew Kempczyński wrote:
> On the reset path we recreate the bo for the batch (to avoid stalls), so
> we should reacquire the offset too. At the moment the simple allocator
> will return the same offset (so unfortunately we'll stall), but with the
> reloc allocator we'll get a new one (so we avoid the stall).
> 
> I noticed this was missing during the xe_intel_bb test, where the reloc
> path produced an unexpected result (a direct consequence of reusing the
> same offset, which pointed to the old batch, not the new one).
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>

Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>


> ---
>   lib/intel_batchbuffer.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 7dbd6dd582..99b0b61585 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -1280,8 +1280,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
>   	gem_close(ibb->fd, ibb->handle);
>   	ibb->handle = gem_create(ibb->fd, ibb->size);
>   
> -	/* Keep address for bb in reloc mode and RANDOM allocator */
> -	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
> +	/* Reacquire offset for RELOC and SIMPLE */
> +	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> +	    ibb->allocator_type == INTEL_ALLOCATOR_RELOC)
>   		ibb->batch_offset = __intel_bb_get_offset(ibb,
>   							  ibb->handle,
>   							  ibb->size,


* Re: [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs
  2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs Zbigniew Kempczyński
  2023-04-28  7:51   ` Kamil Konieczny
@ 2023-04-28  8:51   ` Manszewski, Christoph
  1 sibling, 0 replies; 34+ messages in thread
From: Manszewski, Christoph @ 2023-04-28  8:51 UTC (permalink / raw)
  To: Zbigniew Kempczyński, igt-dev

Hi Zbigniew,

On 28.04.2023 08:22, Zbigniew Kempczyński wrote:
> After the RANDOM pseudo-allocator was removed and the RELOC allocator
> became stateful, the docs stayed intact and still documented the old
> code. Fix this before adding the xe code.

It just came to my mind that since you dedicate a patch to the 
'__intel_bb_create' docs update, you may as well add the missing 
descriptions for the 'start', 'end' and 'strategy' parameters. But that's 
optional, it can wait for another day.

Anyway:
Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

Christoph


> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> ---
>   lib/intel_batchbuffer.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 99b0b61585..306b7650e9 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -836,7 +836,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>    * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
>    *
>    * intel-bb assumes it will work in one of two modes - with relocations or
> - * with using allocator (currently RANDOM and SIMPLE are implemented).
> + * with using allocator (currently RELOC and SIMPLE are implemented).
>    * Some description is required to describe how they maintain the addresses.
>    *
>    * Before entering into each scenarios generic rule is intel-bb keeps objects
> @@ -854,10 +854,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
>    *
>    * This mode is valid only for ppgtt. Addresses are acquired from allocator
>    * and softpinned. intel-bb cache must be then coherent with allocator
> - * (simple is coherent, random is not due to fact we don't keep its state).
> + * (simple is coherent, reloc partially [doesn't support address reservation]).
>    * When we do intel-bb reset with purging cache it has to reacquire addresses
>    * from allocator (allocator should return same address - what is true for
> - * simple allocator and false for random as mentioned before).
> + * simple and reloc allocators).
>    *
>    * If we do reset without purging caches we use addresses from intel-bb cache
>    * during execbuf objects construction.
> @@ -967,7 +967,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
>    * @size: size of the batchbuffer
>    * @start: allocator vm start address
>    * @end: allocator vm start address
> - * @allocator_type: allocator type, SIMPLE, RANDOM, ...
> + * @allocator_type: allocator type, SIMPLE, RELOC, ...
>    * @strategy: allocation strategy
>    *
>    * Creates bb with context passed in @ctx, size in @size and allocator type


* Re: [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb
  2023-04-28  8:40   ` Manszewski, Christoph
@ 2023-04-28  9:20     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28  9:20 UTC (permalink / raw)
  To: Manszewski, Christoph; +Cc: igt-dev

On Fri, Apr 28, 2023 at 10:40:50AM +0200, Manszewski, Christoph wrote:
> Hi Zbigniew,
> 
> On 28.04.2023 08:22, Zbigniew Kempczyński wrote:
> > The intention of creating intel-bb was to replace libdrm for i915.
> > Because a lot of code relies on it (kms for example), the most rational
> > way forward is to extend it and add an Xe path to it.
> > 
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem@intel.com>
> > ---
> >   lib/intel_batchbuffer.c | 336 ++++++++++++++++++++++++++++++++--------
> >   lib/intel_batchbuffer.h |   6 +
> >   2 files changed, 281 insertions(+), 61 deletions(-)
> > 
> > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> > index 306b7650e9..38ad792e55 100644
> > --- a/lib/intel_batchbuffer.c
> > +++ b/lib/intel_batchbuffer.c
> > @@ -28,18 +28,22 @@
> >   #include <search.h>
> >   #include <glib.h>
> > +#include "gpgpu_fill.h"
> > +#include "huc_copy.h"
> >   #include "i915/gem_create.h"
> > +#include "i915/gem_mman.h"
> > +#include "i915/i915_blt.h"
> > +#include "igt_aux.h"
> > +#include "igt_syncobj.h"
> >   #include "intel_batchbuffer.h"
> >   #include "intel_bufops.h"
> >   #include "intel_chipset.h"
> >   #include "media_fill.h"
> >   #include "media_spin.h"
> > -#include "i915/gem_mman.h"
> > -#include "veboxcopy.h"
> >   #include "sw_sync.h"
> > -#include "gpgpu_fill.h"
> > -#include "huc_copy.h"
> > -#include "i915/i915_blt.h"
> > +#include "veboxcopy.h"
> > +#include "xe/xe_ioctl.h"
> > +#include "xe/xe_query.h"
> >   #define BCS_SWCTRL 0x22200
> >   #define BCS_SRC_Y (1 << 0)
> > @@ -828,9 +832,10 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
> >   /**
> >    * __intel_bb_create:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915 or xe
> >    * @ctx: context id
> > - * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> > + * @cfg: for i915 intel_ctx configuration, NULL for default context or legacy mode,
> > + *       unused for xe
> >    * @size: size of the batchbuffer
> >    * @do_relocs: use relocations or allocator
> >    * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
> > @@ -842,7 +847,7 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
> >    * Before entering into each scenarios generic rule is intel-bb keeps objects
> >    * and their offsets in the internal cache and reuses in subsequent execs.
> >    *
> > - * 1. intel-bb with relocations
> > + * 1. intel-bb with relocations (i915 only)
> >    *
> >    * Creating new intel-bb adds handle to cache implicitly and sets its address
> >    * to 0. Objects added to intel-bb later also have address 0 set for first run.
> > @@ -850,11 +855,12 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
> >    * works in reloc mode addresses are only suggestion to the driver and we
> >    * cannot be sure they won't change at next exec.
> >    *
> > - * 2. with allocator
> > + * 2. with allocator (i915 or xe)
> >    *
> >    * This mode is valid only for ppgtt. Addresses are acquired from allocator
> > - * and softpinned. intel-bb cache must be then coherent with allocator
> > - * (simple is coherent, reloc partially [doesn't support address reservation]).
> > + * and softpinned (i915) or vm-binded (xe). intel-bb cache must be then
> > + * coherent with allocator (simple is coherent, reloc partially [doesn't
> > + * support address reservation]).
> >    * When we do intel-bb reset with purging cache it has to reacquire addresses
> >    * from allocator (allocator should return same address - what is true for
> >    * simple and reloc allocators).
> > @@ -883,48 +889,75 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
> >   	igt_assert(ibb);
> > -	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
> >   	ibb->devid = intel_get_drm_devid(fd);
> >   	ibb->gen = intel_gen(ibb->devid);
> > +	ibb->ctx = ctx;
> > +
> > +	ibb->fd = fd;
> > +	ibb->driver = is_i915_device(fd) ? INTEL_DRIVER_I915 :
> > +					   is_xe_device(fd) ? INTEL_DRIVER_XE : 0;
> > +	igt_assert(ibb->driver);
> >   	/*
> >   	 * If we don't have full ppgtt driver can change our addresses
> >   	 * so allocator is useless in this case. Just enforce relocations
> >   	 * for such gens and don't use allocator at all.
> >   	 */
> > -	if (!ibb->uses_full_ppgtt)
> > -		do_relocs = true;
> > +	if (ibb->driver == INTEL_DRIVER_I915) {
> > +		ibb->uses_full_ppgtt = gem_uses_full_ppgtt(fd);
> > +		ibb->alignment = gem_detect_safe_alignment(fd);
> > +		ibb->gtt_size = gem_aperture_size(fd);
> > +		ibb->handle = gem_create(fd, size);
> > -	/*
> > -	 * For softpin mode allocator has full control over offsets allocation
> > -	 * so we want kernel to not interfere with this.
> > -	 */
> > -	if (do_relocs)
> > -		ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
> > +		if (!ibb->uses_full_ppgtt)
> > +			do_relocs = true;
> > +
> > +		/*
> > +		 * For softpin mode allocator has full control over offsets allocation
> > +		 * so we want kernel to not interfere with this.
> > +		 */
> > +		if (do_relocs) {
> > +			ibb->allows_obj_alignment = gem_allows_obj_alignment(fd);
> > +			allocator_type = INTEL_ALLOCATOR_NONE;
> > +		} else {
> > +			/* Use safe start offset instead assuming 0x0 is safe */
> > +			start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
> > +
> > +			/* if relocs are set we won't use an allocator */
> > +			ibb->allocator_handle =
> > +				intel_allocator_open_full(fd, ctx, start, end,
> > +							  allocator_type,
> > +							  strategy, 0);
> > +		}
> > -	/* Use safe start offset instead assuming 0x0 is safe */
> > -	start = max_t(uint64_t, start, gem_detect_safe_start_offset(fd));
> > +		ibb->vm_id = 0;
> > +	} else {
> > +		igt_assert(!do_relocs);
> > +
> > +		ibb->alignment = xe_get_default_alignment(fd);
> > +		size = ALIGN(size, ibb->alignment);
> > +		ibb->handle = xe_bo_create_flags(fd, 0, size, vram_if_possible(fd, 0));
> > +		ibb->gtt_size = 1ull << xe_va_bits(fd);
> > +
> > +		if (!ctx)
> > +			ctx = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > +
> > +		ibb->uses_full_ppgtt = true;
> > +		ibb->allocator_handle =
> > +			intel_allocator_open_full(fd, ctx, start, end,
> > +						  allocator_type, strategy,
> > +						  ibb->alignment);
> > +		ibb->vm_id = ctx;
> > +		ibb->last_engine = ~0U;
> > +	}
> > -	/* if relocs are set we won't use an allocator */
> > -	if (do_relocs)
> > -		allocator_type = INTEL_ALLOCATOR_NONE;
> > -	else
> > -		ibb->allocator_handle = intel_allocator_open_full(fd, ctx,
> > -								  start, end,
> > -								  allocator_type,
> > -								  strategy, 0);
> >   	ibb->allocator_type = allocator_type;
> >   	ibb->allocator_strategy = strategy;
> >   	ibb->allocator_start = start;
> >   	ibb->allocator_end = end;
> > -
> > -	ibb->fd = fd;
> >   	ibb->enforce_relocs = do_relocs;
> > -	ibb->handle = gem_create(fd, size);
> > +
> >   	ibb->size = size;
> > -	ibb->alignment = gem_detect_safe_alignment(fd);
> > -	ibb->ctx = ctx;
> > -	ibb->vm_id = 0;
> >   	ibb->batch = calloc(1, size);
> >   	igt_assert(ibb->batch);
> >   	ibb->ptr = ibb->batch;
> > @@ -937,7 +970,6 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
> >   		memcpy(ibb->cfg, cfg, sizeof(*cfg));
> >   	}
> > -	ibb->gtt_size = gem_aperture_size(fd);
> >   	if ((ibb->gtt_size - 1) >> 32)
> >   		ibb->supports_48b_address = true;
> > @@ -961,7 +993,7 @@ __intel_bb_create(int fd, uint32_t ctx, const intel_ctx_cfg_t *cfg,
> >   /**
> >    * intel_bb_create_full:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915 or xe
> >    * @ctx: context
> >    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> >    * @size: size of the batchbuffer
> > @@ -992,7 +1024,7 @@ struct intel_bb *intel_bb_create_full(int fd, uint32_t ctx,
> >   /**
> >    * intel_bb_create_with_allocator:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915 or xe
> >    * @ctx: context
> >    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> >    * @size: size of the batchbuffer
> > @@ -1027,7 +1059,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
> >   /**
> >    * intel_bb_create:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915 or xe
> >    * @size: size of the batchbuffer
> >    *
> >    * Creates bb with default context.
> > @@ -1047,7 +1079,7 @@ static bool has_ctx_cfg(struct intel_bb *ibb)
> >    */
> >   struct intel_bb *intel_bb_create(int fd, uint32_t size)
> >   {
> > -	bool relocs = gem_has_relocations(fd);
> > +	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
> >   	return __intel_bb_create(fd, 0, NULL, size,
> >   				 relocs && !aux_needs_softpin(fd), 0, 0,
> > @@ -1057,7 +1089,7 @@ struct intel_bb *intel_bb_create(int fd, uint32_t size)
> >   /**
> >    * intel_bb_create_with_context:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915 or xe
> >    * @ctx: context id
> >    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> >    * @size: size of the batchbuffer
> > @@ -1073,7 +1105,7 @@ struct intel_bb *
> >   intel_bb_create_with_context(int fd, uint32_t ctx,
> >   			     const intel_ctx_cfg_t *cfg, uint32_t size)
> >   {
> > -	bool relocs = gem_has_relocations(fd);
> > +	bool relocs = is_i915_device(fd) && gem_has_relocations(fd);
> >   	return __intel_bb_create(fd, ctx, cfg, size,
> >   				 relocs && !aux_needs_softpin(fd), 0, 0,
> > @@ -1083,7 +1115,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
> >   /**
> >    * intel_bb_create_with_relocs:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915
> >    * @size: size of the batchbuffer
> >    *
> >    * Creates bb which will disable passing addresses.
> > @@ -1095,7 +1127,7 @@ intel_bb_create_with_context(int fd, uint32_t ctx,
> >    */
> >   struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
> >   {
> > -	igt_require(gem_has_relocations(fd));
> > +	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
> >   	return __intel_bb_create(fd, 0, NULL, size, true, 0, 0,
> >   				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
> > @@ -1103,7 +1135,7 @@ struct intel_bb *intel_bb_create_with_relocs(int fd, uint32_t size)
> >   /**
> >    * intel_bb_create_with_relocs_and_context:
> > - * @fd: drm fd
> > + * @fd: drm fd - i915
> >    * @ctx: context
> >    * @cfg: intel_ctx configuration, NULL for default context or legacy mode
> >    * @size: size of the batchbuffer
> > @@ -1120,7 +1152,7 @@ intel_bb_create_with_relocs_and_context(int fd, uint32_t ctx,
> >   					const intel_ctx_cfg_t *cfg,
> >   					uint32_t size)
> >   {
> > -	igt_require(gem_has_relocations(fd));
> > +	igt_require(is_i915_device(fd) && gem_has_relocations(fd));
> >   	return __intel_bb_create(fd, ctx, cfg, size, true, 0, 0,
> >   				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
> > @@ -1221,12 +1253,76 @@ void intel_bb_destroy(struct intel_bb *ibb)
> >   	if (ibb->fence >= 0)
> >   		close(ibb->fence);
> > +	if (ibb->engine_syncobj)
> > +		syncobj_destroy(ibb->fd, ibb->engine_syncobj);
> > +	if (ibb->vm_id && !ibb->ctx)
> > +		xe_vm_destroy(ibb->fd, ibb->vm_id);
> >   	free(ibb->batch);
> >   	free(ibb->cfg);
> >   	free(ibb);
> >   }
> > +static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
> > +						   uint32_t op, uint32_t region)
> > +{
> > +	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
> > +	struct drm_xe_vm_bind_op *bind_ops, *ops;
> > +	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
> > +
> > +	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
> > +	igt_assert(bind_ops);
> > +
> > +	igt_debug("bind_ops: %s\n", set_obj ? "MAP" : "UNMAP");
> > +	for (int i = 0; i < ibb->num_objects; i++) {
> > +		ops = &bind_ops[i];
> > +
> > +		if (set_obj)
> > +			ops->obj = objects[i]->handle;
> > +
> > +		ops->op = op;
> > +		ops->obj_offset = 0;
> > +		ops->addr = objects[i]->offset;
> > +		ops->range = objects[i]->rsvd1;
> > +		ops->region = region;
> > +
> > +		igt_debug("  [%d]: handle: %u, offset: %llx, size: %llx\n",
> > +			  i, ops->obj, (long long)ops->addr, (long long)ops->range);
> > +	}
> > +
> > +	return bind_ops;
> > +}
> > +
> > +static void __unbind_xe_objects(struct intel_bb *ibb)
> > +{
> > +	struct drm_xe_sync syncs[2] = {
> > +		{ .flags = DRM_XE_SYNC_SYNCOBJ },
> > +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > +	};
> > +	int ret;
> > +
> > +	syncs[0].handle = ibb->engine_syncobj;
> > +	syncs[1].handle = syncobj_create(ibb->fd, 0);
> > +
> > +	if (ibb->num_objects > 1) {
> > +		struct drm_xe_vm_bind_op *bind_ops;
> > +		uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
> > +
> > +		bind_ops = xe_alloc_bind_ops(ibb, op, 0);
> > +		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> > +				 ibb->num_objects, syncs, 2);
> > +		free(bind_ops);
> > +	} else {
> > +		xe_vm_unbind_async(ibb->fd, ibb->vm_id, 0, 0,
> > +				   ibb->batch_offset, ibb->size, syncs, 2);
> > +	}
> > +	ret = syncobj_wait_err(ibb->fd, &syncs[1].handle, 1, INT64_MAX, 0);
> > +	igt_assert_eq(ret, 0);
> > +	syncobj_destroy(ibb->fd, syncs[1].handle);
> > +
> > +	ibb->xe_bound = false;
> > +}
> > +
> >   /*
> >    * intel_bb_reset:
> >    * @ibb: pointer to intel_bb
> > @@ -1258,6 +1354,9 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
> >   	for (i = 0; i < ibb->num_objects; i++)
> >   		ibb->objects[i]->flags &= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
> > +	if (is_xe_device(ibb->fd) && ibb->xe_bound)
> 
> Maybe: 'ibb->driver == INTEL_DRIVER_XE'. Sorry for noticing just now.
> 
> Anyway, I uphold:
> Reviewed-by: Christoph Manszewski <christoph.manszewski@intel.com>

Yes, that's a trivial change, so I'll dare to keep the r-b. Thanks for
spotting this.

--
Zbigniew

> 
> Christoph
> 
> 
> > +		__unbind_xe_objects(ibb);
> > +
> >   	__intel_bb_destroy_relocations(ibb);
> >   	__intel_bb_destroy_objects(ibb);
> >   	__reallocate_objects(ibb);
> > @@ -1278,7 +1377,11 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
> >   				       ibb->size);
> >   	gem_close(ibb->fd, ibb->handle);
> > -	ibb->handle = gem_create(ibb->fd, ibb->size);
> > +	if (ibb->driver == INTEL_DRIVER_I915)
> > +		ibb->handle = gem_create(ibb->fd, ibb->size);
> > +	else
> > +		ibb->handle = xe_bo_create_flags(ibb->fd, 0, ibb->size,
> > +						 vram_if_possible(ibb->fd, 0));
> >   	/* Reacquire offset for RELOC and SIMPLE */
> >   	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE ||
> > @@ -1305,13 +1408,19 @@ int intel_bb_sync(struct intel_bb *ibb)
> >   {
> >   	int ret;
> > -	if (ibb->fence < 0)
> > +	if (ibb->fence < 0 && !ibb->engine_syncobj)
> >   		return 0;
> > -	ret = sync_fence_wait(ibb->fence, -1);
> > -	if (ret == 0) {
> > -		close(ibb->fence);
> > -		ibb->fence = -1;
> > +	if (ibb->fence >= 0) {
> > +		ret = sync_fence_wait(ibb->fence, -1);
> > +		if (ret == 0) {
> > +			close(ibb->fence);
> > +			ibb->fence = -1;
> > +		}
> > +	} else {
> > +		igt_assert_neq(ibb->engine_syncobj, 0);
> > +		ret = syncobj_wait_err(ibb->fd, &ibb->engine_syncobj,
> > +				       1, INT64_MAX, 0);
> >   	}
> >   	return ret;
> > @@ -1502,7 +1611,7 @@ static void __remove_from_objects(struct intel_bb *ibb,
> >   }
> >   /**
> > - * intel_bb_add_object:
> > + * __intel_bb_add_object:
> >    * @ibb: pointer to intel_bb
> >    * @handle: which handle to add to objects array
> >    * @size: object size
> > @@ -1514,9 +1623,9 @@ static void __remove_from_objects(struct intel_bb *ibb,
> >    * in the object tree. When object is a render target it has to
> >    * be marked with EXEC_OBJECT_WRITE flag.
> >    */
> > -struct drm_i915_gem_exec_object2 *
> > -intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> > -		    uint64_t offset, uint64_t alignment, bool write)
> > +static struct drm_i915_gem_exec_object2 *
> > +__intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> > +		      uint64_t offset, uint64_t alignment, bool write)
> >   {
> >   	struct drm_i915_gem_exec_object2 *object;
> > @@ -1524,8 +1633,12 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> >   		   || ALIGN(offset, alignment) == offset);
> >   	igt_assert(is_power_of_two(alignment));
> > +	if (ibb->driver == INTEL_DRIVER_I915)
> > +		alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
> > +	else
> > +		alignment = max_t(uint64_t, ibb->alignment, alignment);
> > +
> >   	object = __add_to_cache(ibb, handle);
> > -	alignment = max_t(uint64_t, alignment, gem_detect_safe_alignment(ibb->fd));
> >   	__add_to_objects(ibb, object);
> >   	/*
> > @@ -1585,9 +1698,27 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> >   	if (ibb->allows_obj_alignment)
> >   		object->alignment = alignment;
> > +	if (ibb->driver == INTEL_DRIVER_XE) {
> > +		object->alignment = alignment;
> > +		object->rsvd1 = size;
> > +	}
> > +
> >   	return object;
> >   }
> > +struct drm_i915_gem_exec_object2 *
> > +intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
> > +		    uint64_t offset, uint64_t alignment, bool write)
> > +{
> > +	struct drm_i915_gem_exec_object2 *obj = NULL;
> > +
> > +	obj = __intel_bb_add_object(ibb, handle, size, offset,
> > +				    alignment, write);
> > +	igt_assert(obj);
> > +
> > +	return obj;
> > +}
> > +
> >   bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
> >   			    uint64_t offset, uint64_t size)
> >   {
> > @@ -2136,6 +2267,82 @@ static void update_offsets(struct intel_bb *ibb,
> >   }
> >   #define LINELEN 76
> > +
> > +static int
> > +__xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
> > +{
> > +	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
> > +	uint32_t engine_id;
> > +	struct drm_xe_sync syncs[2] = {
> > +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > +		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > +	};
> > +	struct drm_xe_vm_bind_op *bind_ops;
> > +	void *map;
> > +
> > +	igt_assert_eq(ibb->num_relocs, 0);
> > +	igt_assert_eq(ibb->xe_bound, false);
> > +
> > +	if (ibb->last_engine != engine) {
> > +		struct drm_xe_engine_class_instance inst = { };
> > +
> > +		inst.engine_instance =
> > +			(flags & I915_EXEC_BSD_MASK) >> I915_EXEC_BSD_SHIFT;
> > +
> > +		switch (flags & I915_EXEC_RING_MASK) {
> > +		case I915_EXEC_DEFAULT:
> > +		case I915_EXEC_BLT:
> > +			inst.engine_class = DRM_XE_ENGINE_CLASS_COPY;
> > +			break;
> > +		case I915_EXEC_BSD:
> > +			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_DECODE;
> > +			break;
> > +		case I915_EXEC_RENDER:
> > +			inst.engine_class = DRM_XE_ENGINE_CLASS_RENDER;
> > +			break;
> > +		case I915_EXEC_VEBOX:
> > +			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE;
> > +			break;
> > +		default:
> > +			igt_assert_f(false, "Unknown engine: %x", (uint32_t) flags);
> > +		}
> > +		igt_debug("Run on %s\n", xe_engine_class_string(inst.engine_class));
> > +
> > +		ibb->engine_id = engine_id =
> > +			xe_engine_create(ibb->fd, ibb->vm_id, &inst, 0);
> > +	} else {
> > +		engine_id = ibb->engine_id;
> > +	}
> > +	ibb->last_engine = engine;
> > +
> > +	map = xe_bo_map(ibb->fd, ibb->handle, ibb->size);
> > +	memcpy(map, ibb->batch, ibb->size);
> > +	gem_munmap(map, ibb->size);
> > +
> > +	syncs[0].handle = syncobj_create(ibb->fd, 0);
> > +	if (ibb->num_objects > 1) {
> > +		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
> > +		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> > +				 ibb->num_objects, syncs, 1);
> > +		free(bind_ops);
> > +	} else {
> > +		xe_vm_bind_async(ibb->fd, ibb->vm_id, 0, ibb->handle, 0,
> > +				 ibb->batch_offset, ibb->size, syncs, 1);
> > +	}
> > +	ibb->xe_bound = true;
> > +
> > +	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> > +	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
> > +	syncs[1].handle = ibb->engine_syncobj;
> > +
> > +	xe_exec_sync(ibb->fd, engine_id, ibb->batch_offset, syncs, 2);
> > +
> > +	if (sync)
> > +		intel_bb_sync(ibb);
> > +
> > +	return 0;
> > +}
> > +
> >   /*
> >    * __intel_bb_exec:
> >    * @ibb: pointer to intel_bb
> > @@ -2221,7 +2428,7 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
> >   /**
> >    * intel_bb_exec:
> >    * @ibb: pointer to intel_bb
> > - * @end_offset: offset of the last instruction in the bb
> > + * @end_offset: offset of the last instruction in the bb (for i915)
> >    * @flags: flags passed directly to execbuf
> >    * @sync: if true wait for execbuf completion, otherwise caller is responsible
> >    * to wait for completion
> > @@ -2231,7 +2438,13 @@ int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
> >   void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
> >   		   uint64_t flags, bool sync)
> >   {
> > -	igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
> > +	if (ibb->dump_base64)
> > +		intel_bb_dump_base64(ibb, LINELEN);
> > +
> > +	if (ibb->driver == INTEL_DRIVER_I915)
> > +		igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
> > +	else
> > +		igt_assert_eq(__xe_bb_exec(ibb, flags, sync), 0);
> >   }
> >   /**
> > @@ -2636,7 +2849,8 @@ static void __intel_bb_reinit_alloc(struct intel_bb *ibb)
> >   							  ibb->allocator_start, ibb->allocator_end,
> >   							  ibb->allocator_type,
> >   							  ibb->allocator_strategy,
> > -							  0);
> > +							  ibb->alignment);
> > +
> >   	intel_bb_reset(ibb, true);
> >   }
> > diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
> > index 4978b6fb29..9a58fb7809 100644
> > --- a/lib/intel_batchbuffer.h
> > +++ b/lib/intel_batchbuffer.h
> > @@ -246,6 +246,7 @@ struct intel_bb {
> >   	uint8_t allocator_type;
> >   	enum allocator_strategy allocator_strategy;
> > +	enum intel_driver driver;
> >   	int fd;
> >   	unsigned int gen;
> >   	bool debug;
> > @@ -268,6 +269,11 @@ struct intel_bb {
> >   	uint32_t ctx;
> >   	uint32_t vm_id;
> > +	bool xe_bound;
> > +	uint32_t engine_syncobj;
> > +	uint32_t engine_id;
> > +	uint32_t last_engine;
> > +
> >   	/* Context configuration */
> >   	intel_ctx_cfg_t *cfg;


* [igt-dev] ✗ Fi.CI.IGT: failure for Integrate intel-bb with Xe (rev11)
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (17 preceding siblings ...)
  2023-04-28  7:48 ` [igt-dev] ✓ Fi.CI.BAT: success for Integrate intel-bb with Xe (rev11) Patchwork
@ 2023-04-28 10:05 ` Patchwork
  2023-04-28 10:21   ` Zbigniew Kempczyński
  2023-04-28 12:52 ` [igt-dev] ✓ Fi.CI.IGT: success " Patchwork
  19 siblings, 1 reply; 34+ messages in thread
From: Patchwork @ 2023-04-28 10:05 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


== Series Details ==

Series: Integrate intel-bb with Xe (rev11)
URL   : https://patchwork.freedesktop.org/series/116578/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_13071_full -> IGTPW_8881_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_8881_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_8881_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html

Participating hosts (7 -> 7)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_8881_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_flip@flip-vs-suspend-interruptible@b-dp1:
    - shard-apl:          [PASS][1] -> [ABORT][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl1/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@api_intel_bb@blit-reloc-purge-cache:
    - {shard-dg1}:        [SKIP][3] ([i915#3281]) -> [SKIP][4] +3 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-16/igt@api_intel_bb@blit-reloc-purge-cache.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-15/igt@api_intel_bb@blit-reloc-purge-cache.html

  * igt@api_intel_bb@object-reloc-keep-cache:
    - {shard-rkl}:        [SKIP][5] ([i915#3281]) -> [SKIP][6] +6 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-7/igt@api_intel_bb@object-reloc-keep-cache.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-2/igt@api_intel_bb@object-reloc-keep-cache.html

  * igt@gem_ctx_create@basic-files:
    - {shard-tglu}:       [PASS][7] -> [ABORT][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-tglu-4/igt@gem_ctx_create@basic-files.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-tglu-8/igt@gem_ctx_create@basic-files.html

  
Known issues
------------

  Here are the changes found in IGTPW_8881_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [PASS][9] -> [FAIL][10] ([i915#2846])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk9/igt@gem_exec_fair@basic-deadline.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk6/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-apl:          [PASS][11] -> [FAIL][12] ([i915#2842]) +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl4/igt@gem_exec_fair@basic-none-solo@rcs0.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl1/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][13] ([fdo#109271] / [i915#2190])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl4/igt@gem_huc_copy@huc-copy.html

  * igt@gem_mmap_gtt@fault-concurrent-x:
    - shard-snb:          [PASS][14] -> [ABORT][15] ([i915#5161])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-snb7/igt@gem_mmap_gtt@fault-concurrent-x.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb6/igt@gem_mmap_gtt@fault-concurrent-x.html

  * igt@gem_ppgtt@blt-vs-render-ctxn:
    - shard-snb:          [PASS][16] -> [FAIL][17] ([i915#8295])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-snb5/igt@gem_ppgtt@blt-vs-render-ctxn.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb6/igt@gem_ppgtt@blt-vs-render-ctxn.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-apl:          [PASS][18] -> [SKIP][19] ([fdo#109271])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl2/igt@i915_pm_dc@dc9-dpms.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl2/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_rps@reset:
    - shard-snb:          [PASS][20] -> [FAIL][21] ([i915#8400])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-snb6/igt@i915_pm_rps@reset.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb2/igt@i915_pm_rps@reset.html

  * igt@i915_selftest@live@gt_heartbeat:
    - shard-apl:          [PASS][22] -> [FAIL][23] ([i915#7916])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl2/igt@i915_selftest@live@gt_heartbeat.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl1/igt@i915_selftest@live@gt_heartbeat.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - shard-apl:          NOTRUN -> [SKIP][24] ([fdo#109271]) +25 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl7/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * igt@kms_cdclk@mode-transition:
    - shard-glk:          NOTRUN -> [SKIP][25] ([fdo#109271]) +6 similar issues
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk9/igt@kms_cdclk@mode-transition.html

  * igt@kms_content_protection@atomic@pipe-a-dp-1:
    - shard-apl:          NOTRUN -> [TIMEOUT][26] ([i915#7173])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl6/igt@kms_content_protection@atomic@pipe-a-dp-1.html

  * igt@kms_plane_scaling@plane-upscale-with-modifiers-factor-0-25@pipe-a-vga-1:
    - shard-snb:          NOTRUN -> [SKIP][27] ([fdo#109271]) +25 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb5/igt@kms_plane_scaling@plane-upscale-with-modifiers-factor-0-25@pipe-a-vga-1.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
    - shard-glk:          NOTRUN -> [SKIP][28] ([fdo#109271] / [i915#658])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk6/igt@kms_psr2_sf@overlay-plane-move-continuous-sf.html

  * igt@kms_setmode@basic@pipe-a-hdmi-a-1:
    - shard-snb:          NOTRUN -> [FAIL][29] ([i915#5465]) +1 similar issue
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb1/igt@kms_setmode@basic@pipe-a-hdmi-a-1.html

  * igt@perf@oa-exponents@0-rcs0:
    - shard-glk:          [PASS][30] -> [ABORT][31] ([i915#7941])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk2/igt@perf@oa-exponents@0-rcs0.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk2/igt@perf@oa-exponents@0-rcs0.html

  
#### Possible fixes ####

  * igt@gem_barrier_race@remote-request@rcs0:
    - {shard-dg1}:        [ABORT][32] ([i915#7461] / [i915#8234]) -> [PASS][33]
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-16/igt@gem_barrier_race@remote-request@rcs0.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-18/igt@gem_barrier_race@remote-request@rcs0.html

  * igt@gem_ctx_exec@basic-nohangcheck:
    - {shard-tglu}:       [FAIL][34] ([i915#6268]) -> [PASS][35]
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-tglu-5/igt@gem_ctx_exec@basic-nohangcheck.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-tglu-3/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_eio@hibernate:
    - shard-apl:          [ABORT][36] ([i915#8213]) -> [PASS][37]
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl3/igt@gem_eio@hibernate.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl4/igt@gem_eio@hibernate.html

  * igt@gem_eio@unwedge-stress:
    - {shard-dg1}:        [FAIL][38] ([i915#5784]) -> [PASS][39]
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-14/igt@gem_eio@unwedge-stress.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-12/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - {shard-rkl}:        [FAIL][40] ([i915#2846]) -> [PASS][41]
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-7/igt@gem_exec_fair@basic-deadline.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-2/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - {shard-rkl}:        [FAIL][42] ([i915#2842]) -> [PASS][43]
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-7/igt@gem_exec_fair@basic-pace@rcs0.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-4/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_lmem_swapping@smem-oom@lmem0:
    - {shard-dg1}:        [TIMEOUT][44] ([i915#5493]) -> [PASS][45]
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-18/igt@gem_lmem_swapping@smem-oom@lmem0.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-14/igt@gem_lmem_swapping@smem-oom@lmem0.html

  * igt@i915_pm_rpm@dpms-mode-unset-lpsp:
    - {shard-rkl}:        [SKIP][46] ([i915#1397]) -> [PASS][47] +1 similar issue
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-6/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-7/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-glk:          [FAIL][48] ([i915#2346]) -> [PASS][49]
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk5/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk8/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_plane@pixel-format@pipe-a-planes:
    - shard-glk:          [FAIL][50] ([i915#1623]) -> [PASS][51]
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk9/igt@kms_plane@pixel-format@pipe-a-planes.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk2/igt@kms_plane@pixel-format@pipe-a-planes.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - {shard-tglu}:       [ABORT][52] ([i915#5122]) -> [PASS][53]
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-tglu-5/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-tglu-7/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109300]: https://bugs.freedesktop.org/show_bug.cgi?id=109300
  [fdo#109309]: https://bugs.freedesktop.org/show_bug.cgi?id=109309
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
  [i915#1623]: https://gitlab.freedesktop.org/drm/intel/issues/1623
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/intel/issues/1839
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2681]: https://gitlab.freedesktop.org/drm/intel/issues/2681
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/intel/issues/2846
  [i915#3023]: https://gitlab.freedesktop.org/drm/intel/issues/3023
  [i915#3116]: https://gitlab.freedesktop.org/drm/intel/issues/3116
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
  [i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3591]: https://gitlab.freedesktop.org/drm/intel/issues/3591
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3734]: https://gitlab.freedesktop.org/drm/intel/issues/3734
  [i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
  [i915#3804]: https://gitlab.freedesktop.org/drm/intel/issues/3804
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
  [i915#4579]: https://gitlab.freedesktop.org/drm/intel/issues/4579
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4833]: https://gitlab.freedesktop.org/drm/intel/issues/4833
  [i915#4860]: https://gitlab.freedesktop.org/drm/intel/issues/4860
  [i915#5122]: https://gitlab.freedesktop.org/drm/intel/issues/5122
  [i915#5161]: https://gitlab.freedesktop.org/drm/intel/issues/5161
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5465]: https://gitlab.freedesktop.org/drm/intel/issues/5465
  [i915#5493]: https://gitlab.freedesktop.org/drm/intel/issues/5493
  [i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6268]: https://gitlab.freedesktop.org/drm/intel/issues/6268
  [i915#6433]: https://gitlab.freedesktop.org/drm/intel/issues/6433
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#6768]: https://gitlab.freedesktop.org/drm/intel/issues/6768
  [i915#6944]: https://gitlab.freedesktop.org/drm/intel/issues/6944
  [i915#7116]: https://gitlab.freedesktop.org/drm/intel/issues/7116
  [i915#7118]: https://gitlab.freedesktop.org/drm/intel/issues/7118
  [i915#7173]: https://gitlab.freedesktop.org/drm/intel/issues/7173
  [i915#7443]: https://gitlab.freedesktop.org/drm/intel/issues/7443
  [i915#7456]: https://gitlab.freedesktop.org/drm/intel/issues/7456
  [i915#7461]: https://gitlab.freedesktop.org/drm/intel/issues/7461
  [i915#7711]: https://gitlab.freedesktop.org/drm/intel/issues/7711
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7916]: https://gitlab.freedesktop.org/drm/intel/issues/7916
  [i915#7941]: https://gitlab.freedesktop.org/drm/intel/issues/7941
  [i915#8011]: https://gitlab.freedesktop.org/drm/intel/issues/8011
  [i915#8102]: https://gitlab.freedesktop.org/drm/intel/issues/8102
  [i915#8213]: https://gitlab.freedesktop.org/drm/intel/issues/8213
  [i915#8234]: https://gitlab.freedesktop.org/drm/intel/issues/8234
  [i915#8292]: https://gitlab.freedesktop.org/drm/intel/issues/8292
  [i915#8295]: https://gitlab.freedesktop.org/drm/intel/issues/8295
  [i915#8381]: https://gitlab.freedesktop.org/drm/intel/issues/8381
  [i915#8399]: https://gitlab.freedesktop.org/drm/intel/issues/8399
  [i915#8400]: https://gitlab.freedesktop.org/drm/intel/issues/8400


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7273 -> IGTPW_8881
  * Piglit: piglit_4509 -> None

  CI-20190529: 20190529
  CI_DRM_13071: b9458e7075652669ec0e04abe039a5ed001701fe @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_8881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
  IGT_7273: f40ef4b058466219968b7792d22ff0648b82396b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html

[-- Attachment #2: Type: text/html, Size: 15178 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [igt-dev] ✗ Fi.CI.IGT: failure for Integrate intel-bb with Xe (rev11)
  2023-04-28 10:05 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
@ 2023-04-28 10:21   ` Zbigniew Kempczyński
  2023-04-28 12:56     ` Yedireswarapu, SaiX Nandan
  0 siblings, 1 reply; 34+ messages in thread
From: Zbigniew Kempczyński @ 2023-04-28 10:21 UTC (permalink / raw)
  To: igt-dev; +Cc: SaiX Nandan Yedireswarapu

On Fri, Apr 28, 2023 at 10:05:28AM +0000, Patchwork wrote:
>    Patch Details
> 
>    Series:  Integrate intel-bb with Xe (rev11)                             
>    URL:     https://patchwork.freedesktop.org/series/116578/               
>    State:   failure                                                        
>    Details: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html 
> 
>          CI Bug Log - changes from CI_DRM_13071_full -> IGTPW_8881_full
> 
> Summary
> 
>    FAILURE
> 
>    Serious unknown changes coming with IGTPW_8881_full absolutely need to be
>    verified manually.
> 
>    If you think the reported changes have nothing to do with the changes
>    introduced in IGTPW_8881_full, please notify your bug team to allow them
>    to document this new failure mode, which will reduce false positives in
>    CI.
> 
>    External URL:
>    https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
> 
> Participating hosts (7 -> 7)
> 
>    No changes in participating hosts
> 
> Possible new issues
> 
>    Here are the unknown changes that may have been introduced in
>    IGTPW_8881_full:
> 
>   IGT changes
> 
>     Possible regressions
> 
>      * igt@kms_flip@flip-vs-suspend-interruptible@b-dp1:
>           * shard-apl: PASS -> ABORT

Failure is not related to the change.

--
Zbigniew


> 
>     Suppressed
> 
>    The following results come from untrusted machines, tests, or statuses.
>    They do not affect the overall result.
> 
>      * igt@api_intel_bb@blit-reloc-purge-cache:
> 
>           * {shard-dg1}: SKIP (i915#3281) -> SKIP +3 similar issues
>      * igt@api_intel_bb@object-reloc-keep-cache:
> 
>           * {shard-rkl}: SKIP (i915#3281) -> SKIP +6 similar issues
>      * igt@gem_ctx_create@basic-files:
> 
>           * {shard-tglu}: PASS -> ABORT
> 
> Known issues
> 
>    Here are the changes found in IGTPW_8881_full that come from known issues:
> 
>   IGT changes
> 
>     Issues hit
> 
>      * igt@gem_exec_fair@basic-deadline:
> 
>           * shard-glk: PASS -> FAIL (i915#2846)
>      * igt@gem_exec_fair@basic-none-solo@rcs0:
> 
>           * shard-apl: PASS -> FAIL (i915#2842) +1 similar issue
>      * igt@gem_huc_copy@huc-copy:
> 
>           * shard-apl: NOTRUN -> SKIP (fdo#109271 / i915#2190)
>      * igt@gem_mmap_gtt@fault-concurrent-x:
> 
>           * shard-snb: PASS -> ABORT (i915#5161)
>      * igt@gem_ppgtt@blt-vs-render-ctxn:
> 
>           * shard-snb: PASS -> FAIL (i915#8295)
>      * igt@i915_pm_dc@dc9-dpms:
> 
>           * shard-apl: PASS -> SKIP (fdo#109271)
>      * igt@i915_pm_rps@reset:
> 
>           * shard-snb: PASS -> FAIL (i915#8400)
>      * igt@i915_selftest@live@gt_heartbeat:
> 
>           * shard-apl: PASS -> FAIL (i915#7916)
>      * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
> 
>           * shard-apl: NOTRUN -> SKIP (fdo#109271) +25 similar issues
>      * igt@kms_cdclk@mode-transition:
> 
>           * shard-glk: NOTRUN -> SKIP (fdo#109271) +6 similar issues
>      * igt@kms_content_protection@atomic@pipe-a-dp-1:
> 
>           * shard-apl: NOTRUN -> TIMEOUT (i915#7173)
>      * igt@kms_plane_scaling@plane-upscale-with-modifiers-factor-0-25@pipe-a-vga-1:
> 
>           * shard-snb: NOTRUN -> SKIP (fdo#109271) +25 similar issues
>      * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
> 
>           * shard-glk: NOTRUN -> SKIP (fdo#109271 / i915#658)
>      * igt@kms_setmode@basic@pipe-a-hdmi-a-1:
> 
>           * shard-snb: NOTRUN -> FAIL (i915#5465) +1 similar issue
>      * igt@perf@oa-exponents@0-rcs0:
> 
>           * shard-glk: PASS -> ABORT (i915#7941)
> 
>     Possible fixes
> 
>      * igt@gem_barrier_race@remote-request@rcs0:
> 
>           * {shard-dg1}: ABORT (i915#7461 / i915#8234) -> PASS
>      * igt@gem_ctx_exec@basic-nohangcheck:
> 
>           * {shard-tglu}: FAIL (i915#6268) -> PASS
>      * igt@gem_eio@hibernate:
> 
>           * shard-apl: ABORT (i915#8213) -> PASS
>      * igt@gem_eio@unwedge-stress:
> 
>           * {shard-dg1}: FAIL (i915#5784) -> PASS
>      * igt@gem_exec_fair@basic-deadline:
> 
>           * {shard-rkl}: FAIL (i915#2846) -> PASS
>      * igt@gem_exec_fair@basic-pace@rcs0:
> 
>           * {shard-rkl}: FAIL (i915#2842) -> PASS
>      * igt@gem_lmem_swapping@smem-oom@lmem0:
> 
>           * {shard-dg1}: TIMEOUT (i915#5493) -> PASS
>      * igt@i915_pm_rpm@dpms-mode-unset-lpsp:
> 
>           * {shard-rkl}: SKIP (i915#1397) -> PASS +1 similar issue
>      * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
> 
>           * shard-glk: FAIL (i915#2346) -> PASS
>      * igt@kms_plane@pixel-format@pipe-a-planes:
> 
>           * shard-glk: FAIL (i915#1623) -> PASS
>      * igt@kms_vblank@pipe-a-ts-continuation-suspend:
> 
>           * {shard-tglu}: ABORT (i915#5122) -> PASS
> 
>    {name}: This element is suppressed. This means it is ignored when
>    computing
>    the status of the difference (SUCCESS, WARNING, or FAILURE).
> 
> Build changes
> 
>      * CI: CI-20190529 -> None
>      * IGT: IGT_7273 -> IGTPW_8881
>      * Piglit: piglit_4509 -> None
> 
>    CI-20190529: 20190529
>    CI_DRM_13071: b9458e7075652669ec0e04abe039a5ed001701fe @
>    git://anongit.freedesktop.org/gfx-ci/linux
>    IGTPW_8881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
>    IGT_7273: f40ef4b058466219968b7792d22ff0648b82396b @
>    https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
>    piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @
>    git://anongit.freedesktop.org/piglit

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [igt-dev] ✓ Fi.CI.IGT: success for Integrate intel-bb with Xe (rev11)
  2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
                   ` (18 preceding siblings ...)
  2023-04-28 10:05 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
@ 2023-04-28 12:52 ` Patchwork
  19 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2023-04-28 12:52 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 17410 bytes --]

== Series Details ==

Series: Integrate intel-bb with Xe (rev11)
URL   : https://patchwork.freedesktop.org/series/116578/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_13071_full -> IGTPW_8881_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html

Participating hosts (7 -> 7)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_8881_full:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@api_intel_bb@blit-reloc-purge-cache:
    - {shard-dg1}:        [SKIP][1] ([i915#3281]) -> [SKIP][2] +3 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-16/igt@api_intel_bb@blit-reloc-purge-cache.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-15/igt@api_intel_bb@blit-reloc-purge-cache.html

  * igt@api_intel_bb@object-reloc-keep-cache:
    - {shard-rkl}:        [SKIP][3] ([i915#3281]) -> [SKIP][4] +6 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-7/igt@api_intel_bb@object-reloc-keep-cache.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-2/igt@api_intel_bb@object-reloc-keep-cache.html

  * igt@gem_ctx_create@basic-files:
    - {shard-tglu}:       [PASS][5] -> [ABORT][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-tglu-4/igt@gem_ctx_create@basic-files.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-tglu-8/igt@gem_ctx_create@basic-files.html

  
Known issues
------------

  Here are the changes found in IGTPW_8881_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [PASS][7] -> [FAIL][8] ([i915#2846])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk9/igt@gem_exec_fair@basic-deadline.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk6/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-apl:          [PASS][9] -> [FAIL][10] ([i915#2842]) +1 similar issue
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl4/igt@gem_exec_fair@basic-none-solo@rcs0.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl1/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][11] ([fdo#109271] / [i915#2190])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl4/igt@gem_huc_copy@huc-copy.html

  * igt@gem_mmap_gtt@fault-concurrent-x:
    - shard-snb:          [PASS][12] -> [ABORT][13] ([i915#5161])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-snb7/igt@gem_mmap_gtt@fault-concurrent-x.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb6/igt@gem_mmap_gtt@fault-concurrent-x.html

  * igt@gem_ppgtt@blt-vs-render-ctxn:
    - shard-snb:          [PASS][14] -> [FAIL][15] ([i915#8295])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-snb5/igt@gem_ppgtt@blt-vs-render-ctxn.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb6/igt@gem_ppgtt@blt-vs-render-ctxn.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-apl:          [PASS][16] -> [SKIP][17] ([fdo#109271])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl2/igt@i915_pm_dc@dc9-dpms.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl2/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_rps@reset:
    - shard-snb:          [PASS][18] -> [FAIL][19] ([i915#8400])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-snb6/igt@i915_pm_rps@reset.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb2/igt@i915_pm_rps@reset.html

  * igt@i915_selftest@live@gt_heartbeat:
    - shard-apl:          [PASS][20] -> [FAIL][21] ([i915#7916])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl2/igt@i915_selftest@live@gt_heartbeat.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl1/igt@i915_selftest@live@gt_heartbeat.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - shard-apl:          NOTRUN -> [SKIP][22] ([fdo#109271]) +25 similar issues
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl7/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * igt@kms_cdclk@mode-transition:
    - shard-glk:          NOTRUN -> [SKIP][23] ([fdo#109271]) +6 similar issues
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk9/igt@kms_cdclk@mode-transition.html

  * igt@kms_content_protection@atomic@pipe-a-dp-1:
    - shard-apl:          NOTRUN -> [TIMEOUT][24] ([i915#7173])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl6/igt@kms_content_protection@atomic@pipe-a-dp-1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@b-dp1:
    - shard-apl:          [PASS][25] -> [ABORT][26] ([i915#8408])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl1/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html

  * igt@kms_plane_scaling@plane-upscale-with-modifiers-factor-0-25@pipe-a-vga-1:
    - shard-snb:          NOTRUN -> [SKIP][27] ([fdo#109271]) +25 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb5/igt@kms_plane_scaling@plane-upscale-with-modifiers-factor-0-25@pipe-a-vga-1.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
    - shard-glk:          NOTRUN -> [SKIP][28] ([fdo#109271] / [i915#658])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk6/igt@kms_psr2_sf@overlay-plane-move-continuous-sf.html

  * igt@kms_setmode@basic@pipe-a-hdmi-a-1:
    - shard-snb:          NOTRUN -> [FAIL][29] ([i915#5465]) +1 similar issue
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-snb1/igt@kms_setmode@basic@pipe-a-hdmi-a-1.html

  * igt@perf@oa-exponents@0-rcs0:
    - shard-glk:          [PASS][30] -> [ABORT][31] ([i915#7941])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk2/igt@perf@oa-exponents@0-rcs0.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk2/igt@perf@oa-exponents@0-rcs0.html

  
#### Possible fixes ####

  * igt@gem_barrier_race@remote-request@rcs0:
    - {shard-dg1}:        [ABORT][32] ([i915#7461] / [i915#8234]) -> [PASS][33]
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-16/igt@gem_barrier_race@remote-request@rcs0.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-18/igt@gem_barrier_race@remote-request@rcs0.html

  * igt@gem_ctx_exec@basic-nohangcheck:
    - {shard-tglu}:       [FAIL][34] ([i915#6268]) -> [PASS][35]
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-tglu-5/igt@gem_ctx_exec@basic-nohangcheck.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-tglu-3/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_eio@hibernate:
    - shard-apl:          [ABORT][36] ([i915#8213]) -> [PASS][37]
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-apl3/igt@gem_eio@hibernate.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-apl4/igt@gem_eio@hibernate.html

  * igt@gem_eio@unwedge-stress:
    - {shard-dg1}:        [FAIL][38] ([i915#5784]) -> [PASS][39]
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-14/igt@gem_eio@unwedge-stress.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-12/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - {shard-rkl}:        [FAIL][40] ([i915#2846]) -> [PASS][41]
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-7/igt@gem_exec_fair@basic-deadline.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-2/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - {shard-rkl}:        [FAIL][42] ([i915#2842]) -> [PASS][43]
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-7/igt@gem_exec_fair@basic-pace@rcs0.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-4/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_lmem_swapping@smem-oom@lmem0:
    - {shard-dg1}:        [TIMEOUT][44] ([i915#5493]) -> [PASS][45]
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-dg1-18/igt@gem_lmem_swapping@smem-oom@lmem0.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-dg1-14/igt@gem_lmem_swapping@smem-oom@lmem0.html

  * igt@i915_pm_rpm@dpms-mode-unset-lpsp:
    - {shard-rkl}:        [SKIP][46] ([i915#1397]) -> [PASS][47] +1 similar issue
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-rkl-6/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-rkl-7/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-glk:          [FAIL][48] ([i915#2346]) -> [PASS][49]
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk5/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk8/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_plane@pixel-format@pipe-a-planes:
    - shard-glk:          [FAIL][50] ([i915#1623]) -> [PASS][51]
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-glk9/igt@kms_plane@pixel-format@pipe-a-planes.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-glk2/igt@kms_plane@pixel-format@pipe-a-planes.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - {shard-tglu}:       [ABORT][52] ([i915#5122]) -> [PASS][53]
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13071/shard-tglu-5/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/shard-tglu-7/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109300]: https://bugs.freedesktop.org/show_bug.cgi?id=109300
  [fdo#109309]: https://bugs.freedesktop.org/show_bug.cgi?id=109309
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
  [i915#1623]: https://gitlab.freedesktop.org/drm/intel/issues/1623
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/intel/issues/1839
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2681]: https://gitlab.freedesktop.org/drm/intel/issues/2681
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/intel/issues/2846
  [i915#3023]: https://gitlab.freedesktop.org/drm/intel/issues/3023
  [i915#3116]: https://gitlab.freedesktop.org/drm/intel/issues/3116
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
  [i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3591]: https://gitlab.freedesktop.org/drm/intel/issues/3591
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3734]: https://gitlab.freedesktop.org/drm/intel/issues/3734
  [i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
  [i915#3804]: https://gitlab.freedesktop.org/drm/intel/issues/3804
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
  [i915#4579]: https://gitlab.freedesktop.org/drm/intel/issues/4579
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4833]: https://gitlab.freedesktop.org/drm/intel/issues/4833
  [i915#4860]: https://gitlab.freedesktop.org/drm/intel/issues/4860
  [i915#5122]: https://gitlab.freedesktop.org/drm/intel/issues/5122
  [i915#5161]: https://gitlab.freedesktop.org/drm/intel/issues/5161
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5465]: https://gitlab.freedesktop.org/drm/intel/issues/5465
  [i915#5493]: https://gitlab.freedesktop.org/drm/intel/issues/5493
  [i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6268]: https://gitlab.freedesktop.org/drm/intel/issues/6268
  [i915#6433]: https://gitlab.freedesktop.org/drm/intel/issues/6433
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#6768]: https://gitlab.freedesktop.org/drm/intel/issues/6768
  [i915#6944]: https://gitlab.freedesktop.org/drm/intel/issues/6944
  [i915#7116]: https://gitlab.freedesktop.org/drm/intel/issues/7116
  [i915#7118]: https://gitlab.freedesktop.org/drm/intel/issues/7118
  [i915#7173]: https://gitlab.freedesktop.org/drm/intel/issues/7173
  [i915#7443]: https://gitlab.freedesktop.org/drm/intel/issues/7443
  [i915#7456]: https://gitlab.freedesktop.org/drm/intel/issues/7456
  [i915#7461]: https://gitlab.freedesktop.org/drm/intel/issues/7461
  [i915#7711]: https://gitlab.freedesktop.org/drm/intel/issues/7711
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7916]: https://gitlab.freedesktop.org/drm/intel/issues/7916
  [i915#7941]: https://gitlab.freedesktop.org/drm/intel/issues/7941
  [i915#8011]: https://gitlab.freedesktop.org/drm/intel/issues/8011
  [i915#8102]: https://gitlab.freedesktop.org/drm/intel/issues/8102
  [i915#8213]: https://gitlab.freedesktop.org/drm/intel/issues/8213
  [i915#8234]: https://gitlab.freedesktop.org/drm/intel/issues/8234
  [i915#8292]: https://gitlab.freedesktop.org/drm/intel/issues/8292
  [i915#8295]: https://gitlab.freedesktop.org/drm/intel/issues/8295
  [i915#8381]: https://gitlab.freedesktop.org/drm/intel/issues/8381
  [i915#8399]: https://gitlab.freedesktop.org/drm/intel/issues/8399
  [i915#8400]: https://gitlab.freedesktop.org/drm/intel/issues/8400
  [i915#8408]: https://gitlab.freedesktop.org/drm/intel/issues/8408


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7273 -> IGTPW_8881
  * Piglit: piglit_4509 -> None

  CI-20190529: 20190529
  CI_DRM_13071: b9458e7075652669ec0e04abe039a5ed001701fe @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_8881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
  IGT_7273: f40ef4b058466219968b7792d22ff0648b82396b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html

[-- Attachment #2: Type: text/html, Size: 14904 bytes --]


* Re: [igt-dev] ✗ Fi.CI.IGT: failure for Integrate intel-bb with Xe (rev11)
  2023-04-28 10:21   ` Zbigniew Kempczyński
@ 2023-04-28 12:56     ` Yedireswarapu, SaiX Nandan
  0 siblings, 0 replies; 34+ messages in thread
From: Yedireswarapu, SaiX Nandan @ 2023-04-28 12:56 UTC (permalink / raw)
  To: Kempczynski, Zbigniew, igt-dev

Hi,

Issue re-reported, https://patchwork.freedesktop.org/series/116578/

Thanks,
Y Sai Nandan

-----Original Message-----
From: Kempczynski, Zbigniew <zbigniew.kempczynski@intel.com> 
Sent: Friday, April 28, 2023 3:51 PM
To: igt-dev@lists.freedesktop.org
Cc: Yedireswarapu, SaiX Nandan <saix.nandan.yedireswarapu@intel.com>
Subject: Re: ✗ Fi.CI.IGT: failure for Integrate intel-bb with Xe (rev11)

On Fri, Apr 28, 2023 at 10:05:28AM +0000, Patchwork wrote:
>    Patch Details
> 
>    Series:  Integrate intel-bb with Xe (rev11)                             
>    URL:     https://patchwork.freedesktop.org/series/116578/               
>    State:   failure                                                        
>    Details: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html 
> 
>          CI Bug Log - changes from CI_DRM_13071_full -> IGTPW_8881_full
> 
> Summary
> 
>    FAILURE
> 
>    Serious unknown changes coming with IGTPW_8881_full absolutely need to be
>    verified manually.
> 
>    If you think the reported changes have nothing to do with the changes
>    introduced in IGTPW_8881_full, please notify your bug team to allow them
>    to document this new failure mode, which will reduce false positives in
>    CI.
> 
>    External URL:
>    https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
> 
> Participating hosts (7 -> 7)
> 
>    No changes in participating hosts
> 
> Possible new issues
> 
>    Here are the unknown changes that may have been introduced in
>    IGTPW_8881_full:
> 
>   IGT changes
> 
>     Possible regressions
> 
>      * igt@kms_flip@flip-vs-suspend-interruptible@b-dp1:
>           * shard-apl: PASS -> ABORT

Failure is not related to the change.

--
Zbigniew


> 
>     Suppressed
> 
>    The following results come from untrusted machines, tests, or statuses.
>    They do not affect the overall result.
> 
>      * igt@api_intel_bb@blit-reloc-purge-cache:
> 
>           * {shard-dg1}: SKIP (i915#3281) -> SKIP +3 similar issues
>      * igt@api_intel_bb@object-reloc-keep-cache:
> 
>           * {shard-rkl}: SKIP (i915#3281) -> SKIP +6 similar issues
>      * igt@gem_ctx_create@basic-files:
> 
>           * {shard-tglu}: PASS -> ABORT
> 
> Known issues
> 
>    Here are the changes found in IGTPW_8881_full that come from known issues:
> 
>   IGT changes
> 
>     Issues hit
> 
>      * igt@gem_exec_fair@basic-deadline:
> 
>           * shard-glk: PASS -> FAIL (i915#2846)
>      * igt@gem_exec_fair@basic-none-solo@rcs0:
> 
>           * shard-apl: PASS -> FAIL (i915#2842) +1 similar issue
>      * igt@gem_huc_copy@huc-copy:
> 
>           * shard-apl: NOTRUN -> SKIP (fdo#109271 / i915#2190)
>      * igt@gem_mmap_gtt@fault-concurrent-x:
> 
>           * shard-snb: PASS -> ABORT (i915#5161)
>      * igt@gem_ppgtt@blt-vs-render-ctxn:
> 
>           * shard-snb: PASS -> FAIL (i915#8295)
>      * igt@i915_pm_dc@dc9-dpms:
> 
>           * shard-apl: PASS -> SKIP (fdo#109271)
>      * igt@i915_pm_rps@reset:
> 
>           * shard-snb: PASS -> FAIL (i915#8400)
>      * igt@i915_selftest@live@gt_heartbeat:
> 
>           * shard-apl: PASS -> FAIL (i915#7916)
>      * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
> 
>           * shard-apl: NOTRUN -> SKIP (fdo#109271) +25 similar issues
>      * igt@kms_cdclk@mode-transition:
> 
>           * shard-glk: NOTRUN -> SKIP (fdo#109271) +6 similar issues
>      * igt@kms_content_protection@atomic@pipe-a-dp-1:
> 
>           * shard-apl: NOTRUN -> TIMEOUT (i915#7173)
>      * igt@kms_plane_scaling@plane-upscale-with-modifiers-factor-0-25@pipe-a-vga-1:
> 
>           * shard-snb: NOTRUN -> SKIP (fdo#109271) +25 similar issues
>      * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
> 
>           * shard-glk: NOTRUN -> SKIP (fdo#109271 / i915#658)
>      * igt@kms_setmode@basic@pipe-a-hdmi-a-1:
> 
>           * shard-snb: NOTRUN -> FAIL (i915#5465) +1 similar issue
>      * igt@perf@oa-exponents@0-rcs0:
> 
>           * shard-glk: PASS -> ABORT (i915#7941)
> 
>     Possible fixes
> 
>      * igt@gem_barrier_race@remote-request@rcs0:
> 
>           * {shard-dg1}: ABORT (i915#7461 / i915#8234) -> PASS
>      * igt@gem_ctx_exec@basic-nohangcheck:
> 
>           * {shard-tglu}: FAIL (i915#6268) -> PASS
>      * igt@gem_eio@hibernate:
> 
>           * shard-apl: ABORT (i915#8213) -> PASS
>      * igt@gem_eio@unwedge-stress:
> 
>           * {shard-dg1}: FAIL (i915#5784) -> PASS
>      * igt@gem_exec_fair@basic-deadline:
> 
>           * {shard-rkl}: FAIL (i915#2846) -> PASS
>      * igt@gem_exec_fair@basic-pace@rcs0:
> 
>           * {shard-rkl}: FAIL (i915#2842) -> PASS
>      * igt@gem_lmem_swapping@smem-oom@lmem0:
> 
>           * {shard-dg1}: TIMEOUT (i915#5493) -> PASS
>      * igt@i915_pm_rpm@dpms-mode-unset-lpsp:
> 
>           * {shard-rkl}: SKIP (i915#1397) -> PASS +1 similar issue
>      * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
> 
>           * shard-glk: FAIL (i915#2346) -> PASS
>      * igt@kms_plane@pixel-format@pipe-a-planes:
> 
>           * shard-glk: FAIL (i915#1623) -> PASS
>      * igt@kms_vblank@pipe-a-ts-continuation-suspend:
> 
>           * {shard-tglu}: ABORT (i915#5122) -> PASS
> 
>    {name}: This element is suppressed. This means it is ignored when
>    computing
>    the status of the difference (SUCCESS, WARNING, or FAILURE).
> 
> Build changes
> 
>      * CI: CI-20190529 -> None
>      * IGT: IGT_7273 -> IGTPW_8881
>      * Piglit: piglit_4509 -> None
> 
>    CI-20190529: 20190529
>    CI_DRM_13071: b9458e7075652669ec0e04abe039a5ed001701fe @
>    git://anongit.freedesktop.org/gfx-ci/linux
>    IGTPW_8881: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_8881/index.html
>    IGT_7273: f40ef4b058466219968b7792d22ff0648b82396b @
>    https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
>    piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @
>    git://anongit.freedesktop.org/piglit


end of thread, other threads:[~2023-04-28 12:56 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-28  6:22 [igt-dev] [PATCH i-g-t v8 00/17] Integrate intel-bb with Xe Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 01/17] lib/xe_ioctl: Add missing header for direct resolving Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 02/17] lib/xe_query: Add region helpers and missing doc Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 03/17] lib/xe_query: Remove commented out function prototype Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 04/17] lib/intel_allocator: Add allocator support for Xe Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 05/17] lib/drmtest: Add driver enum for i915/xe Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 06/17] lib/intel_bufops: Add Xe support in bufops Zbigniew Kempczyński
2023-04-28  7:49   ` Kamil Konieczny
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 07/17] lib/intel_batchbuffer: Rename i915 -> fd as preparation step for xe Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 08/17] lib/intel_batchbuffer: Reacquire offset for reloc allocator in reset path Zbigniew Kempczyński
2023-04-28  7:50   ` Kamil Konieczny
2023-04-28  8:44   ` Manszewski, Christoph
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 09/17] lib/intel_batchbuffer: Update intel-bb docs Zbigniew Kempczyński
2023-04-28  7:51   ` Kamil Konieczny
2023-04-28  8:51   ` Manszewski, Christoph
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 10/17] lib/intel_batchbuffer: Add Xe support in intel-bb Zbigniew Kempczyński
2023-04-28  7:53   ` Kamil Konieczny
2023-04-28  8:40   ` Manszewski, Christoph
2023-04-28  9:20     ` Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 11/17] tests/xe_intel_bb: Check if intel-bb Xe support correctness Zbigniew Kempczyński
2023-04-28  7:58   ` Kamil Konieczny
2023-04-28  8:18     ` Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 12/17] tests/xe-fast-feedback: Add xe_intel_bb test to BAT Zbigniew Kempczyński
2023-04-28  7:59   ` Kamil Konieczny
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 13/17] lib/gpgpu_fill: Use RENDER engine flag to work on Xe Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 14/17] tests/xe_gpgpu_fill: Exercise gpgpu fill " Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 15/17] lib/igt_fb: For xe assume vram is used on discrete Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 16/17] lib/igt_draw: Pass region while building intel_buf from flink Zbigniew Kempczyński
2023-04-28  6:22 ` [igt-dev] [PATCH i-g-t v8 17/17] tests/kms_big_fb: Deduce region for xe framebuffer Zbigniew Kempczyński
2023-04-28  7:48 ` [igt-dev] ✓ Fi.CI.BAT: success for Integrate intel-bb with Xe (rev11) Patchwork
2023-04-28 10:05 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
2023-04-28 10:21   ` Zbigniew Kempczyński
2023-04-28 12:56     ` Yedireswarapu, SaiX Nandan
2023-04-28 12:52 ` [igt-dev] ✓ Fi.CI.IGT: success " Patchwork
