* [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator
@ 2021-04-02  9:38 Zbigniew Kempczyński
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

This series introduces the intel-allocator in IGT.

...

v24: address review comments:
     - change gem_linear_blits to show the approach we can take when
       rewriting other tests to use / not use relocations
v25: changes:
     - fix a bug introduced in gem_linear_blits during the last review
       refactor
     - remove the api_intel_bb@last-page test to avoid hanging the gpu;
       we can decide to bring this test back later
v26: resend for review (Jason)
v27: addressing review comments:
     - fix calculation of size in simple allocator (Jason)
     - check buffer size is enough to handle WxHxBPP in intel_buf (Jason)
     - skip collecting relocations in intel-bb for non-reloc mode (Zbigniew)
v28: addressing review comments:
     - replace igt_map with the hash table written by Eric Anholt and
       used in Mesa (Dominik)
     - adopt the new igt_map, which requires some rework in
       intel_allocator (Zbigniew)
v29: - remove igt_hlist (current hashmap doesn't use it, Dominik)
     - add non-asserting __intel_allocator_alloc() for future use (Zbigniew)
v30: - remove kill_children(), which unfortunately also kills helpers;
       that is not what we want
     - add alloc_with_strategy() to support rewriting spinners

Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>

Dominik Grzegorzek (5):
  lib/igt_list: Add igt_list_for_each_entry_safe_reverse
  lib/igt_map: Adopt Mesa hash table
  lib/intel_allocator_simple: Add simple allocator
  tests/api_intel_allocator: Simple allocator test suite
  tests/gem_linear_blits: Use intel allocator

Zbigniew Kempczyński (34):
  lib/igt_list: Add igt_list_del_init()
  lib/igt_core: Track child process pid and tid
  lib/intel_allocator_reloc: Add reloc allocator
  lib/intel_allocator_random: Add random allocator
  lib/intel_allocator: Add intel_allocator core
  lib/intel_allocator: Try to stop smoothly instead of deinit
  lib/intel_allocator_msgchannel: Scale to 4k of parallel clients
  lib/intel_allocator: Separate allocator multiprocess start
  lib/intel_bufops: Change size from 32->64 bit
  lib/intel_bufops: Add init with handle and size function
  lib/intel_batchbuffer: Integrate intel_bb with allocator
  lib/intel_batchbuffer: Use relocations in intel-bb up to gen12
  lib/intel_batchbuffer: Create bb with strategy / vm ranges
  lib/intel_batchbuffer: Add tracking intel_buf to intel_bb
  lib/intel_batchbuffer: Don't collect relocations for newer gens
  lib/igt_fb: Initialize intel_buf with same size as fb
  tests/api_intel_bb: Remove check-canonical test
  tests/api_intel_bb: Modify test to verify intel_bb with allocator
  tests/api_intel_bb: Add compressed->compressed copy
  tests/api_intel_bb: Add purge-bb test
  tests/api_intel_bb: Add simple intel-bb which uses allocator
  tests/api_intel_bb: Use allocator in delta-check test
  tests/api_intel_bb: Check switching vm in intel-bb
  tests/api_intel_allocator: Add execbuf with allocator example
  tests/api_intel_allocator: Verify child can use its standalone
    allocator
  tests/gem_softpin: Verify allocator and execbuf pair work together
  tests/gem|kms: Remove intel_bb from fixture
  tests/gem_mmap_offset: Use intel_buf wrapper code instead direct
  tests/gem_ppgtt: Adopt test to use intel_bb with allocator
  tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator
  tests/perf.c: Remove buffer from batch
  lib/intel_allocator: drop kill_children()
  lib/intel_allocator: Add alloc function which allows passing strategy
    argument
  tests/api_intel_allocator: Check alloc with strategy API

 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |    2 +
 lib/Makefile.sources                          |    8 +
 lib/igt_core.c                                |   20 +
 lib/igt_fb.c                                  |   10 +-
 lib/igt_list.c                                |    6 +
 lib/igt_list.h                                |    7 +
 lib/igt_map.c                                 |  502 ++++++
 lib/igt_map.h                                 |  174 ++
 lib/intel_allocator.c                         | 1415 +++++++++++++++++
 lib/intel_allocator.h                         |  231 +++
 lib/intel_allocator_msgchannel.c              |  195 +++
 lib/intel_allocator_msgchannel.h              |  157 ++
 lib/intel_allocator_random.c                  |  190 +++
 lib/intel_allocator_reloc.c                   |  192 +++
 lib/intel_allocator_simple.c                  |  807 ++++++++++
 lib/intel_aux_pgtable.c                       |   26 +-
 lib/intel_batchbuffer.c                       |  737 ++++++---
 lib/intel_batchbuffer.h                       |   54 +-
 lib/intel_bufops.c                            |   64 +-
 lib/intel_bufops.h                            |   20 +-
 lib/media_spin.c                              |    2 -
 lib/meson.build                               |    6 +
 tests/i915/api_intel_allocator.c              |  715 +++++++++
 tests/i915/api_intel_bb.c                     |  741 +++++++--
 tests/i915/gem_caching.c                      |   14 +-
 tests/i915/gem_linear_blits.c                 |   90 +-
 tests/i915/gem_mmap_offset.c                  |    4 +-
 tests/i915/gem_partial_pwrite_pread.c         |   40 +-
 tests/i915/gem_ppgtt.c                        |    7 +-
 tests/i915/gem_render_copy.c                  |   31 +-
 tests/i915/gem_render_copy_redux.c            |   24 +-
 tests/i915/gem_softpin.c                      |  194 +++
 tests/i915/perf.c                             |    9 +
 tests/intel-ci/fast-feedback.testlist         |    2 +
 tests/kms_big_fb.c                            |   12 +-
 tests/meson.build                             |    1 +
 36 files changed, 6224 insertions(+), 485 deletions(-)
 create mode 100644 lib/igt_map.c
 create mode 100644 lib/igt_map.h
 create mode 100644 lib/intel_allocator.c
 create mode 100644 lib/intel_allocator.h
 create mode 100644 lib/intel_allocator_msgchannel.c
 create mode 100644 lib/intel_allocator_msgchannel.h
 create mode 100644 lib/intel_allocator_random.c
 create mode 100644 lib/intel_allocator_reloc.c
 create mode 100644 lib/intel_allocator_simple.c
 create mode 100644 tests/i915/api_intel_allocator.c

-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 01/39] lib/igt_list: Add igt_list_del_init()
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Add a helper function to delete and reinitialize a list element.
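
A minimal usage sketch (struct node and detach() are hypothetical,
not part of the patch):

struct node {
	struct igt_list_head link;
};

static void detach(struct node *n)
{
	/* igt_list_del() alone clears the element's pointers;
	 * del_init() re-points the element at itself, so it can be
	 * tested with igt_list_empty() or deleted again safely. */
	igt_list_del_init(&n->link);
	igt_assert(igt_list_empty(&n->link));
}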

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
---
 lib/igt_list.c | 6 ++++++
 lib/igt_list.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/lib/igt_list.c b/lib/igt_list.c
index 5e30b19b6..37ae139c4 100644
--- a/lib/igt_list.c
+++ b/lib/igt_list.c
@@ -46,6 +46,12 @@ void igt_list_del(struct igt_list_head *elem)
 	elem->prev = NULL;
 }
 
+void igt_list_del_init(struct igt_list_head *elem)
+{
+	igt_list_del(elem);
+	IGT_INIT_LIST_HEAD(elem);
+}
+
 void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list)
 {
 	igt_list_del(elem);
diff --git a/lib/igt_list.h b/lib/igt_list.h
index dbf5f802c..cc93d7a0d 100644
--- a/lib/igt_list.h
+++ b/lib/igt_list.h
@@ -75,6 +75,7 @@ struct igt_list_head {
 void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
 void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
 void igt_list_del(struct igt_list_head *elem);
+void igt_list_del_init(struct igt_list_head *elem);
 void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list);
 void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
 int igt_list_length(const struct igt_list_head *head);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 02/39] lib/igt_list: Add igt_list_for_each_entry_safe_reverse
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Add a safe version of reverse iteration over igt_list.
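
A hedged usage sketch (the element type is hypothetical): the safe
variant caches the previous element in @tmp, so the current entry may
be freed while walking the list tail-to-head:

struct node {
	struct igt_list_head link;
	bool stale;
};

static void prune(struct igt_list_head *head)
{
	struct node *n, *tmp;

	igt_list_for_each_entry_safe_reverse(n, tmp, head, link) {
		if (n->stale) {
			igt_list_del(&n->link);	/* safe: tmp holds prev */
			free(n);
		}
	}
}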

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_list.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/igt_list.h b/lib/igt_list.h
index cc93d7a0d..be63fd806 100644
--- a/lib/igt_list.h
+++ b/lib/igt_list.h
@@ -108,6 +108,12 @@ bool igt_list_empty(const struct igt_list_head *head);
 	     &pos->member != (head);					\
 	     pos = igt_container_of((pos)->member.prev, pos, member))
 
+#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)	\
+	for (pos = igt_container_of((head)->prev, pos, member),		\
+	     tmp = igt_container_of((pos)->member.prev, tmp, member);	\
+	     &pos->member != (head);					\
+	     pos = tmp,							\
+	     tmp = igt_container_of((pos)->member.prev, tmp, member))
 
 /* IGT custom helpers */
 
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 03/39] lib/igt_map: Adopt Mesa hash table
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

The _search function has been changed to return a pointer to the
stored data instead of the entry struct. A _search_entry function,
which behaves like the original search, has been added. Additionally,
the _remove function takes an optional delete_function param to make
it more usable.

For more information, see:
http://cgit.freedesktop.org/~anholt/hash_table/tree/README
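
A short sketch of the reworked API (key, data and the callbacks are
hypothetical placeholders):

struct igt_map *map = igt_map_create(hash_fn, equal_fn);
struct igt_map_entry *e;
void *d;

igt_map_insert(map, &key, data);

d = igt_map_search(map, &key);		/* stored data, or NULL */
e = igt_map_search_entry(map, &key);	/* whole entry, as before */

/* the optional callback lets _remove free entry->data in one step */
igt_map_remove(map, &key, free_data);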

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |   1 +
 lib/Makefile.sources                          |   2 +
 lib/igt_map.c                                 | 502 ++++++++++++++++++
 lib/igt_map.h                                 | 174 ++++++
 lib/meson.build                               |   1 +
 5 files changed, 680 insertions(+)
 create mode 100644 lib/igt_map.c
 create mode 100644 lib/igt_map.h

diff --git a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
index 9c9aa8f1d..bf5ac5428 100644
--- a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
+++ b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
@@ -33,6 +33,7 @@
     <xi:include href="xml/igt_kmod.xml"/>
     <xi:include href="xml/igt_kms.xml"/>
     <xi:include href="xml/igt_list.xml"/>
+    <xi:include href="xml/igt_map.xml"/>
     <xi:include href="xml/igt_pm.xml"/>
     <xi:include href="xml/igt_primes.xml"/>
     <xi:include href="xml/igt_rand.xml"/>
diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index 4f6389f8a..84fd7b49c 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -48,6 +48,8 @@ lib_source_list =	 	\
 	igt_infoframe.h		\
 	igt_list.c		\
 	igt_list.h		\
+	igt_map.c		\
+	igt_map.h		\
 	igt_matrix.c		\
 	igt_matrix.h		\
 	igt_params.c		\
diff --git a/lib/igt_map.c b/lib/igt_map.c
new file mode 100644
index 000000000..da8713a18
--- /dev/null
+++ b/lib/igt_map.c
@@ -0,0 +1,502 @@
+/*
+ * Copyright © 2009, 2021 Intel Corporation
+ * Copyright © 1988-2004 Keith Packard and Bart Massey.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Except as contained in this notice, the names of the authors
+ * or their institutions shall not be used in advertising or
+ * otherwise to promote the sale, use or other dealings in this
+ * Software without prior written authorization from the
+ * authors.
+ *
+ * Authors:
+ *    Eric Anholt <eric@anholt.net>
+ *    Keith Packard <keithp@keithp.com>
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+
+#include "igt_map.h"
+
+#define ARRAY_SIZE(array) (sizeof(array) / sizeof(array[0]))
+
+/*
+ * From Knuth -- a good choice for hash/rehash values is p, p-2 where
+ * p and p-2 are both prime.  These tables are sized to have an extra 10%
+ * free to avoid exponential performance degradation as the hash table fills
+ */
+
+static const uint32_t deleted_key_value;
+static const void *deleted_key = &deleted_key_value;
+
+static const struct {
+	uint32_t max_entries, size, rehash;
+} hash_sizes[] = {
+	{ 2,		5,		3	  },
+	{ 4,		7,		5	  },
+	{ 8,		13,		11	  },
+	{ 16,		19,		17	  },
+	{ 32,		43,		41        },
+	{ 64,		73,		71        },
+	{ 128,		151,		149       },
+	{ 256,		283,		281       },
+	{ 512,		571,		569       },
+	{ 1024,		1153,		1151      },
+	{ 2048,		2269,		2267      },
+	{ 4096,		4519,		4517      },
+	{ 8192,		9013,		9011      },
+	{ 16384,	18043,		18041     },
+	{ 32768,	36109,		36107     },
+	{ 65536,	72091,		72089     },
+	{ 131072,	144409,		144407    },
+	{ 262144,	288361,		288359    },
+	{ 524288,	576883,		576881    },
+	{ 1048576,	1153459,	1153457   },
+	{ 2097152,	2307163,	2307161   },
+	{ 4194304,	4613893,	4613891   },
+	{ 8388608,	9227641,	9227639   },
+	{ 16777216,	18455029,	18455027  },
+	{ 33554432,	36911011,	36911009  },
+	{ 67108864,	73819861,	73819859  },
+	{ 134217728,	147639589,	147639587 },
+	{ 268435456,	295279081,	295279079 },
+	{ 536870912,	590559793,	590559791 },
+	{ 1073741824,	1181116273,	1181116271},
+	{ 2147483648ul,	2362232233ul,	2362232231ul}
+};
+
+static int
+entry_is_free(const struct igt_map_entry *entry)
+{
+	return entry->key == NULL;
+}
+
+static int
+entry_is_deleted(const struct igt_map_entry *entry)
+{
+	return entry->key == deleted_key;
+}
+
+static int
+entry_is_present(const struct igt_map_entry *entry)
+{
+	return entry->key != NULL && entry->key != deleted_key;
+}
+
+/**
+ * igt_map_create:
+ * @hash_function: function that maps key to 32b hash
+ * @key_equals_function: function that compares two keys
+ *
+ * Function creates a map and initializes it with given @hash_function and
+ * @key_equals_function.
+ *
+ * Returns: pointer to just created map
+ */
+struct igt_map *
+igt_map_create(uint32_t (*hash_function)(const void *key),
+	       int (*key_equals_function)(const void *a, const void *b))
+{
+	struct igt_map *map;
+
+	map = malloc(sizeof(*map));
+	if (map == NULL)
+		return NULL;
+
+	map->size_index = 0;
+	map->size = hash_sizes[map->size_index].size;
+	map->rehash = hash_sizes[map->size_index].rehash;
+	map->max_entries = hash_sizes[map->size_index].max_entries;
+	map->hash_function = hash_function;
+	map->key_equals_function = key_equals_function;
+	map->table = calloc(map->size, sizeof(*map->table));
+	map->entries = 0;
+	map->deleted_entries = 0;
+
+	if (map->table == NULL) {
+		free(map);
+		return NULL;
+	}
+
+	return map;
+}
+
+/**
+ * igt_map_destroy:
+ * @map: igt_map pointer
+ * @delete_function: function that frees data in igt_map_entry
+ *
+ * Frees the given hash table. If @delete_function is passed, it gets called
+ * on each entry present before freeing.
+ */
+void
+igt_map_destroy(struct igt_map *map,
+		void (*delete_function)(struct igt_map_entry *entry))
+{
+	if (!map)
+		return;
+
+	if (delete_function) {
+		struct igt_map_entry *entry;
+
+		igt_map_foreach(map, entry) {
+			delete_function(entry);
+		}
+	}
+	free(map->table);
+	free(map);
+}
+
+/**
+ * igt_map_search:
+ * @map: igt_map pointer
+ * @key: pointer to searched key
+ *
+ * Finds a map entry's data with the given @key.
+ *
+ * Returns: data pointer if the entry was found, %NULL otherwise.
+ * Note that it may be modified by the user.
+ */
+void *
+igt_map_search(struct igt_map *map, const void *key)
+{
+	uint32_t hash = map->hash_function(key);
+	struct igt_map_entry *entry;
+
+	entry = igt_map_search_pre_hashed(map, hash, key);
+	return entry ? entry->data : NULL;
+}
+
+/**
+ * igt_map_search_entry:
+ * @map: igt_map pointer
+ * @key: pointer to searched key
+ *
+ * Finds a map entry with the given @key.
+ *
+ * Returns: map entry or %NULL if no entry is found.
+ * Note that the data pointer may be modified by the user.
+ */
+struct igt_map_entry *
+igt_map_search_entry(struct igt_map *map, const void *key)
+{
+	uint32_t hash = map->hash_function(key);
+
+	return igt_map_search_pre_hashed(map, hash, key);
+}
+
+/**
+ * igt_map_search_pre_hashed:
+ * @map: igt_map pointer
+ * @hash: hash of @key
+ * @key: pointer to searched key
+ *
+ * Finds a map entry with the given @key and @hash of that key.
+ *
+ * Returns: map entry or %NULL if no entry is found.
+ * Note that the data pointer may be modified by the user.
+ */
+struct igt_map_entry *
+igt_map_search_pre_hashed(struct igt_map *map, uint32_t hash,
+			  const void *key)
+{
+	uint32_t start_hash_address = hash % map->size;
+	uint32_t hash_address = start_hash_address;
+
+	do {
+		uint32_t double_hash;
+
+		struct igt_map_entry *entry = map->table + hash_address;
+
+		if (entry_is_free(entry)) {
+			return NULL;
+		} else if (entry_is_present(entry) && entry->hash == hash) {
+			if (map->key_equals_function(key, entry->key)) {
+				return entry;
+			}
+		}
+
+		double_hash = 1 + hash % map->rehash;
+
+		hash_address = (hash_address + double_hash) % map->size;
+	} while (hash_address != start_hash_address);
+
+	return NULL;
+}
+
+static void
+igt_map_rehash(struct igt_map *map, int new_size_index)
+{
+	struct igt_map old_map;
+	struct igt_map_entry *table, *entry;
+
+	if (new_size_index >= ARRAY_SIZE(hash_sizes))
+		return;
+
+	table = calloc(hash_sizes[new_size_index].size, sizeof(*map->table));
+	if (table == NULL)
+		return;
+
+	old_map = *map;
+
+	map->table = table;
+	map->size_index = new_size_index;
+	map->size = hash_sizes[map->size_index].size;
+	map->rehash = hash_sizes[map->size_index].rehash;
+	map->max_entries = hash_sizes[map->size_index].max_entries;
+	map->entries = 0;
+	map->deleted_entries = 0;
+
+	igt_map_foreach(&old_map, entry) {
+		igt_map_insert_pre_hashed(map, entry->hash,
+					     entry->key, entry->data);
+	}
+
+	free(old_map.table);
+}
+
+/**
+ * igt_map_insert:
+ * @map: igt_map pointer
+ * @data: data to be stored
+ * @key: pointer to searched key
+ *
+ * Inserts the @data indexed by given @key into the map. If the @map already
+ * contains an entry with the @key, it will be replaced. To avoid memory leaks,
+ * perform a search before inserting.
+ *
+ * Note that insertion may rearrange the table on a resize or rehash,
+ * so previously found hash entries are no longer valid after this function.
+ *
+ * Returns: pointer to just inserted entry
+ */
+struct igt_map_entry *
+igt_map_insert(struct igt_map *map, const void *key, void *data)
+{
+	uint32_t hash = map->hash_function(key);
+
+	/* Make sure nobody tries to add one of the magic values as a
+	 * key. If you need to do so, either do so in a wrapper, or
+	 * store keys with the magic values separately in the struct
+	 * igt_map.
+	 */
+	assert(key != NULL);
+
+	return igt_map_insert_pre_hashed(map, hash, key, data);
+}
+
+/**
+ * igt_map_insert_pre_hashed:
+ * @map: igt_map pointer
+ * @hash: hash of @key
+ * @data: data to be stored
+ * @key: pointer to searched key
+ *
+ * Inserts the @data indexed by given @key and @hash of that @key into the map.
+ * If the @map already contains an entry with the @key, it will be replaced.
+ * To avoid memory leaks, perform a search before inserting.
+ *
+ * Note that insertion may rearrange the table on a resize or rehash,
+ * so previously found hash entries are no longer valid after this function.
+ *
+ * Returns: pointer to just inserted entry
+ */
+struct igt_map_entry *
+igt_map_insert_pre_hashed(struct igt_map *map, uint32_t hash,
+			  const void *key, void *data)
+{
+	uint32_t start_hash_address, hash_address;
+	struct igt_map_entry *available_entry = NULL;
+
+	if (map->entries >= map->max_entries) {
+		igt_map_rehash(map, map->size_index + 1);
+	} else if (map->deleted_entries + map->entries >= map->max_entries) {
+		igt_map_rehash(map, map->size_index);
+	}
+
+	start_hash_address = hash % map->size;
+	hash_address = start_hash_address;
+	do {
+		struct igt_map_entry *entry = map->table + hash_address;
+		uint32_t double_hash;
+
+		if (!entry_is_present(entry)) {
+			/* Stash the first available entry we find */
+			if (available_entry == NULL)
+				available_entry = entry;
+			if (entry_is_free(entry))
+				break;
+		}
+
+		/* Implement replacement when another insert happens
+		 * with a matching key.  This is a relatively common
+		 * feature of hash tables, with the alternative
+		 * generally being "insert the new value as well, and
+		 * return it first when the key is searched for".
+		 *
+		 * Note that the hash table doesn't have a delete
+		 * callback.  If freeing of old data pointers is
+		 * required to avoid memory leaks, perform a search
+		 * before inserting.
+		 */
+		if (!entry_is_deleted(entry) &&
+		    entry->hash == hash &&
+		    map->key_equals_function(key, entry->key)) {
+			entry->key = key;
+			entry->data = data;
+			return entry;
+		}
+
+
+		double_hash = 1 + hash % map->rehash;
+
+		hash_address = (hash_address + double_hash) % map->size;
+	} while (hash_address != start_hash_address);
+
+	if (available_entry) {
+		if (entry_is_deleted(available_entry))
+			map->deleted_entries--;
+		available_entry->hash = hash;
+		available_entry->key = key;
+		available_entry->data = data;
+		map->entries++;
+		return available_entry;
+	}
+
+	/* We could get here if a required resize failed. An unchecked-malloc
+	 * application could ignore this result.
+	 */
+	return NULL;
+}
+
+/**
+ * igt_map_remove:
+ * @map: igt_map pointer
+ * @key: pointer to searched key
+ * @delete_function: function that frees data in igt_map_entry
+ *
+ * Function searches for an entry with a given @key, and removes it from
+ * the map. If @delete_function is passed, it will be called on removed entry.
+ *
+ * If the caller has previously found a struct igt_map_entry pointer,
+ * (from calling igt_map_search_entry() or remembering it from igt_map_insert()),
+ * then igt_map_remove_entry() can be called instead to avoid an extra search.
+ */
+void
+igt_map_remove(struct igt_map *map, const void *key,
+		void (*delete_function)(struct igt_map_entry *entry))
+{
+	struct igt_map_entry *entry;
+
+	entry = igt_map_search_entry(map, key);
+	if (delete_function)
+		delete_function(entry);
+
+	igt_map_remove_entry(map, entry);
+}
+
+/**
+ * igt_map_remove_entry:
+ * @map: igt_map pointer
+ * @entry: pointer to map entry
+ *
+ * Function deletes the given hash entry.
+ *
+ * Note that deletion doesn't otherwise modify the table, so an iteration over
+ * the map deleting entries is safe.
+ */
+void
+igt_map_remove_entry(struct igt_map *map, struct igt_map_entry *entry)
+{
+	if (!entry)
+		return;
+
+	entry->key = deleted_key;
+	map->entries--;
+	map->deleted_entries++;
+}
+
+/**
+ * igt_map_next_entry:
+ * @map: igt_map pointer
+ * @entry: pointer to map entry, %NULL for the first map entry
+ *
+ * This function is an iterator over the hash table.
+ * Note that an iteration over the table is O(table_size) not O(entries).
+ *
+ * Returns: pointer to the next entry
+ */
+struct igt_map_entry *
+igt_map_next_entry(struct igt_map *map, struct igt_map_entry *entry)
+{
+	if (entry == NULL)
+		entry = map->table;
+	else
+		entry = entry + 1;
+
+	for (; entry != map->table + map->size; entry++) {
+		if (entry_is_present(entry)) {
+			return entry;
+		}
+	}
+
+	return NULL;
+}
+
+/**
+ * igt_map_random_entry:
+ * @map: igt_map pointer
+ * @predicate: filtering entries function
+ *
+ * Function returns a random entry from the map. This may be useful in
+ * implementing random replacement (as opposed to just removing everything)
+ * in caches based on this hash table implementation. @predicate may be used to
+ * filter entries, or may be set to %NULL for no filtering.
+ *
+ * Returns: pointer to the randomly chosen map entry
+ */
+struct igt_map_entry *
+igt_map_random_entry(struct igt_map *map,
+		     int (*predicate)(struct igt_map_entry *entry))
+{
+	struct igt_map_entry *entry;
+	uint32_t i = random() % map->size;
+
+	if (map->entries == 0)
+		return NULL;
+
+	for (entry = map->table + i; entry != map->table + map->size; entry++) {
+		if (entry_is_present(entry) &&
+		    (!predicate || predicate(entry))) {
+			return entry;
+		}
+	}
+
+	for (entry = map->table; entry != map->table + i; entry++) {
+		if (entry_is_present(entry) &&
+		    (!predicate || predicate(entry))) {
+			return entry;
+		}
+	}
+
+	return NULL;
+}
diff --git a/lib/igt_map.h b/lib/igt_map.h
new file mode 100644
index 000000000..cadcd6e35
--- /dev/null
+++ b/lib/igt_map.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright © 2009,2021 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Eric Anholt <eric@anholt.net>
+ *
+ */
+
+#ifndef IGT_MAP_H
+#define IGT_MAP_H
+
+#include <inttypes.h>
+
+/**
+ * SECTION:igt_map
+ * @short_description: a linear-reprobing hashmap implementation
+ * @title: IGT Map
+ * @include: igt_map.h
+ *
+ * Implements an open-addressing, linear-reprobing hash table.
+ *
+ * For more information, see:
+ * http://cgit.freedesktop.org/~anholt/hash_table/tree/README
+ *
+ * Example usage:
+ *
+ *|[<!-- language="C" -->
+ * #define GOLDEN_RATIO_PRIME_32 0x9e370001UL
+ *
+ * static inline uint32_t hash_identifier(const void *val)
+ * {
+ *	uint32_t hash = *(uint32_t *) val;
+ *
+ *	hash = hash * GOLDEN_RATIO_PRIME_32;
+ *	return hash;
+ * }
+ *
+ * static int equal_identifiers(const void *a, const void *b)
+ * {
+ *	uint32_t *key1 = (uint32_t *) a, *key2 = (uint32_t *) b;
+ *
+ *	return *key1 == *key2;
+ * }
+ *
+ * static void free_func(struct igt_map_entry *entry)
+ * {
+ * 	free(entry->data);
+ * }
+ *
+ * struct igt_map *map;
+ *
+ * struct record {
+ *      int foo;
+ *      uint32_t unique_identifier;
+ * };
+ *
+ * struct record *r1, r2, *record;
+ * struct igt_map_entry *entry;
+ *
+ * r1 = malloc(sizeof(struct record));
+ * map = igt_map_create(hash_identifier, equal_identifiers);
+ * igt_map_insert(map, &r1->unique_identifier, r1);
+ * igt_map_insert(map, &r2.unique_identifier, &r2);
+ *
+ * igt_map_foreach(map, entry) {
+ * 	record = entry->data;
+ * 	printf("key: %u, foo: %d\n", *(uint32_t *) entry->key, record->foo);
+ * }
+ *
+ * record = igt_map_search(map, &r1->unique_identifier);
+ * entry = igt_map_search_entry(map, &r2.unique_identifier);
+ *
+ * igt_map_remove(map, &r1->unique_identifier, free_func);
+ * igt_map_remove_entry(map, entry);
+ *
+ * igt_map_destroy(map, NULL);
+ * ]|
+ */
+
+struct igt_map_entry {
+	uint32_t hash;
+	const void *key;
+	void *data;
+};
+
+struct igt_map {
+	struct igt_map_entry *table;
+	uint32_t (*hash_function)(const void *key);
+	int (*key_equals_function)(const void *a, const void *b);
+	uint32_t size;
+	uint32_t rehash;
+	uint32_t max_entries;
+	uint32_t size_index;
+	uint32_t entries;
+	uint32_t deleted_entries;
+};
+
+struct igt_map *
+igt_map_create(uint32_t (*hash_function)(const void *key),
+	       int (*key_equals_function)(const void *a, const void *b));
+void
+igt_map_destroy(struct igt_map *map,
+		void (*delete_function)(struct igt_map_entry *entry));
+
+struct igt_map_entry *
+igt_map_insert(struct igt_map *map, const void *key, void *data);
+
+void *
+igt_map_search(struct igt_map *map, const void *key);
+
+struct igt_map_entry *
+igt_map_search_entry(struct igt_map *map, const void *key);
+
+void
+igt_map_remove(struct igt_map *map, const void *key,
+	       void (*delete_function)(struct igt_map_entry *entry));
+
+void
+igt_map_remove_entry(struct igt_map *map, struct igt_map_entry *entry);
+
+struct igt_map_entry *
+igt_map_next_entry(struct igt_map *map, struct igt_map_entry *entry);
+
+struct igt_map_entry *
+igt_map_random_entry(struct igt_map *map,
+		     int (*predicate)(struct igt_map_entry *entry));
+/**
+ * igt_map_foreach
+ * @map: igt_map pointer
+ * @entry: igt_map_entry pointer
+ *
+ * This macro is a loop which iterates over all map entries. Inside the
+ * loop block the current element is accessible via the @entry pointer.
+ *
+ * This foreach function is safe against deletion (which just replaces
+ * an entry's key with the deleted marker), but not against insertion
+ * (which may rehash the table, making @entry a dangling pointer).
+ */
+#define igt_map_foreach(map, entry)				\
+	for (entry = igt_map_next_entry(map, NULL);		\
+	     entry != NULL;					\
+	     entry = igt_map_next_entry(map, entry))
+
+/* Alternate interfaces to reduce repeated calls to hash function. */
+struct igt_map_entry *
+igt_map_search_pre_hashed(struct igt_map *map,
+			     uint32_t hash,
+			     const void *key);
+
+struct igt_map_entry *
+igt_map_insert_pre_hashed(struct igt_map *map,
+			     uint32_t hash,
+			     const void *key, void *data);
+
+#endif
diff --git a/lib/meson.build b/lib/meson.build
index 672b42062..7254faeac 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -61,6 +61,7 @@ lib_sources = [
 	'igt_core.c',
 	'igt_draw.c',
 	'igt_list.c',
+	'igt_map.c',
 	'igt_pm.c',
 	'igt_dummyload.c',
 	'uwildmat/uwildmat.c',
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 04/39] lib/igt_core: Track child process pid and tid
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Introduce variables which reduce the number of getpid()/gettid()
calls, especially for the allocator, which must know how addresses
are acquired.

When a child is spawned using igt_fork() we control its initialization
and can set child_pid implicitly. Tracking child_tid requires our
intervention in the code, doing something like this:

if (child_tid == -1)
	child_tid = gettid();

The variable is meant for use in TLS, so each thread starts with it
set to -1. This gives each thread its own "copy" and there is no risk
of using another thread's tid. For each forked child we reset
child_tid to -1 to avoid reusing an already-set value.
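
A hedged sketch of the intended allocator-side pattern
(allocator_tid() is a hypothetical helper, not part of this patch;
gettid() assumes glibc >= 2.30, otherwise a syscall(SYS_gettid)
wrapper is needed):

extern __thread pid_t child_tid;	/* from lib/igt_core.c */

static pid_t allocator_tid(void)
{
	/* Lazily cache the calling thread's tid in TLS. */
	if (child_tid == -1)
		child_tid = gettid();

	return child_tid;
}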

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/igt_core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/igt_core.c b/lib/igt_core.c
index f9dfaa0dd..2b4182f16 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -306,6 +306,10 @@ int num_test_children;
 int test_children_sz;
 bool test_child;
 
+/* For allocator purposes */
+pid_t child_pid  = -1;
+__thread pid_t child_tid  = -1;
+
 enum {
 	/*
 	 * Let the first values be used by individual tests so options don't
@@ -2302,6 +2306,8 @@ bool __igt_fork(void)
 	case 0:
 		test_child = true;
 		pthread_mutex_init(&print_mutex, NULL);
+		child_pid = getpid();
+		child_tid = -1;
 		exit_handler_count = 0;
 		reset_helper_process_list();
 		oom_adjust_for_doom();
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 05/39] lib/intel_allocator_simple: Add simple allocator
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

A simple allocator borrowed from Mesa and adapted for IGT use.

By default we prefer allocating from the top of the vm address space
(so we can catch addressing issues proactively). When
intel_allocator_simple_create() is used we exclude the last page, as
the HW tends to hang on the render engine when the full 3D pipeline is
executed from the last page. For more control over the vm range the
user can specify one with intel_allocator_simple_create_full()
(with respect to the gtt size).
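
For illustration only, the two creation paths might be used like this
(the range values below are made up):

struct intel_allocator *def, *ranged;

/* Default range: [0, gtt size), minus the last page on full ppgtt,
 * allocating from the top down. */
def = intel_allocator_simple_create(fd);

/* Explicit vm range with a bottom-up strategy; it must fit within
 * the gtt. */
ranged = intel_allocator_simple_create_full(fd, 0x10000, 1ull << 32,
					    ALLOC_STRATEGY_LOW_TO_HIGH);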

v2: fix size calculation (don't allow subtracting a number which can be
    negative, as we got for canonical addresses with bit 47 set - Jason;
    see the sketch after this list)

v3: change to the new igt_map implementation, add refcounts in the
    allocator (per Jason's review comments)

v4: return ALLOC_INVALID_ADDRESS instead of assert for failed allocation
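
The canonical-form issue from v2 in short: gen8+ offsets with bit 47
set are sign-extended into bits [63:48], so arithmetic on them can go
"negative"; the allocator strips the extension first (DECANONICAL()
is quoted from the patch, the values are illustrative):

#define GEN8_GTT_ADDRESS_WIDTH 48
#define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))

uint64_t start = 0xffff800000000000ull;	/* canonical, bit 47 set */
uint64_t off = DECANONICAL(start);	/* 0x800000000000 */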

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_allocator_simple.c | 804 +++++++++++++++++++++++++++++++++++
 1 file changed, 804 insertions(+)
 create mode 100644 lib/intel_allocator_simple.c

diff --git a/lib/intel_allocator_simple.c b/lib/intel_allocator_simple.c
new file mode 100644
index 000000000..a419955af
--- /dev/null
+++ b/lib/intel_allocator_simple.c
@@ -0,0 +1,804 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "intel_allocator.h"
+#include "intel_bufops.h"
+#include "igt_map.h"
+
+/*
+ * We limit allocator space to avoid hang when batch would be
+ * pinned in the last page.
+ */
+#define RESERVED 4096
+
+/* Avoid compilation warning */
+struct intel_allocator *intel_allocator_simple_create(int fd);
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy);
+
+struct simple_vma_heap {
+	struct igt_list_head holes;
+
+	/* If true, simple_vma_heap_alloc will prefer high addresses
+	 *
+	 * Default is true.
+	 */
+	bool alloc_high;
+};
+
+struct simple_vma_hole {
+	struct igt_list_head link;
+	uint64_t offset;
+	uint64_t size;
+};
+
+struct intel_allocator_simple {
+	struct igt_map *objects;
+	struct igt_map *reserved;
+	struct simple_vma_heap heap;
+
+	uint64_t start;
+	uint64_t end;
+
+	/* statistics */
+	uint64_t total_size;
+	uint64_t allocated_size;
+	uint64_t allocated_objects;
+	uint64_t reserved_size;
+	uint64_t reserved_areas;
+};
+
+struct intel_allocator_record {
+	uint32_t handle;
+	uint64_t offset;
+	uint64_t size;
+};
+
+#define simple_vma_foreach_hole(_hole, _heap) \
+	igt_list_for_each_entry(_hole, &(_heap)->holes, link)
+
+#define simple_vma_foreach_hole_safe(_hole, _heap, _tmp) \
+	igt_list_for_each_entry_safe(_hole, _tmp,  &(_heap)->holes, link)
+
+#define simple_vma_foreach_hole_safe_rev(_hole, _heap, _tmp) \
+	igt_list_for_each_entry_safe_reverse(_hole, _tmp,  &(_heap)->holes, link)
+
+/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
+#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
+
+/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
+#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL
+
+static inline uint32_t hash_handles(const void *val)
+{
+	uint32_t hash = *(uint32_t *) val;
+
+	hash = hash * GOLDEN_RATIO_PRIME_32;
+	return hash;
+}
+
+static int equal_handles(const void *a, const void *b)
+{
+	uint32_t *key1 = (uint32_t *) a, *key2 = (uint32_t *) b;
+
+	return *key1 == *key2;
+}
+
+static inline uint32_t hash_offsets(const void *val)
+{
+	uint64_t hash = *(uint64_t *) val;
+
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+	/* High bits are more random, so use them. */
+	return hash >> 32;
+}
+
+static int equal_offsets(const void *a, const void *b)
+{
+	uint64_t *key1 = (uint64_t *) a, *key2 = (uint64_t *) b;
+
+	return *key1 == *key2;
+}
+
+static void map_entry_free_func(struct igt_map_entry *entry)
+{
+	free(entry->data);
+}
+
+#define GEN8_GTT_ADDRESS_WIDTH 48
+#define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
+
+static void simple_vma_heap_validate(struct simple_vma_heap *heap)
+{
+	uint64_t prev_offset = 0;
+	struct simple_vma_hole *hole;
+
+	simple_vma_foreach_hole(hole, heap) {
+		igt_assert(hole->size > 0);
+
+		if (&hole->link == heap->holes.next) {
+			/*
+			 * This must be the top-most hole.  Assert that,
+			 * if it overflows, it overflows to 0, i.e. 2^64.
+			 */
+			igt_assert(hole->size + hole->offset == 0 ||
+				   hole->size + hole->offset > hole->offset);
+		} else {
+			/*
+			 * This is not the top-most hole so it must not overflow and,
+			 * in fact, must be strictly lower than the top-most hole.  If
+			 * hole->size + hole->offset == prev_offset, then we failed to
+			 * join holes during a simple_vma_heap_free.
+			 */
+			igt_assert(hole->size + hole->offset > hole->offset &&
+				   hole->size + hole->offset < prev_offset);
+		}
+		prev_offset = hole->offset;
+	}
+}
+
+
+static void simple_vma_heap_free(struct simple_vma_heap *heap,
+				 uint64_t offset, uint64_t size)
+{
+	struct simple_vma_hole *high_hole = NULL, *low_hole = NULL, *hole;
+	bool high_adjacent, low_adjacent;
+
+	/* Freeing something with a size of 0 is not valid. */
+	igt_assert(size > 0);
+
+	/*
+	 * It's possible for offset + size to wrap around if we touch the top of
+	 * the 64-bit address space, but we cannot go any higher than 2^64.
+	 */
+	igt_assert(offset + size == 0 || offset + size > offset);
+
+	simple_vma_heap_validate(heap);
+
+	/* Find immediately higher and lower holes if they exist. */
+	simple_vma_foreach_hole(hole, heap) {
+		if (hole->offset <= offset) {
+			low_hole = hole;
+			break;
+		}
+		high_hole = hole;
+	}
+
+	if (high_hole)
+		igt_assert(offset + size <= high_hole->offset);
+	high_adjacent = high_hole && offset + size == high_hole->offset;
+
+	if (low_hole) {
+		igt_assert(low_hole->offset + low_hole->size > low_hole->offset);
+		igt_assert(low_hole->offset + low_hole->size <= offset);
+	}
+	low_adjacent = low_hole && low_hole->offset + low_hole->size == offset;
+
+	if (low_adjacent && high_adjacent) {
+		/* Merge the two holes */
+		low_hole->size += size + high_hole->size;
+		igt_list_del(&high_hole->link);
+		free(high_hole);
+	} else if (low_adjacent) {
+		/* Merge into the low hole */
+		low_hole->size += size;
+	} else if (high_adjacent) {
+		/* Merge into the high hole */
+		high_hole->offset = offset;
+		high_hole->size += size;
+	} else {
+		/* Neither hole is adjacent; make a new one */
+		hole = calloc(1, sizeof(*hole));
+		igt_assert(hole);
+
+		hole->offset = offset;
+		hole->size = size;
+		/*
+		 * Add it after the high hole so we maintain high-to-low
+		 * ordering
+		 */
+		if (high_hole)
+			igt_list_add(&hole->link, &high_hole->link);
+		else
+			igt_list_add(&hole->link, &heap->holes);
+	}
+
+	simple_vma_heap_validate(heap);
+}
+
+static void simple_vma_heap_init(struct simple_vma_heap *heap,
+				 uint64_t start, uint64_t size,
+				 enum allocator_strategy strategy)
+{
+	IGT_INIT_LIST_HEAD(&heap->holes);
+	simple_vma_heap_free(heap, start, size);
+
+	switch (strategy) {
+	case ALLOC_STRATEGY_LOW_TO_HIGH:
+		heap->alloc_high = false;
+		break;
+	case ALLOC_STRATEGY_HIGH_TO_LOW:
+	default:
+		heap->alloc_high = true;
+	}
+}
+
+static void simple_vma_heap_finish(struct simple_vma_heap *heap)
+{
+	struct simple_vma_hole *hole, *tmp;
+
+	simple_vma_foreach_hole_safe(hole, heap, tmp)
+		free(hole);
+}
+
+static void simple_vma_hole_alloc(struct simple_vma_hole *hole,
+				  uint64_t offset, uint64_t size)
+{
+	struct simple_vma_hole *high_hole;
+	uint64_t waste;
+
+	igt_assert(hole->offset <= offset);
+	igt_assert(hole->size >= offset - hole->offset + size);
+
+	if (offset == hole->offset && size == hole->size) {
+		/* Just get rid of the hole. */
+		igt_list_del(&hole->link);
+		free(hole);
+		return;
+	}
+
+	igt_assert(offset - hole->offset <= hole->size - size);
+	waste = (hole->size - size) - (offset - hole->offset);
+	if (waste == 0) {
+		/* We allocated at the top.  Shrink the hole down. */
+		hole->size -= size;
+		return;
+	}
+
+	if (offset == hole->offset) {
+		/* We allocated at the bottom. Shrink the hole up. */
+		hole->offset += size;
+		hole->size -= size;
+		return;
+	}
+
+	/*
+	 * We allocated in the middle.  We need to split the old hole into two
+	 * holes, one high and one low.
+	 */
+	high_hole = calloc(1, sizeof(*hole));
+	igt_assert(high_hole);
+
+	high_hole->offset = offset + size;
+	high_hole->size = waste;
+
+	/*
+	 * Adjust the hole to be the amount of space left at the bottom of the
+	 * original hole.
+	 */
+	hole->size = offset - hole->offset;
+
+	/*
+	 * Place the new hole before the old hole so that the list is in order
+	 * from high to low.
+	 */
+	igt_list_add_tail(&high_hole->link, &hole->link);
+}
+
+static bool simple_vma_heap_alloc(struct simple_vma_heap *heap,
+				  uint64_t *offset, uint64_t size,
+				  uint64_t alignment)
+{
+	struct simple_vma_hole *hole, *tmp;
+	uint64_t misalign;
+
+	/* The caller is expected to reject zero-size allocations */
+	igt_assert(size > 0);
+	igt_assert(alignment > 0);
+
+	simple_vma_heap_validate(heap);
+
+	if (heap->alloc_high) {
+		simple_vma_foreach_hole_safe(hole, heap, tmp) {
+			if (size > hole->size)
+				continue;
+			/*
+			 * Compute the offset as the highest address where a chunk of the
+			 * given size can be without going over the top of the hole.
+			 *
+			 * This calculation is known to not overflow because we know that
+			 * hole->size + hole->offset can only overflow to 0 and size > 0.
+			 */
+			*offset = (hole->size - size) + hole->offset;
+
+			/*
+			 * Align the offset.  We align down and not up
+			 * because we are allocating from the top of the
+			 * hole and not the bottom.
+			 */
+			*offset = (*offset / alignment) * alignment;
+
+			if (*offset < hole->offset)
+				continue;
+
+			simple_vma_hole_alloc(hole, *offset, size);
+			simple_vma_heap_validate(heap);
+			return true;
+		}
+	} else {
+		simple_vma_foreach_hole_safe_rev(hole, heap, tmp) {
+			if (size > hole->size)
+				continue;
+
+			*offset = hole->offset;
+
+			/* Align the offset */
+			misalign = *offset % alignment;
+			if (misalign) {
+				uint64_t pad = alignment - misalign;
+
+				if (pad > hole->size - size)
+					continue;
+
+				*offset += pad;
+			}
+
+			simple_vma_hole_alloc(hole, *offset, size);
+			simple_vma_heap_validate(heap);
+			return true;
+		}
+	}
+
+	/* Failed to allocate */
+	return false;
+}
+
+static void intel_allocator_simple_get_address_range(struct intel_allocator *ial,
+						     uint64_t *startp,
+						     uint64_t *endp)
+{
+	struct intel_allocator_simple *ials = ial->priv;
+
+	if (startp)
+		*startp = ials->start;
+
+	if (endp)
+		*endp = ials->end;
+}
+
+static bool simple_vma_heap_alloc_addr(struct intel_allocator_simple *ials,
+				       uint64_t offset, uint64_t size)
+{
+	struct simple_vma_heap *heap = &ials->heap;
+	struct simple_vma_hole *hole, *tmp;
+
+	/* Allocating something with a size of 0 is not valid. */
+	igt_assert(size > 0);
+
+	/*
+	 * It's possible for offset + size to wrap around if we touch the top of
+	 * the 64-bit address space, but we cannot go any higher than 2^64.
+	 */
+	igt_assert(offset + size == 0 || offset + size > offset);
+
+	/* Find the hole if one exists. */
+	simple_vma_foreach_hole_safe(hole, heap, tmp) {
+		if (hole->offset > offset)
+			continue;
+
+		/*
+		 * Holes are ordered high-to-low so the first hole we find with
+		 * hole->offset <= offset is our hole.  If it's not big enough to contain the
+		 * requested range, then the allocation fails.
+		 */
+		igt_assert(hole->offset <= offset);
+		if (hole->size < offset - hole->offset + size)
+			return false;
+
+		simple_vma_hole_alloc(hole, offset, size);
+		return true;
+	}
+
+	/* We didn't find a suitable hole */
+	return false;
+}
+
+static uint64_t intel_allocator_simple_alloc(struct intel_allocator *ial,
+					     uint32_t handle, uint64_t size,
+					     uint64_t alignment)
+{
+	struct intel_allocator_record *rec;
+	struct intel_allocator_simple *ials;
+	uint64_t offset;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	igt_assert(handle);
+	alignment = alignment > 0 ? alignment : 1;
+
+	rec = igt_map_search(ials->objects, &handle);
+	if (rec) {
+		offset = rec->offset;
+		igt_assert(rec->size == size);
+	} else {
+		if (!simple_vma_heap_alloc(&ials->heap, &offset,
+					   size, alignment))
+			return ALLOC_INVALID_ADDRESS;
+
+		rec = malloc(sizeof(*rec));
+		rec->handle = handle;
+		rec->offset = offset;
+		rec->size = size;
+
+		igt_map_insert(ials->objects, &rec->handle, rec);
+		ials->allocated_objects++;
+		ials->allocated_size += size;
+	}
+
+	return offset;
+}
+
+static bool intel_allocator_simple_free(struct intel_allocator *ial, uint32_t handle)
+{
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+	struct igt_map_entry *entry;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	entry = igt_map_search_entry(ials->objects, &handle);
+	if (entry) {
+		igt_map_remove_entry(ials->objects, entry);
+		if (entry->data) {
+			rec = (struct intel_allocator_record *) entry->data;
+			simple_vma_heap_free(&ials->heap, rec->offset, rec->size);
+			ials->allocated_objects--;
+			ials->allocated_size -= rec->size;
+			free(rec);
+
+			return true;
+		}
+	}
+
+	return false;
+}
+
+static inline bool __same(const struct intel_allocator_record *rec,
+			  uint32_t handle, uint64_t size, uint64_t offset)
+{
+	return rec->handle == handle && rec->size == size &&
+			DECANONICAL(rec->offset) == DECANONICAL(offset);
+}
+
+static bool intel_allocator_simple_is_allocated(struct intel_allocator *ial,
+						uint32_t handle, uint64_t size,
+						uint64_t offset)
+{
+	struct intel_allocator_record *rec;
+	struct intel_allocator_simple *ials;
+	bool same = false;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	igt_assert(handle);
+
+	rec = igt_map_search(ials->objects, &handle);
+	if (rec && __same(rec, handle, size, offset))
+		same = true;
+
+	return same;
+}
+
+static uint64_t get_size(uint64_t start, uint64_t end)
+{
+	end = end ? end : 1ull << GEN8_GTT_ADDRESS_WIDTH;
+
+	return end - start;
+}
+
+static bool intel_allocator_simple_reserve(struct intel_allocator *ial,
+					   uint32_t handle,
+					   uint64_t start, uint64_t end)
+{
+	uint64_t size;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* don't allow end equal to 0 before decanonical */
+	igt_assert(end);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+	igt_assert(end > start || end == 0);
+	size = get_size(start, end);
+
+	if (simple_vma_heap_alloc_addr(ials, start, size)) {
+		rec = malloc(sizeof(*rec));
+		rec->handle = handle;
+		rec->offset = start;
+		rec->size = size;
+
+		igt_map_insert(ials->reserved, &rec->offset, rec);
+
+		ials->reserved_areas++;
+		ials->reserved_size += rec->size;
+		return true;
+	}
+
+	igt_debug("Failed to reserve %llx + %llx\n", (long long)start, (long long)size);
+	return false;
+}
+
+static bool intel_allocator_simple_unreserve(struct intel_allocator *ial,
+					     uint32_t handle,
+					     uint64_t start, uint64_t end)
+{
+	uint64_t size;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+	struct igt_map_entry *entry;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* don't allow end equal to 0 before decanonical */
+	igt_assert(end);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+	igt_assert(end > start || end == 0);
+	size = get_size(start, end);
+
+	entry = igt_map_search_entry(ials->reserved, &start);
+
+	if (!entry || !entry->data) {
+		igt_debug("Only reserved blocks can be unreserved\n");
+		return false;
+	}
+	rec = entry->data;
+
+	if (rec->size != size) {
+		igt_debug("Only the whole block unreservation allowed\n");
+		return false;
+	}
+
+	if (rec->handle != handle) {
+		igt_debug("Handle %u doesn't match reservation handle: %u\n",
+			 handle, rec->handle);
+		return false;
+	}
+
+	igt_map_remove_entry(ials->reserved, entry);
+	ials->reserved_areas--;
+	ials->reserved_size -= rec->size;
+	free(rec);
+	simple_vma_heap_free(&ials->heap, start, size);
+
+	return true;
+}
+
+static bool intel_allocator_simple_is_reserved(struct intel_allocator *ial,
+					       uint64_t start, uint64_t end)
+{
+	uint64_t size;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* don't allow end equal to 0 before decanonical */
+	igt_assert(end);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+	igt_assert(end > start || end == 0);
+	size = get_size(start, end);
+
+	rec = igt_map_search(ials->reserved, &start);
+
+	if (!rec)
+		return false;
+
+	if (rec->offset == start && rec->size == size)
+		return true;
+
+	return false;
+}
+
+static void intel_allocator_simple_destroy(struct intel_allocator *ial)
+{
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	simple_vma_heap_finish(&ials->heap);
+
+	igt_map_destroy(ials->objects, map_entry_free_func);
+
+	igt_map_destroy(ials->reserved, map_entry_free_func);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_simple_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_simple *ials = ial->priv;
+
+	igt_debug("<ial: %p, fd: %d> objects: %" PRIu64
+		  ", reserved_areas: %" PRIu64 "\n",
+		  ial, ial->fd,
+		  ials->allocated_objects, ials->reserved_areas);
+
+	return !ials->allocated_objects && !ials->reserved_areas;
+}
+
+static void intel_allocator_simple_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_simple *ials;
+	struct simple_vma_hole *hole;
+	struct simple_vma_heap *heap;
+	struct igt_map_entry *pos;
+	uint64_t total_free = 0, allocated_size = 0, allocated_objects = 0;
+	uint64_t reserved_size = 0, reserved_areas = 0;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	heap = &ials->heap;
+
+	igt_info("intel_allocator_simple <ial: %p, fd: %d> on "
+		 "[0x%"PRIx64" : 0x%"PRIx64"]:\n", ial, ial->fd,
+		 ials->start, ials->end);
+
+	if (full) {
+		igt_info("holes:\n");
+		simple_vma_foreach_hole(hole, heap) {
+			igt_info("offset = %"PRIu64" (0x%"PRIx64"), "
+				 "size = %"PRIu64" (0x%"PRIx64")\n",
+				 hole->offset, hole->offset, hole->size,
+				 hole->size);
+			total_free += hole->size;
+		}
+		igt_assert(total_free <= ials->total_size);
+		igt_info("total_free: %" PRIx64
+			 ", total_size: %" PRIx64
+			 ", allocated_size: %" PRIx64
+			 ", reserved_size: %" PRIx64 "\n",
+			 total_free, ials->total_size, ials->allocated_size,
+			 ials->reserved_size);
+		igt_assert(total_free ==
+			   ials->total_size - ials->allocated_size - ials->reserved_size);
+
+		igt_info("objects:\n");
+		igt_map_foreach(ials->objects, pos) {
+			struct intel_allocator_record *rec = pos->data;
+
+			igt_info("handle = %d, offset = %"PRIu64" "
+				"(0x%"PRIx64", size = %"PRIu64" (0x%"PRIx64")\n",
+				 rec->handle, rec->offset, rec->offset,
+				 rec->size, rec->size);
+			allocated_objects++;
+			allocated_size += rec->size;
+		}
+		igt_assert(ials->allocated_size == allocated_size);
+		igt_assert(ials->allocated_objects == allocated_objects);
+
+		igt_info("reserved areas:\n");
+		igt_map_foreach(ials->reserved, pos) {
+			struct intel_allocator_record *rec = pos->data;
+
+			igt_info("offset = %"PRIu64" (0x%"PRIx64", "
+				 "size = %"PRIu64" (0x%"PRIx64")\n",
+				 rec->offset, rec->offset,
+				 rec->size, rec->size);
+			reserved_areas++;
+			reserved_size += rec->size;
+		}
+		igt_assert(ials->reserved_areas == reserved_areas);
+		igt_assert(ials->reserved_size == reserved_size);
+	} else {
+		simple_vma_foreach_hole(hole, heap)
+			total_free += hole->size;
+	}
+
+	igt_info("free space: %"PRIu64"B (0x%"PRIx64") (%.2f%% full)\n"
+		 "allocated objects: %"PRIu64", reserved areas: %"PRIu64"\n",
+		 total_free, total_free,
+		 ((double) (ials->total_size - total_free) /
+		  (double) ials->total_size) * 100,
+		 ials->allocated_objects, ials->reserved_areas);
+}
+
+static struct intel_allocator *
+__intel_allocator_simple_create(int fd, uint64_t start, uint64_t end,
+				enum allocator_strategy strategy)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_simple *ials;
+
+	igt_debug("Using simple allocator\n");
+
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->get_address_range = intel_allocator_simple_get_address_range;
+	ial->alloc = intel_allocator_simple_alloc;
+	ial->free = intel_allocator_simple_free;
+	ial->is_allocated = intel_allocator_simple_is_allocated;
+	ial->reserve = intel_allocator_simple_reserve;
+	ial->unreserve = intel_allocator_simple_unreserve;
+	ial->is_reserved = intel_allocator_simple_is_reserved;
+	ial->destroy = intel_allocator_simple_destroy;
+	ial->is_empty = intel_allocator_simple_is_empty;
+	ial->print = intel_allocator_simple_print;
+	ials = ial->priv = malloc(sizeof(struct intel_allocator_simple));
+	igt_assert(ials);
+
+	ials->objects = igt_map_create(hash_handles, equal_handles);
+	ials->reserved = igt_map_create(hash_offsets, equal_offsets);
+	igt_assert(ials->objects && ials->reserved);
+
+	ials->start = start;
+	ials->end = end;
+	ials->total_size = end - start;
+	simple_vma_heap_init(&ials->heap, ials->start, ials->total_size,
+			     strategy);
+
+	ials->allocated_size = 0;
+	ials->allocated_objects = 0;
+	ials->reserved_size = 0;
+	ials->reserved_areas = 0;
+
+	return ial;
+}
+
+struct intel_allocator *
+intel_allocator_simple_create(int fd)
+{
+	uint64_t gtt_size = gem_aperture_size(fd);
+
+	if (!gem_uses_full_ppgtt(fd))
+		gtt_size /= 2;
+	else
+		gtt_size -= RESERVED;
+
+	return __intel_allocator_simple_create(fd, 0, gtt_size,
+					       ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy)
+{
+	uint64_t gtt_size = gem_aperture_size(fd);
+
+	igt_assert(end <= gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		gtt_size /= 2;
+	igt_assert(end - start <= gtt_size);
+
+	return __intel_allocator_simple_create(fd, start, end, strategy);
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 06/39] lib/intel_allocator_reloc: Add reloc allocator
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (4 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 05/39] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 07/39] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
                   ` (34 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

As relocations won't be available on discrete GPUs, we need to back
IGT with an allocator. Otherwise tests which have to cover all
generations would have to diverge the code and use conditional
constructs. In the long term this becomes cumbersome and confusing,
so we try to avoid it with a pseudo-reloc allocator whose main task
is to return incremented offsets. This way we can skip the mentioned
conditionals and just acquire offsets from the allocator.
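
For illustration, a minimal sketch of the intended usage (assuming the
intel_allocator API added later in this series; fd, ctx and the buffer
handle are placeholders):

    uint64_t ahnd, offset;

    /* One code path for all gens; RELOC just hands out increasing
     * offsets, so no relocations and no per-gen conditionals.
     */
    ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_RELOC);
    offset = intel_allocator_alloc(ahnd, handle, 4096, 4096);
    /* ... build and submit the batch using offset ... */
    intel_allocator_free(ahnd, handle);
    intel_allocator_close(ahnd);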

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_allocator_reloc.c | 190 ++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)
 create mode 100644 lib/intel_allocator_reloc.c

diff --git a/lib/intel_allocator_reloc.c b/lib/intel_allocator_reloc.c
new file mode 100644
index 000000000..abf9c30cd
--- /dev/null
+++ b/lib/intel_allocator_reloc.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "igt_rand.h"
+#include "intel_allocator.h"
+
+struct intel_allocator *intel_allocator_reloc_create(int fd);
+
+struct intel_allocator_reloc {
+	uint64_t bias;
+	uint32_t prng;
+	uint64_t gtt_size;
+	uint64_t start;
+	uint64_t end;
+	uint64_t offset;
+
+	/* statistics */
+	uint64_t allocated_objects;
+};
+
+static uint64_t get_bias(int fd)
+{
+	(void) fd;
+
+	return 256 << 10;
+}
+
+static void intel_allocator_reloc_get_address_range(struct intel_allocator *ial,
+						    uint64_t *startp,
+						    uint64_t *endp)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	if (startp)
+		*startp = ialr->start;
+
+	if (endp)
+		*endp = ialr->end;
+}
+
+static uint64_t intel_allocator_reloc_alloc(struct intel_allocator *ial,
+					    uint32_t handle, uint64_t size,
+					    uint64_t alignment)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+	uint64_t offset, aligned_offset;
+
+	(void) handle;
+
+	alignment = max(alignment, 4096);
+	aligned_offset = ALIGN(ialr->offset, alignment);
+
+	/* Check we won't exceed end */
+	if (aligned_offset + size > ialr->end)
+		aligned_offset = ALIGN(ialr->start, alignment);
+
+	offset = aligned_offset;
+	ialr->offset = offset + size;
+	ialr->allocated_objects++;
+
+	return offset;
+}
+
+static bool intel_allocator_reloc_free(struct intel_allocator *ial,
+				       uint32_t handle)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	(void) handle;
+
+	ialr->allocated_objects--;
+
+	return false;
+}
+
+static bool intel_allocator_reloc_is_allocated(struct intel_allocator *ial,
+					       uint32_t handle, uint64_t size,
+					       uint64_t offset)
+{
+	(void) ial;
+	(void) handle;
+	(void) size;
+	(void) offset;
+
+	return false;
+}
+
+static void intel_allocator_reloc_destroy(struct intel_allocator *ial)
+{
+	igt_assert(ial);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_reloc_reserve(struct intel_allocator *ial,
+					  uint32_t handle,
+					  uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_reloc_unreserve(struct intel_allocator *ial,
+					    uint32_t handle,
+					    uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_reloc_is_reserved(struct intel_allocator *ial,
+					      uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static void intel_allocator_reloc_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	(void) full;
+
+	igt_info("<ial: %p, fd: %d> allocated objects: %" PRIx64 "\n",
+		 ial, ial->fd, ialr->allocated_objects);
+}
+
+static bool intel_allocator_reloc_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	return !ialr->allocated_objects;
+}
+
+#define RESERVED 4096
+struct intel_allocator *intel_allocator_reloc_create(int fd)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_reloc *ialr;
+
+	igt_debug("Using reloc allocator\n");
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->get_address_range = intel_allocator_reloc_get_address_range;
+	ial->alloc = intel_allocator_reloc_alloc;
+	ial->free = intel_allocator_reloc_free;
+	ial->is_allocated = intel_allocator_reloc_is_allocated;
+	ial->reserve = intel_allocator_reloc_reserve;
+	ial->unreserve = intel_allocator_reloc_unreserve;
+	ial->is_reserved = intel_allocator_reloc_is_reserved;
+	ial->destroy = intel_allocator_reloc_destroy;
+	ial->print = intel_allocator_reloc_print;
+	ial->is_empty = intel_allocator_reloc_is_empty;
+
+	ialr = ial->priv = calloc(1, sizeof(*ialr));
+	igt_assert(ial->priv);
+	ialr->prng = (uint32_t) to_user_pointer(ial);
+	ialr->gtt_size = gem_aperture_size(fd);
+	igt_debug("Gtt size: %" PRId64 "\n", ialr->gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		ialr->gtt_size /= 2;
+
+	ialr->bias = ialr->offset = get_bias(fd);
+	ialr->start = ialr->bias;
+	ialr->end = ialr->gtt_size - RESERVED;
+
+	ialr->allocated_objects = 0;
+
+	return ial;
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 07/39] lib/intel_allocator_random: Add random allocator
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (5 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 06/39] lib/intel_allocator_reloc: Add reloc allocator Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 08/39] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
                   ` (33 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Sometimes we want to experiment with addresses, so randomizing them
can help us a little.
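
A hedged sketch of the expected behaviour (fd and the buffer handles
are placeholders; the API comes from the allocator core patch in this
series):

    uint64_t ahnd, o1, o2;

    ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RANDOM);
    /* Each alloc draws a fresh pseudo-random, page-aligned offset. */
    o1 = intel_allocator_alloc(ahnd, handle1, 4096, 4096);
    o2 = intel_allocator_alloc(ahnd, handle2, 4096, 4096);
    /* RANDOM doesn't track offsets; free only drops the object count. */
    intel_allocator_free(ahnd, handle1);
    intel_allocator_free(ahnd, handle2);
    intel_allocator_close(ahnd);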

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_allocator_random.c | 188 +++++++++++++++++++++++++++++++++++
 1 file changed, 188 insertions(+)
 create mode 100644 lib/intel_allocator_random.c

diff --git a/lib/intel_allocator_random.c b/lib/intel_allocator_random.c
new file mode 100644
index 000000000..d804e3318
--- /dev/null
+++ b/lib/intel_allocator_random.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "igt_rand.h"
+#include "intel_allocator.h"
+
+struct intel_allocator *intel_allocator_random_create(int fd);
+
+struct intel_allocator_random {
+	uint64_t bias;
+	uint32_t prng;
+	uint64_t gtt_size;
+	uint64_t start;
+	uint64_t end;
+
+	/* statistics */
+	uint64_t allocated_objects;
+};
+
+static uint64_t get_bias(int fd)
+{
+	(void) fd;
+
+	return 256 << 10;
+}
+
+static void intel_allocator_random_get_address_range(struct intel_allocator *ial,
+						     uint64_t *startp,
+						     uint64_t *endp)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	if (startp)
+		*startp = ialr->start;
+
+	if (endp)
+		*endp = ialr->end;
+}
+
+static uint64_t intel_allocator_random_alloc(struct intel_allocator *ial,
+					     uint32_t handle, uint64_t size,
+					     uint64_t alignment)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+	uint64_t offset;
+
+	(void) handle;
+
+	/* randomize the address, we try to avoid relocations */
+	do {
+		offset = hars_petruska_f54_1_random64(&ialr->prng);
+		offset += ialr->bias; /* Keep the low 256k clear, for negative deltas */
+		offset &= ialr->gtt_size - 1;
+		offset &= ~(alignment - 1);
+	} while (offset + size > ialr->end);
+
+	ialr->allocated_objects++;
+
+	return offset;
+}
+
+static bool intel_allocator_random_free(struct intel_allocator *ial,
+					uint32_t handle)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	(void) handle;
+
+	ialr->allocated_objects--;
+
+	return false;
+}
+
+static bool intel_allocator_random_is_allocated(struct intel_allocator *ial,
+						uint32_t handle, uint64_t size,
+						uint64_t offset)
+{
+	(void) ial;
+	(void) handle;
+	(void) size;
+	(void) offset;
+
+	return false;
+}
+
+static void intel_allocator_random_destroy(struct intel_allocator *ial)
+{
+	igt_assert(ial);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_random_reserve(struct intel_allocator *ial,
+					   uint32_t handle,
+					   uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_random_unreserve(struct intel_allocator *ial,
+					     uint32_t handle,
+					     uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_random_is_reserved(struct intel_allocator *ial,
+					       uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static void intel_allocator_random_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	(void) full;
+
+	igt_info("<ial: %p, fd: %d> allocated objects: %" PRIx64 "\n",
+		 ial, ial->fd, ialr->allocated_objects);
+}
+
+static bool intel_allocator_random_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	return !ialr->allocated_objects;
+}
+
+#define RESERVED 4096
+struct intel_allocator *intel_allocator_random_create(int fd)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_random *ialr;
+
+	igt_debug("Using random allocator\n");
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->get_address_range = intel_allocator_random_get_address_range;
+	ial->alloc = intel_allocator_random_alloc;
+	ial->free = intel_allocator_random_free;
+	ial->is_allocated = intel_allocator_random_is_allocated;
+	ial->reserve = intel_allocator_random_reserve;
+	ial->unreserve = intel_allocator_random_unreserve;
+	ial->is_reserved = intel_allocator_random_is_reserved;
+	ial->destroy = intel_allocator_random_destroy;
+	ial->print = intel_allocator_random_print;
+	ial->is_empty = intel_allocator_random_is_empty;
+
+	ialr = ial->priv = calloc(1, sizeof(*ialr));
+	igt_assert(ial->priv);
+	ialr->prng = (uint32_t) to_user_pointer(ial);
+	ialr->gtt_size = gem_aperture_size(fd);
+	igt_debug("Gtt size: %" PRId64 "\n", ialr->gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		ialr->gtt_size /= 2;
+
+	ialr->bias = get_bias(fd);
+	ialr->start = ialr->bias;
+	ialr->end = ialr->gtt_size - RESERVED;
+
+	ialr->allocated_objects = 0;
+
+	return ial;
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 08/39] lib/intel_allocator: Add intel_allocator core
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (6 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 07/39] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 09/39] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
                   ` (32 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

For discrete gens we have to stop using relocations when batch
buffers are submitted to the GPU. On cards which have ppgtt we can
use softpin and establish addresses on our own.

We added a simple allocator (taken from Mesa; works on lists) and a
random allocator to exercise batches with different addresses. All of
that works for a single VM (context), so we have to add an additional
layer (intel_allocator) to support multiprocessing / multithreading.

For the main IGT process (and for threads created in it) the
intel_allocator resolves addresses "locally", just by mutexing access
to the global allocator data (ctx/vm map). When fork() is in use,
children cannot establish addresses on their own and have to contact
the thread spawned within the main IGT process. Currently a SysV IPC
message queue was chosen as the communication channel between the
children and the allocator thread. A child calls the same functions
as the main IGT process; the only difference is that the communication
path is taken instead of acquiring addresses locally.
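
A minimal sketch of the intended flow (fd and the buffer handle are
placeholders; the API names are the ones added in this patch):

    intel_allocator_multiprocess_start();

    igt_fork(child, 2) {
        /* Runs in a forked child: requests travel over the SysV
         * message queue to the allocator thread in the main process.
         */
        uint64_t ahnd = intel_allocator_open(fd, 0,
                                             INTEL_ALLOCATOR_SIMPLE);
        uint64_t offset = intel_allocator_alloc(ahnd, handle,
                                                4096, 4096);

        /* ... use offset in a softpinned execbuf ... */

        intel_allocator_free(ahnd, handle);
        intel_allocator_close(ahnd);
    }
    igt_waitchildren();

    intel_allocator_multiprocess_stop();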

v2:

Add intel_allocator_open_full() to allow the user to pass a vm range
(see the example below).
Add strategy: NONE, LOW_TO_HIGH, HIGH_TO_LOW, passed to the allocator
backend.
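
For example (a hypothetical range; fd/ctx are placeholders),
constraining the simple allocator to a 1 MiB window allocated
bottom-up:

    uint64_t ahnd = intel_allocator_open_full(fd, ctx,
                                              0x100000, 0x200000,
                                              INTEL_ALLOCATOR_SIMPLE,
                                              ALLOC_STRATEGY_LOW_TO_HIGH);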

v3:

A child is now able to use the allocator directly, standalone. It only
needs to call intel_allocator_init() to reinitialize the appropriate
structures (see the sketch below).
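
A sketch (fd is a placeholder):

    /* In a forked child that wants a process-local allocator: */
    intel_allocator_init();
    uint64_t ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);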

v4:

Add a pseudo allocator - INTEL_ALLOCATOR_RELOC - which just increments
offsets, to avoid unnecessary conditional code.

v5:

Alter allocator core according to igt_map changes.

v6:

Add an internal version, __intel_allocator_alloc(), which returns
ALLOC_INVALID_ADDRESS without asserting (see the sketch below).
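
Sketch of the intended use (ahnd, handle and size are placeholders):

    uint64_t offset = __intel_allocator_alloc(ahnd, handle, size, 4096);
    if (offset == ALLOC_INVALID_ADDRESS) {
        /* No suitable range found; the caller decides how to recover. */
    }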

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |    1 +
 lib/Makefile.sources                          |    6 +
 lib/igt_core.c                                |   14 +
 lib/intel_allocator.c                         | 1352 +++++++++++++++++
 lib/intel_allocator.h                         |  223 +++
 lib/intel_allocator_msgchannel.c              |  187 +++
 lib/intel_allocator_msgchannel.h              |  156 ++
 lib/meson.build                               |    5 +
 8 files changed, 1944 insertions(+)
 create mode 100644 lib/intel_allocator.c
 create mode 100644 lib/intel_allocator.h
 create mode 100644 lib/intel_allocator_msgchannel.c
 create mode 100644 lib/intel_allocator_msgchannel.h

diff --git a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
index bf5ac5428..192d1df7a 100644
--- a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
+++ b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
@@ -43,6 +43,7 @@
     <xi:include href="xml/igt_vc4.xml"/>
     <xi:include href="xml/igt_vgem.xml"/>
     <xi:include href="xml/igt_x86.xml"/>
+    <xi:include href="xml/intel_allocator.xml"/>
     <xi:include href="xml/intel_batchbuffer.xml"/>
     <xi:include href="xml/intel_bufops.xml"/>
     <xi:include href="xml/intel_chipset.xml"/>
diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index 84fd7b49c..d11876cce 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -121,6 +121,12 @@ lib_source_list =	 	\
 	surfaceformat.h		\
 	sw_sync.c		\
 	sw_sync.h		\
+	intel_allocator.c	\
+	intel_allocator.h	\
+	intel_allocator_random.c	\
+	intel_allocator_simple.c	\
+	intel_allocator_msgchannel.c	\
+	intel_allocator_msgchannel.h	\
 	intel_aux_pgtable.c	\
 	intel_reg_map.c		\
 	intel_iosf.c		\
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 2b4182f16..6597acfaa 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -58,6 +58,7 @@
 #include <glib.h>
 
 #include "drmtest.h"
+#include "intel_allocator.h"
 #include "intel_chipset.h"
 #include "intel_io.h"
 #include "igt_debugfs.h"
@@ -1412,6 +1413,19 @@ __noreturn static void exit_subtest(const char *result)
 	}
 	num_test_children = 0;
 
+	/*
+	 * When a test completes - mostly in a failed state - it can leave
+	 * allocated objects. The allocator is no exception as it is a global
+	 * IGT entity: when a test allocates some ranges and then fails,
+	 * free/close will likely never be called (checking for potential
+	 * failures and cleaning up before assertions is not common in IGT).
+	 *
+	 * We therefore call intel_allocator_init() to prepare the allocator
+	 * infrastructure from scratch for each test. Init also removes
+	 * remnants from a previous allocator run (if any).
+	 */
+	intel_allocator_init();
+
 	if (!in_dynamic_subtest)
 		_igt_dynamic_tests_executed = -1;
 
diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
new file mode 100644
index 000000000..b1ec69e45
--- /dev/null
+++ b/lib/intel_allocator.c
@@ -0,0 +1,1352 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <signal.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include "igt.h"
+#include "igt_map.h"
+#include "intel_allocator.h"
+#include "intel_allocator_msgchannel.h"
+
+//#define ALLOCDBG
+#ifdef ALLOCDBG
+#define alloc_info igt_info
+#define alloc_debug igt_debug
+static const char *reqtype_str[] = {
+	[REQ_STOP]		= "stop",
+	[REQ_OPEN]		= "open",
+	[REQ_OPEN_AS]		= "open as",
+	[REQ_CLOSE]		= "close",
+	[REQ_ADDRESS_RANGE]	= "address range",
+	[REQ_ALLOC]		= "alloc",
+	[REQ_FREE]		= "free",
+	[REQ_IS_ALLOCATED]	= "is allocated",
+	[REQ_RESERVE]		= "reserve",
+	[REQ_UNRESERVE]		= "unreserve",
+	[REQ_RESERVE_IF_NOT_ALLOCATED] = "reserve-ina",
+	[REQ_IS_RESERVED]	= "is reserved",
+};
+static inline const char *reqstr(enum reqtype request_type)
+{
+	igt_assert(request_type >= REQ_STOP && request_type <= REQ_IS_RESERVED);
+	return reqtype_str[request_type];
+}
+#else
+#define alloc_info(...) {}
+#define alloc_debug(...) {}
+#endif
+
+struct allocator {
+	int fd;
+	uint32_t ctx;
+	uint32_t vm;
+	_Atomic(int32_t) refcount;
+	struct intel_allocator *ial;
+};
+
+struct handle_entry {
+	uint64_t handle;
+	struct allocator *al;
+};
+
+struct intel_allocator *intel_allocator_reloc_create(int fd);
+struct intel_allocator *intel_allocator_random_create(int fd);
+struct intel_allocator *intel_allocator_simple_create(int fd);
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy);
+
+/*
+ * Instead of trying to find the first empty handle, just get a new one.
+ * Assuming our counter is incremented 2^32 times per second (a 4GHz clock
+ * where a handle assignment takes a single cycle), the 64-bit counter
+ * would wrap around after 2^32 seconds, i.e. ~136 years.
+ *
+ *                   allocator
+ * handles           <fd, ctx>           intel allocator
+ * +-----+           +--------+          +-------------+
+ * |  1  +---------->+  fd: 3 +--------->+ data: ...   |
+ * +-----+     +---->+ ctx: 1 |          | refcount: 2 |
+ * |  2  +-----+     | ref: 2 |          +-------------+
+ * +-----+           +--------+
+ * |  3  +--+        +--------+          intel allocator
+ * +-----+  |        |  fd: 3 |          +-------------+
+ * | ... |  +------->| ctx: 2 +--------->+ data: ...   |
+ * +-----+           | ref: 1 |          | refcount: 1 |
+ * |  n  +--------+  +--------+          +-------------+
+ * +-----+        |
+ * | ... +-----+  |  allocator
+ * +-----+     |  |  <fd, vm>            intel allocator
+ * | ... +--+  |  |  +--------+          +-------------+
+ * +     +  |  |  +->+  fd: 3 +-----+--->+ data: ...   |
+ *          |  +---->+  vm: 1 |     |    | refcount: 3 |
+ *          |        | ref: 2 |     |    +-------------+
+ *          |        +--------+     |
+ *          |        +--------+     |
+ *          |        |  fd: 3 |     |
+ *          +------->+  vm: 2 +-----+
+ *                   | ref: 1 |
+ *                   +--------+
+ */
+static _Atomic(uint64_t) next_handle;
+static struct igt_map *handles;
+static struct igt_map *ctx_map;
+static struct igt_map *vm_map;
+static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
+#define GET_MAP(vm) ((vm) ? vm_map : ctx_map)
+
+static bool multiprocess;
+static pthread_t allocator_thread;
+
+static bool warn_if_not_empty;
+
+/* For allocator purposes we need to track pid/tid */
+static pid_t allocator_pid = -1;
+extern pid_t child_pid;
+extern __thread pid_t child_tid;
+
+/*
+ * - for parent process we have child_pid == -1
+ * - for child which calls intel_allocator_init() allocator_pid == child_pid
+ */
+static inline bool is_same_process(void)
+{
+	return child_pid == -1 || allocator_pid == child_pid;
+}
+
+static struct msg_channel *channel;
+
+static int send_alloc_stop(struct msg_channel *msgchan)
+{
+	struct alloc_req req = {0};
+
+	req.request_type = REQ_STOP;
+
+	return msgchan->send_req(msgchan, &req);
+}
+
+static int send_req(struct msg_channel *msgchan, pid_t tid,
+		    struct alloc_req *request)
+{
+	request->tid = tid;
+	return msgchan->send_req(msgchan, request);
+}
+
+static int recv_req(struct msg_channel *msgchan, struct alloc_req *request)
+{
+	return msgchan->recv_req(msgchan, request);
+}
+
+static int send_resp(struct msg_channel *msgchan,
+		     pid_t tid, struct alloc_resp *response)
+{
+	response->tid = tid;
+	return msgchan->send_resp(msgchan, response);
+}
+
+static int recv_resp(struct msg_channel *msgchan,
+		     pid_t tid, struct alloc_resp *response)
+{
+	response->tid = tid;
+	return msgchan->recv_resp(msgchan, response);
+}
+
+static inline void map_entry_free_func(struct igt_map_entry *entry)
+{
+	free(entry->data);
+}
+
+static uint64_t __handle_create(struct allocator *al)
+{
+	struct handle_entry *h = malloc(sizeof(*h));
+
+	igt_assert(h);
+	h->handle = atomic_fetch_add(&next_handle, 1);
+	h->al = al;
+	igt_map_insert(handles, h, h);
+
+	return h->handle;
+}
+
+static void __handle_destroy(uint64_t handle)
+{
+	struct handle_entry he = { .handle = handle };
+
+	igt_map_remove(handles, &he, map_entry_free_func);
+}
+
+static struct allocator *__allocator_find(int fd, uint32_t ctx, uint32_t vm)
+{
+	struct allocator al = { .fd = fd, .ctx = ctx, .vm = vm };
+	struct igt_map *map = GET_MAP(vm);
+
+	return igt_map_search(map, &al);
+}
+
+static struct allocator *__allocator_find_by_handle(uint64_t handle)
+{
+	struct handle_entry *h, he = { .handle = handle };
+
+	h = igt_map_search(handles, &he);
+	if (!h)
+		return NULL;
+
+	return h->al;
+}
+
+static struct allocator *__allocator_create(int fd, uint32_t ctx, uint32_t vm,
+					    struct intel_allocator *ial)
+{
+	struct igt_map *map = GET_MAP(vm);
+	struct allocator *al = malloc(sizeof(*al));
+
+	igt_assert(al);
+	igt_assert(fd == ial->fd);
+	al->fd = fd;
+	al->ctx = ctx;
+	al->vm = vm;
+	atomic_init(&al->refcount, 0);
+	al->ial = ial;
+
+	igt_map_insert(map, al, al);
+
+	return al;
+}
+
+static void __allocator_destroy(struct allocator *al)
+{
+	struct igt_map *map = GET_MAP(al->vm);
+
+	igt_map_remove(map, al, map_entry_free_func);
+}
+
+static int __allocator_get(struct allocator *al)
+{
+	struct intel_allocator *ial = al->ial;
+	int refcount;
+
+	atomic_fetch_add(&al->refcount, 1);
+	refcount = atomic_fetch_add(&ial->refcount, 1);
+	igt_assert(refcount >= 0);
+
+	return refcount;
+}
+
+static bool __allocator_put(struct allocator *al)
+{
+	struct intel_allocator *ial = al->ial;
+	bool released = false;
+	int refcount, al_refcount;
+
+	al_refcount = atomic_fetch_sub(&al->refcount, 1);
+	refcount = atomic_fetch_sub(&ial->refcount, 1);
+	igt_assert(refcount >= 1);
+	if (refcount == 1) {
+		if (!ial->is_empty(ial) && warn_if_not_empty)
+			igt_warn("Allocator not clear before destroy!\n");
+
+		/* Check allocator has also refcount == 1 */
+		igt_assert_eq(al_refcount, 1);
+
+		released = true;
+	}
+
+	return released;
+}
+
+static struct intel_allocator *intel_allocator_create(int fd,
+						      uint64_t start, uint64_t end,
+						      uint8_t allocator_type,
+						      uint8_t allocator_strategy)
+{
+	struct intel_allocator *ial = NULL;
+
+	switch (allocator_type) {
+	/*
+	 * A few words of explanation are required here.
+	 *
+	 * INTEL_ALLOCATOR_NONE allows the code to record (intel-bb is an
+	 * example) that we're not using the IGT allocator itself and likely
+	 * rely on relocations.
+	 * So trying to create a NONE allocator doesn't make sense and the
+	 * assertion below catches such invalid usage.
+	 */
+	case INTEL_ALLOCATOR_NONE:
+		igt_assert_f(allocator_type != INTEL_ALLOCATOR_NONE,
+			     "We cannot use NONE allocator\n");
+		break;
+	case INTEL_ALLOCATOR_RELOC:
+		ial = intel_allocator_reloc_create(fd);
+		break;
+	case INTEL_ALLOCATOR_RANDOM:
+		ial = intel_allocator_random_create(fd);
+		break;
+	case INTEL_ALLOCATOR_SIMPLE:
+		if (!start && !end)
+			ial = intel_allocator_simple_create(fd);
+		else
+			ial = intel_allocator_simple_create_full(fd, start, end,
+								 allocator_strategy);
+		break;
+	default:
+		igt_assert_f(ial, "Allocator type %d not implemented\n",
+			     allocator_type);
+		break;
+	}
+
+	igt_assert(ial);
+
+	ial->type = allocator_type;
+	ial->strategy = allocator_strategy;
+	pthread_mutex_init(&ial->mutex, NULL);
+
+	return ial;
+}
+
+static void intel_allocator_destroy(struct intel_allocator *ial)
+{
+	alloc_info("Destroying allocator (empty: %d)\n", ial->is_empty(ial));
+
+	ial->destroy(ial);
+}
+
+static struct allocator *allocator_open(int fd, uint32_t ctx, uint32_t vm,
+					uint64_t start, uint64_t end,
+					uint8_t allocator_type,
+					uint8_t allocator_strategy,
+					uint64_t *ahndp)
+{
+	struct intel_allocator *ial;
+	struct allocator *al;
+	const char *idstr = vm ? "vm" : "ctx";
+
+	igt_assert(ahndp);
+
+	al = __allocator_find(fd, ctx, vm);
+	if (!al) {
+		alloc_info("Allocator fd: %d, ctx: %u, vm: %u, <0x%llx : 0x%llx> "
+			    "not found, creating one\n",
+			    fd, ctx, vm, (long long) start, (long long) end);
+		ial = intel_allocator_create(fd, start, end, allocator_type,
+					     allocator_strategy);
+		al = __allocator_create(fd, ctx, vm, ial);
+	}
+
+	ial = al->ial;
+
+	igt_assert_f(ial->type == allocator_type,
+		     "Allocator type must be same for fd/%s\n", idstr);
+
+	igt_assert_f(ial->strategy == allocator_strategy,
+		     "Allocator strategy must be same or fd/%s\n", idstr);
+
+	__allocator_get(al);
+	*ahndp = __handle_create(al);
+
+	return al;
+}
+
+static struct allocator *allocator_open_as(struct allocator *base,
+					   uint32_t new_vm, uint64_t *ahndp)
+{
+	struct allocator *al;
+
+	igt_assert(ahndp);
+	al = __allocator_create(base->fd, base->ctx, new_vm, base->ial);
+	__allocator_get(al);
+	*ahndp = __handle_create(al);
+
+	return al;
+}
+
+static bool allocator_close(uint64_t ahnd)
+{
+	struct allocator *al;
+	bool released, is_empty = false;
+
+	al = __allocator_find_by_handle(ahnd);
+	if (!al) {
+		igt_warn("Cannot find handle: %llx\n", (long long) ahnd);
+		return false;
+	}
+
+	released = __allocator_put(al);
+	if (released) {
+		is_empty = al->ial->is_empty(al->ial);
+		intel_allocator_destroy(al->ial);
+	}
+
+	if (!atomic_load(&al->refcount))
+		__allocator_destroy(al);
+
+	__handle_destroy(ahnd);
+
+	return is_empty;
+}
+
+static int send_req_recv_resp(struct msg_channel *msgchan,
+			      struct alloc_req *request,
+			      struct alloc_resp *response)
+{
+	int ret;
+
+	ret = send_req(msgchan, child_tid, request);
+	if (ret < 0) {
+		igt_warn("Error sending request [type: %d]: err = %d [%s]\n",
+			 request->request_type, errno, strerror(errno));
+
+		return ret;
+	}
+
+	ret = recv_resp(msgchan, child_tid, response);
+	if (ret < 0)
+		igt_warn("Error receiving response [type: %d]: err = %d [%s]\n",
+			 request->request_type, errno, strerror(errno));
+
+	/*
+	 * The main assumption is that we receive a message whose size is > 0.
+	 * If this is fulfilled we return 0 as success.
+	 */
+	if (ret > 0)
+		ret = 0;
+
+	return ret;
+}
+
+static int handle_request(struct alloc_req *req, struct alloc_resp *resp)
+{
+	int ret;
+	long refcnt;
+
+	memset(resp, 0, sizeof(*resp));
+
+	if (is_same_process()) {
+		struct intel_allocator *ial;
+		struct allocator *al;
+		uint64_t start, end, size, ahnd;
+		uint32_t ctx, vm;
+		bool allocated, reserved, unreserved;
+		/* Used when debug is on, so avoid compilation warnings */
+		(void) ctx;
+		(void) vm;
+		(void) refcnt;
+
+		/*
+		 * The mutex only works on an allocator instance, not on stop/open/close
+		 */
+		if (req->request_type > REQ_CLOSE) {
+			/*
+			 * We have to lock map mutex because concurrent open
+			 * can lead to resizing the map.
+			 */
+			pthread_mutex_lock(&map_mutex);
+			al = __allocator_find_by_handle(req->allocator_handle);
+			pthread_mutex_unlock(&map_mutex);
+			igt_assert(al);
+
+			ial = al->ial;
+			igt_assert(ial);
+			pthread_mutex_lock(&ial->mutex);
+		}
+
+		switch (req->request_type) {
+		case REQ_STOP:
+			alloc_info("<stop>\n");
+			break;
+
+		case REQ_OPEN:
+			pthread_mutex_lock(&map_mutex);
+			al = allocator_open(req->open.fd,
+					    req->open.ctx, req->open.vm,
+					    req->open.start, req->open.end,
+					    req->open.allocator_type,
+					    req->open.allocator_strategy,
+					    &ahnd);
+			refcnt = atomic_load(&al->refcount);
+			ret = atomic_load(&al->ial->refcount);
+			pthread_mutex_unlock(&map_mutex);
+
+			resp->response_type = RESP_OPEN;
+			resp->open.allocator_handle = ahnd;
+
+			alloc_info("<open> [tid: %ld] fd: %d, ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", alloc_type: %u, al->refcnt: %ld->%ld"
+				   ", refcnt: %d->%d\n",
+				   (long) req->tid, req->open.fd, ahnd,
+				   req->open.ctx,
+				   req->open.vm, req->open.allocator_type,
+				   refcnt - 1, refcnt, ret - 1, ret);
+			break;
+
+		case REQ_OPEN_AS:
+			/* lock first to avoid concurrent close */
+			pthread_mutex_lock(&map_mutex);
+
+			al = __allocator_find_by_handle(req->allocator_handle);
+			resp->response_type = RESP_OPEN_AS;
+
+			if (!al) {
+				alloc_info("<open as> [tid: %ld] ahnd: %" PRIx64
+					   " -> no handle\n",
+					   (long) req->tid, req->allocator_handle);
+				pthread_mutex_unlock(&map_mutex);
+				break;
+			}
+
+			if (!al->vm) {
+				alloc_info("<open as> [tid: %ld] ahnd: %" PRIx64
+					   " -> only open as for <fd, vm> is possible\n",
+					   (long) req->tid, req->allocator_handle);
+				pthread_mutex_unlock(&map_mutex);
+				break;
+			}
+
+			al = allocator_open_as(al, req->open_as.new_vm, &ahnd);
+			refcnt = atomic_load(&al->refcount);
+			ret = atomic_load(&al->ial->refcount);
+			pthread_mutex_unlock(&map_mutex);
+
+			resp->response_type = RESP_OPEN_AS;
+			resp->open.allocator_handle = ahnd;
+
+			alloc_info("<open as> [tid: %ld] fd: %d, ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", alloc_type: %u, al->refcnt: %ld->%ld"
+				   ", refcnt: %d->%d\n",
+				   (long) req->tid, al->fd, ahnd,
+				   al->ctx, al->vm, al->ial->type,
+				   refcnt - 1, refcnt, ret - 1, ret);
+			break;
+
+		case REQ_CLOSE:
+			pthread_mutex_lock(&map_mutex);
+			al = __allocator_find_by_handle(req->allocator_handle);
+			resp->response_type = RESP_CLOSE;
+
+			if (!al) {
+				alloc_info("<close> [tid: %ld] ahnd: %" PRIx64
+					   " -> no handle\n",
+					   (long) req->tid, req->allocator_handle);
+				pthread_mutex_unlock(&map_mutex);
+				break;
+			}
+
+			resp->response_type = RESP_CLOSE;
+			ctx = al->ctx;
+			vm = al->vm;
+
+			refcnt = atomic_load(&al->refcount);
+			ret = atomic_load(&al->ial->refcount);
+			resp->close.is_empty = allocator_close(req->allocator_handle);
+			pthread_mutex_unlock(&map_mutex);
+
+			alloc_info("<close> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", is_empty: %d, al->refcount: %ld->%ld"
+				   ", refcnt: %d->%d\n",
+				   (long) req->tid, req->allocator_handle,
+				   ctx, vm, resp->close.is_empty,
+				   refcnt, refcnt - 1, ret, ret - 1);
+
+			break;
+
+		case REQ_ADDRESS_RANGE:
+			resp->response_type = RESP_ADDRESS_RANGE;
+			ial->get_address_range(ial, &start, &end);
+			resp->address_range.start = start;
+			resp->address_range.end = end;
+			alloc_info("<address range> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRId64 "\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, start, end);
+			break;
+
+		case REQ_ALLOC:
+			resp->response_type = RESP_ALLOC;
+			resp->alloc.offset = ial->alloc(ial,
+							req->alloc.handle,
+							req->alloc.size,
+							req->alloc.alignment);
+			alloc_info("<alloc> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u, handle: %u"
+				   ", size: 0x%" PRIx64 ", offset: 0x%" PRIx64
+				   ", alignment: 0x%" PRIx64 "\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm,
+				   req->alloc.handle, req->alloc.size,
+				   resp->alloc.offset, req->alloc.alignment);
+			break;
+
+		case REQ_FREE:
+			resp->response_type = RESP_FREE;
+			resp->free.freed = ial->free(ial, req->free.handle);
+			alloc_info("<free> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", handle: %u, freed: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm,
+				   req->free.handle, resp->free.freed);
+			break;
+
+		case REQ_IS_ALLOCATED:
+			resp->response_type = RESP_IS_ALLOCATED;
+			allocated = ial->is_allocated(ial,
+						      req->is_allocated.handle,
+						      req->is_allocated.size,
+						      req->is_allocated.offset);
+			resp->is_allocated.allocated = allocated;
+			alloc_info("<is allocated> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", offset: 0x%" PRIx64
+				   ", allocated: %d\n", (long) req->tid,
+				   req->allocator_handle, al->ctx, al->vm,
+				   req->is_allocated.offset, allocated);
+			break;
+
+		case REQ_RESERVE:
+			resp->response_type = RESP_RESERVE;
+			reserved = ial->reserve(ial,
+						req->reserve.handle,
+						req->reserve.start,
+						req->reserve.end);
+			resp->reserve.reserved = reserved;
+			alloc_info("<reserve> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u, handle: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", reserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, req->reserve.handle,
+				   req->reserve.start, req->reserve.end, reserved);
+			break;
+
+		case REQ_UNRESERVE:
+			resp->response_type = RESP_UNRESERVE;
+			unreserved = ial->unreserve(ial,
+						    req->unreserve.handle,
+						    req->unreserve.start,
+						    req->unreserve.end);
+			resp->unreserve.unreserved = unreserved;
+			alloc_info("<unreserve> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u, handle: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", unreserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, req->unreserve.handle,
+				   req->unreserve.start, req->unreserve.end,
+				   unreserved);
+			break;
+
+		case REQ_IS_RESERVED:
+			resp->response_type = RESP_IS_RESERVED;
+			reserved = ial->is_reserved(ial,
+						    req->is_reserved.start,
+						    req->is_reserved.end);
+			resp->is_reserved.reserved = reserved;
+			alloc_info("<is reserved> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", reserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, req->is_reserved.start,
+				   req->is_reserved.end, reserved);
+			break;
+
+		case REQ_RESERVE_IF_NOT_ALLOCATED:
+			resp->response_type = RESP_RESERVE_IF_NOT_ALLOCATED;
+			size = req->reserve.end - req->reserve.start;
+
+			allocated = ial->is_allocated(ial, req->reserve.handle,
+						      size, req->reserve.start);
+			if (allocated) {
+				resp->reserve_if_not_allocated.allocated = allocated;
+				alloc_info("<reserve if not allocated> [tid: %ld] "
+					   "ahnd: %" PRIx64 ", ctx: %u, vm: %u"
+					   ", handle: %u, size: 0x%lx"
+					   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+					   ", allocated: %d, reserved: %d\n",
+					   (long) req->tid, req->allocator_handle,
+					   al->ctx, al->vm, req->reserve.handle,
+					   (long) size, req->reserve.start,
+					   req->reserve.end, allocated, false);
+				break;
+			}
+
+			reserved = ial->reserve(ial,
+						req->reserve.handle,
+						req->reserve.start,
+						req->reserve.end);
+			resp->reserve_if_not_allocated.reserved = reserved;
+			alloc_info("<reserve if not allocated> [tid: %ld] "
+				   "ahnd: %" PRIx64 ", ctx: %u, vm: %u"
+				   ", handle: %u, start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", allocated: %d, reserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm,
+				   req->reserve.handle,
+				   req->reserve.start, req->reserve.end,
+				   false, reserved);
+			break;
+		}
+
+		if (req->request_type > REQ_CLOSE)
+			pthread_mutex_unlock(&ial->mutex);
+
+		return 0;
+	}
+
+	ret = send_req_recv_resp(channel, req, resp);
+
+	if (ret < 0)
+		exit(0);
+
+	return ret;
+}
+
+static void kill_children(int sig)
+{
+	sighandler_t old;
+
+	old = signal(sig, SIG_IGN);
+	igt_assert(old != SIG_ERR);
+	kill(-getpgrp(), sig);
+	igt_assert(signal(sig, old) != SIG_ERR);
+}
+
+static void *allocator_thread_loop(void *data)
+{
+	struct alloc_req req;
+	struct alloc_resp resp;
+	int ret;
+	(void) data;
+
+	alloc_info("Allocator pid: %ld, tid: %ld\n",
+		   (long) allocator_pid, (long) gettid());
+	alloc_info("Entering allocator loop\n");
+
+	while (1) {
+		ret = recv_req(channel, &req);
+
+		if (ret == -1) {
+			igt_warn("Error receiving request in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+			kill_children(SIGINT);
+			return (void *) -1;
+		}
+
+		/* Fake message to stop the thread */
+		if (req.request_type == REQ_STOP) {
+			alloc_info("<stop request>\n");
+			break;
+		}
+
+		ret = handle_request(&req, &resp);
+		if (ret) {
+			igt_warn("Error handling request in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+			break;
+		}
+
+		ret = send_resp(channel, req.tid, &resp);
+		if (ret) {
+			igt_warn("Error sending response in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+
+			kill_children(SIGINT);
+			return (void *) -1;
+		}
+	}
+
+	return NULL;
+}
+
+/**
+ * intel_allocator_multiprocess_start:
+ *
+ * Function turns on intel_allocator multiprocess mode, which means all
+ * allocations from child processes are performed in a separate thread
+ * within the main igt process. Children are aware of the situation and use
+ * an interprocess communication channel to send/receive messages
+ * (open, close, alloc, free, ...) to/from the allocator thread.
+ *
+ * Must be used when you want to use an allocator in non single-process code.
+ * All allocations in threads spawned in the main igt process are handled by
+ * mutexing, not by sending/receiving messages to/from the allocator thread.
+ *
+ * Note. This destroys all previously created allocators and their content.
+ */
+void intel_allocator_multiprocess_start(void)
+{
+	alloc_info("allocator multiprocess start\n");
+
+	igt_assert_f(child_pid == -1,
+		     "Allocator thread can be spawned only in main IGT process\n");
+	intel_allocator_init();
+
+	multiprocess = true;
+	channel->init(channel);
+
+	pthread_create(&allocator_thread, NULL,
+		       allocator_thread_loop, NULL);
+}
+
+/**
+ * intel_allocator_multiprocess_stop:
+ *
+ * Function turns off intel_allocator multiprocess mode, which means
+ * stopping the allocator thread and deinitializing its data.
+ */
+void intel_allocator_multiprocess_stop(void)
+{
+	alloc_info("allocator multiprocess stop\n");
+
+	if (multiprocess) {
+		send_alloc_stop(channel);
+		/* Deinit, this should stop all blocked syscalls, if any */
+		channel->deinit(channel);
+		pthread_join(allocator_thread, NULL);
+		/* But we're not sure whether a child got stuck */
+		kill_children(SIGINT);
+		igt_waitchildren_timeout(5, "Stopping children");
+		multiprocess = false;
+	}
+}
+
+static uint64_t __intel_allocator_open_full(int fd, uint32_t ctx,
+					    uint32_t vm,
+					    uint64_t start, uint64_t end,
+					    uint8_t allocator_type,
+					    enum allocator_strategy strategy)
+{
+	struct alloc_req req = { .request_type = REQ_OPEN,
+				 .open.fd = fd,
+				 .open.ctx = ctx,
+				 .open.vm = vm,
+				 .open.start = start,
+				 .open.end = end,
+				 .open.allocator_type = allocator_type,
+				 .open.allocator_strategy = strategy };
+	struct alloc_resp resp;
+
+	/* Get child_tid only once at open() */
+	if (child_tid == -1)
+		child_tid = gettid();
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.open.allocator_handle);
+	igt_assert(resp.response_type == RESP_OPEN);
+
+	return resp.open.allocator_handle;
+}
+
+/**
+ * intel_allocator_open_full:
+ * @fd: i915 descriptor
+ * @ctx: context
+ * @start: address of the beginning
+ * @end: address of the end
+ * @allocator_type: one of INTEL_ALLOCATOR_* define
+ * @strategy: passed to the allocator to define the strategy (like order
+ * of allocation, see notes below).
+ *
+ * Function opens an allocator instance within the [@start, @end) vm range
+ * for the given @fd and @ctx and returns its handle. If the allocator for
+ * such a pair doesn't exist it is created with refcount = 1.
+ * Parallel opens return the same handle, bumping its refcount.
+ *
+ * Returns: unique handle to the currently opened allocator.
+ *
+ * Notes:
+ * Strategy is generally used internally by the underlying allocator:
+ *
+ * For SIMPLE allocator:
+ * - ALLOC_STRATEGY_HIGH_TO_LOW means topmost addresses are allocated first,
+ * - ALLOC_STRATEGY_LOW_TO_HIGH opposite, allocation starts from lowest
+ *   addresses.
+ *
+ * For RANDOM allocator:
+ * - no strategy is currently implemented.
+ */
+uint64_t intel_allocator_open_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   uint8_t allocator_type,
+				   enum allocator_strategy strategy)
+{
+	return __intel_allocator_open_full(fd, ctx, 0, start, end,
+					   allocator_type, strategy);
+}
+
+uint64_t intel_allocator_open_vm_full(int fd, uint32_t vm,
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy)
+{
+	igt_assert(vm != 0);
+	return __intel_allocator_open_full(fd, 0, vm, start, end,
+					   allocator_type, strategy);
+}
+
+/**
+ * intel_allocator_open:
+ * @fd: i915 descriptor
+ * @ctx: context
+ * @allocator_type: one of INTEL_ALLOCATOR_* define
+ *
+ * Function opens an allocator instance for the given @fd and @ctx and returns
+ * its handle. If the allocator for such a pair doesn't exist it is created
+ * with refcount = 1. Parallel opens return the same handle, bumping its refcount.
+ *
+ * Returns: unique handle to the currently opened allocator.
+ *
+ * Notes: we pass ALLOC_STRATEGY_HIGH_TO_LOW as the default; playing with
+ * higher addresses makes it easier to find addressing issues (like passing
+ * non-canonical offsets, which won't be caught unless bit 47 is set).
+ */
+uint64_t intel_allocator_open(int fd, uint32_t ctx, uint8_t allocator_type)
+{
+	return intel_allocator_open_full(fd, ctx, 0, 0, allocator_type,
+					 ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+uint64_t intel_allocator_open_vm(int fd, uint32_t vm, uint8_t allocator_type)
+{
+	return intel_allocator_open_vm_full(fd, vm, 0, 0, allocator_type,
+					    ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+uint64_t intel_allocator_open_vm_as(uint64_t allocator_handle, uint32_t new_vm)
+{
+	struct alloc_req req = { .request_type = REQ_OPEN_AS,
+				 .allocator_handle = allocator_handle,
+				 .open_as.new_vm = new_vm };
+	struct alloc_resp resp;
+
+	/* Get child_tid only once at open() */
+	if (child_tid == -1)
+		child_tid = gettid();
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.open_as.allocator_handle);
+	igt_assert(resp.response_type == RESP_OPEN_AS);
+
+	return resp.open_as.allocator_handle;
+}
+
+/**
+ * intel_allocator_close:
+ * @allocator_handle: handle to the allocator that will be closed
+ *
+ * Function decreases the allocator refcount for the given @handle.
+ * When the refcount reaches zero the allocator is closed (destroyed) and
+ * all allocated / reserved areas are freed.
+ *
+ * Returns: true if closed allocator was empty, false otherwise.
+ */
+bool intel_allocator_close(uint64_t allocator_handle)
+{
+	struct alloc_req req = { .request_type = REQ_CLOSE,
+				 .allocator_handle = allocator_handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_CLOSE);
+
+	return resp.close.is_empty;
+}
+
+/**
+ * intel_allocator_get_address_range:
+ * @allocator_handle: handle to an allocator
+ * @startp: pointer to the variable where function writes starting offset
+ * @endp: pointer to the variable where function writes ending offset
+ *
+ * Function fills @startp, @endp with respectively, starting and ending offset
+ * of the allocator working virtual address space range.
+ *
+ * Note. Allocator working ranges can differ depending on the device or
+ * the allocator type, so before reserving a specific offset it is good
+ * practice to ensure the address is within the accepted range.
+ */
+void intel_allocator_get_address_range(uint64_t allocator_handle,
+				       uint64_t *startp, uint64_t *endp)
+{
+	struct alloc_req req = { .request_type = REQ_ADDRESS_RANGE,
+				 .allocator_handle = allocator_handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_ADDRESS_RANGE);
+
+	if (startp)
+		*startp = resp.address_range.start;
+
+	if (endp)
+		*endp = resp.address_range.end;
+}
+
+/**
+ * __intel_allocator_alloc:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @alignment: determines object alignment
+ *
+ * Function finds and returns the most suitable offset with given @alignment
+ * for an object with @size identified by the @handle.
+ *
+ * Returns: currently assigned address for a given object. If an object was
+ * already allocated returns same address. If allocator can't find suitable
+ * range returns ALLOC_INVALID_ADDRESS.
+ */
+uint64_t __intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+				 uint64_t size, uint64_t alignment)
+{
+	struct alloc_req req = { .request_type = REQ_ALLOC,
+				 .allocator_handle = allocator_handle,
+				 .alloc.handle = handle,
+				 .alloc.size = size,
+				 .alloc.alignment = alignment };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_ALLOC);
+
+	return resp.alloc.offset;
+}
+
+/**
+ * intel_allocator_alloc:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @alignment: determines object alignment
+ *
+ * Same as __intel_allocator_alloc() but asserts if allocator can't return
+ * valid address.
+ */
+uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t alignment)
+{
+	uint64_t offset;
+
+	offset = __intel_allocator_alloc(allocator_handle, handle,
+					 size, alignment);
+	igt_assert(offset != ALLOC_INVALID_ADDRESS);
+
+	return offset;
+}
+
+/**
+ * intel_allocator_free:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object to be freed
+ *
+ * Function frees the object identified by the @handle in the allocator,
+ * which makes its offset allocable again.
+ *
+ * Note. Reserved objects can only be freed by an #intel_allocator_unreserve
+ * function.
+ *
+ * Returns: true if the object was successfully freed, otherwise false.
+ */
+bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle)
+{
+	struct alloc_req req = { .request_type = REQ_FREE,
+				 .allocator_handle = allocator_handle,
+				 .free.handle = handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_FREE);
+
+	return resp.free.freed;
+}
+
+/**
+ * intel_allocator_is_allocated:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function checks whether the object identified by the @handle and @size
+ * is allocated at the @offset.
+ *
+ * Returns: true if the object is currently allocated at the @offset,
+ * otherwise false.
+ */
+bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
+				  uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_IS_ALLOCATED,
+				 .allocator_handle = allocator_handle,
+				 .is_allocated.handle = handle,
+				 .is_allocated.size = size,
+				 .is_allocated.offset = offset };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_IS_ALLOCATED);
+
+	return resp.is_allocated.allocated;
+}
+
+/**
+ * intel_allocator_reserve:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function reserves space that starts at the @offset and has @size.
+ * Optionally we can pass @handle to mark that space is for a specific
+ * object, otherwise pass -1.
+ *
+ * Note. Reserved space is identified by offset and size, not by handle,
+ * so an object can have multiple spaces reserved under its handle.
+ *
+ * Returns: true if space is successfully reserved, otherwise false.
+ */
+bool intel_allocator_reserve(uint64_t allocator_handle, uint32_t handle,
+			     uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_RESERVE,
+				 .allocator_handle = allocator_handle,
+				 .reserve.handle = handle,
+				 .reserve.start = offset,
+				 .reserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_RESERVE);
+
+	return resp.reserve.reserved;
+}
+
+/**
+ * intel_allocator_unreserve:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function unreserves space that starts at the @offset, @size and @handle.
+ *
+ * Note. @handle, @size and @offset have to match those used in the reservation,
+ * i.e. unreserving with the same offset but a smaller size will fail.
+ *
+ * Returns: true if the space is successfully unreserved, otherwise false.
+ */
+bool intel_allocator_unreserve(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_UNRESERVE,
+				 .allocator_handle = allocator_handle,
+				 .unreserve.handle = handle,
+				 .unreserve.start = offset,
+				 .unreserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_UNRESERVE);
+
+	return resp.unreserve.unreserved;
+}
+
+/**
+ * intel_allocator_is_reserved:
+ * @allocator_handle: handle to an allocator
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function checks whether space starting at the @offset and @size is
+ * currently under reservation.
+ *
+ * Note. @size and @offset have to match those used in the reservation,
+ * i.e. a check with the same offset but a smaller size will fail.
+ *
+ * Returns: true if the space is reserved, otherwise false.
+ */
+bool intel_allocator_is_reserved(uint64_t allocator_handle,
+				 uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_IS_RESERVED,
+				 .allocator_handle = allocator_handle,
+				 .is_reserved.start = offset,
+				 .is_reserved.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_IS_RESERVED);
+
+	return resp.is_reserved.reserved;
+}
+
+/**
+ * intel_allocator_reserve_if_not_allocated:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ * @is_allocatedp: if not NULL, the function writes the object allocation
+ * status (true/false) there
+ *
+ * Function checks whether the object identified by the @handle and @size
+ * is allocated at the @offset and writes the result to @is_allocatedp.
+ * If it is not allocated, the space is reserved at the given @offset.
+ *
+ * Returns: true if the space for an object was reserved, otherwise false.
+ */
+bool intel_allocator_reserve_if_not_allocated(uint64_t allocator_handle,
+					      uint32_t handle,
+					      uint64_t size, uint64_t offset,
+					      bool *is_allocatedp)
+{
+	struct alloc_req req = { .request_type = REQ_RESERVE_IF_NOT_ALLOCATED,
+				 .allocator_handle = allocator_handle,
+				 .reserve.handle = handle,
+				 .reserve.start = offset,
+				 .reserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_RESERVE_IF_NOT_ALLOCATED);
+
+	if (is_allocatedp)
+		*is_allocatedp = resp.reserve_if_not_allocated.allocated;
+
+	return resp.reserve_if_not_allocated.reserved;
+}
+
+/**
+ * intel_allocator_print:
+ * @allocator_handle: handle to an allocator
+ *
+ * Function prints the statistics and contents of the allocator.
+ * Mainly for debugging purposes.
+ *
+ * Note: printing is possible only in the main process.
+ **/
+void intel_allocator_print(uint64_t allocator_handle)
+{
+	igt_assert(allocator_handle);
+
+	if (!multiprocess || is_same_process()) {
+		struct allocator *al;
+
+		/* Look up the handle under the mutex to avoid races */
+		pthread_mutex_lock(&map_mutex);
+		al = __allocator_find_by_handle(allocator_handle);
+		al->ial->print(al->ial, true);
+		pthread_mutex_unlock(&map_mutex);
+	} else {
+		igt_warn("Printing stats is possible in the main process only\n");
+	}
+}
+
+static int equal_handles(const void *key1, const void *key2)
+{
+	const struct handle_entry *h1 = key1, *h2 = key2;
+
+	alloc_debug("h1: %llx, h2: %llx\n",
+		   (long long) h1->handle, (long long) h2->handle);
+
+	return h1->handle == h2->handle;
+}
+
+static int equal_ctx(const void *key1, const void *key2)
+{
+	const struct allocator *a1 = key1, *a2 = key2;
+
+	alloc_debug("a1: <fd: %d, ctx: %u>, a2 <fd: %d, ctx: %u>\n",
+		   a1->fd, a1->ctx, a2->fd, a2->ctx);
+
+	return a1->fd == a2->fd && a1->ctx == a2->ctx;
+}
+
+static int equal_vm(const void *key1, const void *key2)
+{
+	const struct allocator *a1 = key1, *a2 = key2;
+
+	alloc_debug("a1: <fd: %d, vm: %u>, a2 <fd: %d, vm: %u>\n",
+		   a1->fd, a1->vm, a2->fd, a2->vm);
+
+	return a1->fd == a2->fd && a1->vm == a2->vm;
+}
+
+/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
+#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
+
+static inline uint32_t hash_handles(const void *val)
+{
+	uint32_t hash = ((struct handle_entry *) val)->handle;
+
+	hash = hash * GOLDEN_RATIO_PRIME_32;
+	return hash;
+}
+
+static inline uint32_t hash_instance(const void *val)
+{
+	uint64_t hash = ((struct allocator *) val)->fd;
+
+	hash = hash * GOLDEN_RATIO_PRIME_32;
+	return hash;
+}
+
+static void __free_maps(struct igt_map *map, bool close_allocators)
+{
+	struct igt_map_entry *pos;
+	const struct handle_entry *h;
+
+	if (!map)
+		return;
+
+	if (close_allocators)
+		igt_map_foreach(map, pos) {
+			h = pos->key;
+			allocator_close(h->handle);
+		}
+
+	igt_map_destroy(map, map_entry_free_func);
+}
+
+/**
+ * intel_allocator_init:
+ *
+ * Function initializes the allocator infrastructure. A subsequent call
+ * overrides the current infrastructure and destroys any allocators existing
+ * there. It is called in igt_constructor.
+ **/
+void intel_allocator_init(void)
+{
+	alloc_info("Prepare an allocator infrastructure\n");
+
+	allocator_pid = getpid();
+	alloc_info("Allocator pid: %ld\n", (long) allocator_pid);
+
+	__free_maps(handles, true);
+	__free_maps(ctx_map, false);
+	__free_maps(vm_map, false);
+
+	atomic_init(&next_handle, 1);
+	handles = igt_map_create(hash_handles, equal_handles);
+	ctx_map = igt_map_create(hash_instance, equal_ctx);
+	vm_map = igt_map_create(hash_instance, equal_vm);
+	igt_assert(handles && ctx_map && vm_map);
+
+	channel = intel_allocator_get_msgchannel(CHANNEL_SYSVIPC_MSGQUEUE);
+}
+
+igt_constructor {
+	intel_allocator_init();
+}
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
new file mode 100644
index 000000000..440c5992d
--- /dev/null
+++ b/lib/intel_allocator.h
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __INTEL_ALLOCATOR_H__
+#define __INTEL_ALLOCATOR_H__
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <pthread.h>
+#include <stdatomic.h>
+
+/**
+ * SECTION:intel_allocator
+ * @short_description: igt implementation of allocator
+ * @title: Intel allocator
+ * @include: intel_allocator.h
+ *
+ * # Introduction
+ *
+ * With the era of discrete cards we were asked to adapt IGT to handle
+ * addresses in userspace only (softpin, without support for relocations).
+ * Writing an allocator for a single purpose would be relatively easy,
+ * but supporting different tests with different requirements became a
+ * quite complicated task where a couple of scenarios may not be covered yet.
+ *
+ * # Assumptions
+ *
+ * - Allocator has to work in multiprocess / multithread environment.
+ * - Allocator backend (algorithm) should be pluggable. Currently we support
+ *   SIMPLE (borrowed from the Mesa allocator), RELOC (pseudo allocator which
+ *   returns incremented addresses without checking for overlaps)
+ *   and RANDOM (pseudo allocator which randomizes addresses without
+ *   checking for overlaps).
+ * - It has to integrate with intel-bb (our simpler libdrm replacement used
+ *   in a couple of tests).
+ *
+ * # Implementation
+ *
+ * ## Single process (allows multiple threads)
+ *
+ * For a single process we don't need to create a dedicated entity (a kind
+ * of arbiter) to resolve allocations. Simple locking over the allocator
+ * data structure is enough. A basic usage example would be:
+ *
+ * |[<!-- language="c" -->
+ * struct object {
+ *      uint32_t handle;
+ *      uint64_t offset;
+ *      uint64_t size;
+ * };
+ *
+ * struct object obj1, obj2;
+ * uint64_t ahnd, startp, endp, size = 4096, align = 1 << 13;
+ * int fd = -1;
+ *
+ * fd = drm_open_driver(DRIVER_INTEL);
+ * ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+ *
+ * obj1.handle = gem_create(fd, 4096);
+ * obj2.handle = gem_create(fd, 4096);
+ *
+ * // Reserve hole for an object in given address.
+ * // In this example the first possible address.
+ * intel_allocator_get_address_range(ahnd, &startp, &endp);
+ * obj1.offset = startp;
+ * igt_assert(intel_allocator_reserve(ahnd, obj1.handle, size, startp));
+ *
+ * // Get the most suitable offset for the object. Preferred way.
+ * obj2.offset = intel_allocator_alloc(ahnd, obj2.handle, size, align);
+ *
+ *  ...
+ *
+ * // Reserved addresses can be only freed by unreserve.
+ * intel_allocator_unreserve(ahnd, obj1.handle, size, obj1.offset);
+ * intel_allocator_free(ahnd, obj2.handle);
+ *
+ * gem_close(fd, obj1.handle);
+ * gem_close(fd, obj2.handle);
+ * ]|
+ *
+ * Description:
+ * - ahnd is the allocator handle (the vm space handled by it)
+ * - we call get_address_range() to get the start/end range provided by the
+ *   allocator (we haven't specified a range in open so the allocator code
+ *   will assume some safe address range - we don't want to exercise
+ *   potential HW bugs on the last page)
+ * - the alloc() / free() pair just gets an address for a gem object
+ *   proposed by the allocator
+ * - the reserve() / unreserve() pair gives us full control of acquiring and
+ *   returning the range we're interested in
+ *
+ * ## Multiple processes
+ *
+ * When a process forks and its child uses the same fd vm, its address space
+ * is also the same. Some coordination - in this case interprocess
+ * communication - is required to assign proper addresses to gem objects and
+ * avoid collisions. An additional thread is spawned for such cases to serve
+ * the needs of child processes. It uses a communication channel to receive
+ * a request, perform the action (alloc, free, ...) and send a response to
+ * the requesting process. Currently a SYSVIPC message queue was chosen for
+ * this but it could be replaced by another mechanism. Allocation techniques
+ * are the same as for a single process, we just need to wrap such code with:
+ *
+ * |[<!-- language="c" -->
+ * intel_allocator_multiprocess_start();
+ *
+ * ... allocation code (open, close, alloc, free, ...)
+ *
+ * intel_allocator_multiprocess_stop();
+ * ]|
+ *
+ * Calling start() spawns an additional allocator thread ready to handle
+ * incoming allocation requests (open / close are also requests in that case).
+ *
+ * Calling stop() requests the allocator thread to stop, unblocking all
+ * pending children (if any).
+ */
+
+enum allocator_strategy {
+	ALLOC_STRATEGY_NONE,
+	ALLOC_STRATEGY_LOW_TO_HIGH,
+	ALLOC_STRATEGY_HIGH_TO_LOW
+};
+
+struct intel_allocator {
+	int fd;
+	uint8_t type;
+	enum allocator_strategy strategy;
+	_Atomic(int32_t) refcount;
+	pthread_mutex_t mutex;
+
+	/* allocator's private structure */
+	void *priv;
+
+	void (*get_address_range)(struct intel_allocator *ial,
+				  uint64_t *startp, uint64_t *endp);
+	uint64_t (*alloc)(struct intel_allocator *ial, uint32_t handle,
+			  uint64_t size, uint64_t alignment);
+	bool (*is_allocated)(struct intel_allocator *ial, uint32_t handle,
+			     uint64_t size, uint64_t alignment);
+	bool (*reserve)(struct intel_allocator *ial,
+			uint32_t handle, uint64_t start, uint64_t size);
+	bool (*unreserve)(struct intel_allocator *ial,
+			  uint32_t handle, uint64_t start, uint64_t size);
+	bool (*is_reserved)(struct intel_allocator *ial,
+			    uint64_t start, uint64_t size);
+	bool (*free)(struct intel_allocator *ial, uint32_t handle);
+
+	void (*destroy)(struct intel_allocator *ial);
+
+	bool (*is_empty)(struct intel_allocator *ial);
+
+	void (*print)(struct intel_allocator *ial, bool full);
+};
+
+void intel_allocator_init(void);
+void intel_allocator_multiprocess_start(void);
+void intel_allocator_multiprocess_stop(void);
+
+uint64_t intel_allocator_open(int fd, uint32_t ctx, uint8_t allocator_type);
+uint64_t intel_allocator_open_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   uint8_t allocator_type,
+				   enum allocator_strategy strategy);
+uint64_t intel_allocator_open_vm(int fd, uint32_t vm, uint8_t allocator_type);
+uint64_t intel_allocator_open_vm_full(int fd, uint32_t vm,
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy);
+
+uint64_t intel_allocator_open_vm_as(uint64_t allocator_handle, uint32_t new_vm);
+bool intel_allocator_close(uint64_t allocator_handle);
+void intel_allocator_get_address_range(uint64_t allocator_handle,
+				       uint64_t *startp, uint64_t *endp);
+uint64_t __intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+				 uint64_t size, uint64_t alignment);
+uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t alignment);
+bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle);
+bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
+				  uint64_t size, uint64_t offset);
+bool intel_allocator_reserve(uint64_t allocator_handle, uint32_t handle,
+			     uint64_t size, uint64_t offset);
+bool intel_allocator_unreserve(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t offset);
+bool intel_allocator_is_reserved(uint64_t allocator_handle,
+				 uint64_t size, uint64_t offset);
+bool intel_allocator_reserve_if_not_allocated(uint64_t allocator_handle,
+					      uint32_t handle,
+					      uint64_t size, uint64_t offset,
+					      bool *is_allocatedp);
+
+void intel_allocator_print(uint64_t allocator_handle);
+
+#define ALLOC_INVALID_ADDRESS (-1ull)
+#define INTEL_ALLOCATOR_NONE   0
+#define INTEL_ALLOCATOR_RELOC  1
+#define INTEL_ALLOCATOR_RANDOM 2
+#define INTEL_ALLOCATOR_SIMPLE 3
+
+#define GEN8_GTT_ADDRESS_WIDTH 48
+
+static inline uint64_t sign_extend64(uint64_t x, int high)
+{
+	int shift = 63 - high;
+
+	return (int64_t)(x << shift) >> shift;
+}
+
+static inline uint64_t CANONICAL(uint64_t offset)
+{
+	return sign_extend64(offset, GEN8_GTT_ADDRESS_WIDTH - 1);
+}
+
+#define DECANONICAL(offset) ((offset) & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
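+
+/*
+ * Worked example for the 48-bit GTT, a minimal sanity check of the pair
+ * above: an offset with bit 47 set must have it replicated into bits 63:48
+ * to be canonical, so CANONICAL(0x800000000000ull) == 0xffff800000000000ull
+ * and DECANONICAL(0xffff800000000000ull) == 0x800000000000ull.
+ */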
+
+#endif
diff --git a/lib/intel_allocator_msgchannel.c b/lib/intel_allocator_msgchannel.c
new file mode 100644
index 000000000..8280bc4ec
--- /dev/null
+++ b/lib/intel_allocator_msgchannel.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include "igt.h"
+#include "intel_allocator_msgchannel.h"
+
+extern __thread pid_t child_tid;
+
+/* ----- SYSVIPC MSGQUEUE ----- */
+
+#define FTOK_IGT_ALLOCATOR_KEY "/tmp/igt.allocator.key"
+#define FTOK_IGT_ALLOCATOR_PROJID 2020
+
+#define ALLOCATOR_REQUEST 1
+
+struct msgqueue_data {
+	key_t key;
+	int queue;
+};
+
+struct msgqueue_buf {
+	long mtype;
+	union {
+		struct alloc_req request;
+		struct alloc_resp response;
+	} data;
+};
+
+static void msgqueue_init(struct msg_channel *channel)
+{
+	struct msgqueue_data *msgdata;
+	struct msqid_ds qstat;
+	key_t key;
+	int fd, queue;
+
+	igt_debug("Init msgqueue\n");
+
+	/* Create ftok key only if not exists */
+	fd = open(FTOK_IGT_ALLOCATOR_KEY, O_CREAT | O_EXCL | O_WRONLY, 0600);
+	igt_assert(fd >= 0 || errno == EEXIST);
+	if (fd >= 0)
+		close(fd);
+
+	key = ftok(FTOK_IGT_ALLOCATOR_KEY, FTOK_IGT_ALLOCATOR_PROJID);
+	igt_assert(key != -1);
+	igt_debug("Queue key: %x\n", (int) key);
+
+	queue = msgget(key, 0);
+	if (queue != -1) {
+		igt_assert(msgctl(queue, IPC_STAT, &qstat) == 0);
+		igt_debug("old messages: %lu\n", qstat.msg_qnum);
+		igt_assert(msgctl(queue, IPC_RMID, NULL) == 0);
+	}
+
+	queue = msgget(key, IPC_CREAT);
+	igt_debug("msg queue: %d\n", queue);
+
+	msgdata = calloc(1, sizeof(*msgdata));
+	igt_assert(msgdata);
+	msgdata->key = key;
+	msgdata->queue = queue;
+	channel->priv = msgdata;
+}
+
+static void msgqueue_deinit(struct msg_channel *channel)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+
+	igt_debug("Deinit msgqueue\n");
+	msgctl(msgdata->queue, IPC_RMID, NULL);
+	free(channel->priv);
+}
+
+static int msgqueue_send_req(struct msg_channel *channel,
+			     struct alloc_req *request)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret;
+
+	buf.mtype = ALLOCATOR_REQUEST;
+	/* The whole request, including request_type, is copied below */
+	memcpy(&buf.data.request, request, sizeof(*request));
+
+retry:
+	ret = msgsnd(msgdata->queue, &buf, sizeof(buf) - sizeof(long), 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_recv_req(struct msg_channel *channel,
+			     struct alloc_req *request)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret, size = sizeof(buf) - sizeof(long);
+
+retry:
+	ret = msgrcv(msgdata->queue, &buf, size, ALLOCATOR_REQUEST, 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == size)
+		memcpy(request, &buf.data.request, sizeof(*request));
+	else if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_send_resp(struct msg_channel *channel,
+			      struct alloc_resp *response)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret;
+
+	buf.mtype = response->tid;
+	memcpy(&buf.data.response, response, sizeof(*response));
+
+retry:
+	ret = msgsnd(msgdata->queue, &buf, sizeof(buf) - sizeof(long), 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_recv_resp(struct msg_channel *channel,
+			      struct alloc_resp *response)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret, size = sizeof(buf) - sizeof(long);
+
+retry:
+	ret = msgrcv(msgdata->queue, &buf, size,
+		     response->tid, 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == size)
+		memcpy(response, &buf.data.response, sizeof(*response));
+	else if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static struct msg_channel msgqueue_channel = {
+	.priv = NULL,
+	.init = msgqueue_init,
+	.deinit = msgqueue_deinit,
+	.send_req = msgqueue_send_req,
+	.recv_req = msgqueue_recv_req,
+	.send_resp = msgqueue_send_resp,
+	.recv_resp = msgqueue_recv_resp,
+};
+
+struct msg_channel *intel_allocator_get_msgchannel(enum msg_channel_type type)
+{
+	struct msg_channel *channel = NULL;
+
+	switch (type) {
+	case CHANNEL_SYSVIPC_MSGQUEUE:
+		channel = &msgqueue_channel;
+		break;
+	}
+
+	igt_assert(channel);
+
+	return channel;
+}
diff --git a/lib/intel_allocator_msgchannel.h b/lib/intel_allocator_msgchannel.h
new file mode 100644
index 000000000..ac6edfb9e
--- /dev/null
+++ b/lib/intel_allocator_msgchannel.h
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __INTEL_ALLOCATOR_MSGCHANNEL_H__
+#define __INTEL_ALLOCATOR_MSGCHANNEL_H__
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdint.h>
+
+enum reqtype {
+	REQ_STOP,
+	REQ_OPEN,
+	REQ_OPEN_AS,
+	REQ_CLOSE,
+	REQ_ADDRESS_RANGE,
+	REQ_ALLOC,
+	REQ_FREE,
+	REQ_IS_ALLOCATED,
+	REQ_RESERVE,
+	REQ_UNRESERVE,
+	REQ_RESERVE_IF_NOT_ALLOCATED,
+	REQ_IS_RESERVED,
+};
+
+enum resptype {
+	RESP_OPEN,
+	RESP_OPEN_AS,
+	RESP_CLOSE,
+	RESP_ADDRESS_RANGE,
+	RESP_ALLOC,
+	RESP_FREE,
+	RESP_IS_ALLOCATED,
+	RESP_RESERVE,
+	RESP_UNRESERVE,
+	RESP_IS_RESERVED,
+	RESP_RESERVE_IF_NOT_ALLOCATED,
+};
+
+struct alloc_req {
+	enum reqtype request_type;
+
+	/* Common */
+	pid_t tid;
+	uint64_t allocator_handle;
+
+	union {
+		struct {
+			int fd;
+			uint32_t ctx;
+			uint32_t vm;
+			uint64_t start;
+			uint64_t end;
+			uint8_t allocator_type;
+			uint8_t allocator_strategy;
+		} open;
+
+		struct {
+			uint32_t new_vm;
+		} open_as;
+
+		struct {
+			uint32_t handle;
+			uint64_t size;
+			uint64_t alignment;
+		} alloc;
+
+		struct {
+			uint32_t handle;
+		} free;
+
+		struct {
+			uint32_t handle;
+			uint64_t size;
+			uint64_t offset;
+		} is_allocated;
+
+		struct {
+			uint32_t handle;
+			uint64_t start;
+			uint64_t end;
+		} reserve, unreserve;
+
+		struct {
+			uint64_t start;
+			uint64_t end;
+		} is_reserved;
+
+	};
+};
+
+struct alloc_resp {
+	enum resptype response_type;
+	pid_t tid;
+
+	union {
+		struct {
+			uint64_t allocator_handle;
+		} open, open_as;
+
+		struct {
+			bool is_empty;
+		} close;
+
+		struct {
+			uint64_t start;
+			uint64_t end;
+			uint8_t direction;
+		} address_range;
+
+		struct {
+			uint64_t offset;
+		} alloc;
+
+		struct {
+			bool freed;
+		} free;
+
+		struct {
+			bool allocated;
+		} is_allocated;
+
+		struct {
+			bool reserved;
+		} reserve, is_reserved;
+
+		struct {
+			bool unreserved;
+		} unreserve;
+
+		struct {
+			bool allocated;
+			bool reserved;
+		} reserve_if_not_allocated;
+	};
+};
+
+struct msg_channel {
+	void *priv;
+	void (*init)(struct msg_channel *channel);
+	void (*deinit)(struct msg_channel *channel);
+	int (*send_req)(struct msg_channel *channel, struct alloc_req *request);
+	int (*recv_req)(struct msg_channel *channel, struct alloc_req *request);
+	int (*send_resp)(struct msg_channel *channel, struct alloc_resp *response);
+	int (*recv_resp)(struct msg_channel *channel, struct alloc_resp *response);
+};
+
+enum msg_channel_type {
+	CHANNEL_SYSVIPC_MSGQUEUE
+};
+
+struct msg_channel *intel_allocator_get_msgchannel(enum msg_channel_type type);
+
+#endif
diff --git a/lib/meson.build b/lib/meson.build
index 7254faeac..216231761 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -34,6 +34,11 @@ lib_sources = [
 	'igt_vgem.c',
 	'igt_x86.c',
 	'instdone.c',
+	'intel_allocator.c',
+	'intel_allocator_msgchannel.c',
+	'intel_allocator_random.c',
+	'intel_allocator_reloc.c',
+	'intel_allocator_simple.c',
 	'intel_batchbuffer.c',
 	'intel_bufops.c',
 	'intel_chipset.c',
-- 
2.26.0
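
A minimal sketch of the reserve interfaces this patch adds, assuming an
open drm fd and a gem handle from the surrounding test (error paths
elided; the SIMPLE allocator keeps the state these calls rely on):

	uint64_t ahnd, start, end, size = 0x1000;
	bool allocated, reserved;

	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
	intel_allocator_get_address_range(ahnd, &start, &end);

	/* Claim [start, start + size) for the object. */
	igt_assert(intel_allocator_reserve(ahnd, handle, size, start));

	/* Queries must repeat the exact size/offset used to reserve. */
	igt_assert(intel_allocator_is_reserved(ahnd, size, start));

	/* Reserve a second range only if nothing is allocated there. */
	reserved = intel_allocator_reserve_if_not_allocated(ahnd, handle,
							    size, start + size,
							    &allocated);
	igt_assert(reserved && !allocated);

	igt_assert(intel_allocator_unreserve(ahnd, handle, size, start + size));
	igt_assert(intel_allocator_unreserve(ahnd, handle, size, start));
	intel_allocator_close(ahnd);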


* [igt-dev] [PATCH i-g-t v30 09/39] lib/intel_allocator: Try to stop smoothly instead of deinit
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (7 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 08/39] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 10/39] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
                   ` (31 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Avoid a race when stop is sent to the allocator thread. We wait around
100 ms to give the thread a chance to stop smoothly instead of removing
the queue and forcing all blocked message syscalls to exit.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_allocator.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index b1ec69e45..930cd9dc2 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -106,6 +106,7 @@ static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 static bool multiprocess;
 static pthread_t allocator_thread;
+static bool allocator_thread_running;
 
 static bool warn_if_not_empty;
 
@@ -735,6 +736,8 @@ static void *allocator_thread_loop(void *data)
 		   (long) allocator_pid, (long) gettid());
 	alloc_info("Entering allocator loop\n");
 
+	WRITE_ONCE(allocator_thread_running, true);
+
 	while (1) {
 		ret = recv_req(channel, &req);
 
@@ -768,6 +771,8 @@ static void *allocator_thread_loop(void *data)
 		}
 	}
 
+	WRITE_ONCE(allocator_thread_running, false);
+
 	return NULL;
 }
 
@@ -807,15 +812,24 @@ void intel_allocator_multiprocess_start(void)
  * Function turns off intel_allocator multiprocess mode what means
  * stopping allocator thread and deinitializing its data.
  */
+#define STOP_TIMEOUT_MS 100
 void intel_allocator_multiprocess_stop(void)
 {
+	int time_left = STOP_TIMEOUT_MS;
+
 	alloc_info("allocator multiprocess stop\n");
 
 	if (multiprocess) {
 		send_alloc_stop(channel);
+
+		/* Give allocator thread time to complete */
+		while (time_left-- > 0 && READ_ONCE(allocator_thread_running))
+			usleep(1000); /* coarse 1 ms granularity */
+
 		/* Deinit, this should stop all blocked syscalls, if any */
 		channel->deinit(channel);
 		pthread_join(allocator_thread, NULL);
+
 		/* But we're not sure does child will stuck */
 		kill_children(SIGINT);
 		igt_waitchildren_timeout(5, "Stopping children");
-- 
2.26.0
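
For context, the stop path matters for flows like the following sketch
(fd and handle are assumed from the surrounding test; each child's
requests are served by the thread start() spawns):

	intel_allocator_multiprocess_start();

	igt_fork(child, 2) {
		uint64_t ahnd = intel_allocator_open(fd, 0,
						     INTEL_ALLOCATOR_SIMPLE);

		/* alloc/free travel over the queue to the parent thread */
		intel_allocator_alloc(ahnd, handle, 0x1000, 4096);
		intel_allocator_free(ahnd, handle);
		intel_allocator_close(ahnd);
	}
	igt_waitchildren();

	/* Now waits up to ~100 ms for the thread before removing the queue */
	intel_allocator_multiprocess_stop();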


* [igt-dev] [PATCH i-g-t v30 10/39] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (8 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 09/39] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 11/39] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
                   ` (30 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

When working with allocator multiprocess mode we're currently
using sysvipc message queues in blocking mode (request/response).
We can therefore calculate the maximum queue depth required for the
requested number of children. The change alters the kernel queue depth
to cover 4k users (1 is the main thread and 4095 are children).

We're still prone to an unlimited wait in the allocator thread
(when more than 4095 children successfully send messages)
but we're going to address this later.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_allocator_msgchannel.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/intel_allocator_msgchannel.c b/lib/intel_allocator_msgchannel.c
index 8280bc4ec..172858d3d 100644
--- a/lib/intel_allocator_msgchannel.c
+++ b/lib/intel_allocator_msgchannel.c
@@ -17,6 +17,7 @@ extern __thread pid_t child_tid;
 
 #define FTOK_IGT_ALLOCATOR_KEY "/tmp/igt.allocator.key"
 #define FTOK_IGT_ALLOCATOR_PROJID 2020
+#define MAXQLEN 4096
 
 #define ALLOCATOR_REQUEST 1
 
@@ -62,6 +63,13 @@ static void msgqueue_init(struct msg_channel *channel)
 	queue = msgget(key, IPC_CREAT);
 	igt_debug("msg queue: %d\n", queue);
 
+	igt_assert(msgctl(queue, IPC_STAT, &qstat) == 0);
+	igt_debug("queue capacity in bytes: %lu\n", qstat.msg_qbytes);
+	qstat.msg_qbytes = MAXQLEN * sizeof(struct msgqueue_buf);
+	igt_debug("resizing queue to support %d requests\n", MAXQLEN);
+	igt_assert_f(msgctl(queue, IPC_SET, &qstat) == 0,
+		     "Couldn't change queue size to %lu\n", qstat.msg_qbytes);
+
 	msgdata = calloc(1, sizeof(*msgdata));
 	igt_assert(msgdata);
 	msgdata->key = key;
-- 
2.26.0
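
The sizing rule follows from the blocking request/response protocol:
every client has at most one message in flight, so the queue never has
to hold more than one msgqueue_buf per client:

	/* 1 main thread + 4095 children, one full message slot each */
	qstat.msg_qbytes = MAXQLEN * sizeof(struct msgqueue_buf);

Note that raising msg_qbytes above the system default (MSGMNB,
/proc/sys/kernel/msgmnb) requires a privileged process, which is one
reason the IPC_SET call is wrapped in igt_assert_f - IGT normally runs
as root, so the resize is expected to succeed.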


* [igt-dev] [PATCH i-g-t v30 11/39] lib/intel_allocator: Separate allocator multiprocess start
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (9 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 10/39] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 12/39] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
                   ` (29 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Validating the allocator code (leaks and memory overwrites) can be done
with address sanitizer. When the allocator is not working in multiprocess
mode this is easy; problems start when fork comes into play. In this
situation we need to separate preparing the allocator from starting its
thread.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reported-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_allocator.c | 41 ++++++++++++++++++++++++++++++++++-------
 lib/intel_allocator.h |  2 ++
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 930cd9dc2..6298a5a20 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -776,6 +776,38 @@ static void *allocator_thread_loop(void *data)
 	return NULL;
 }
 
+
+/**
+ * __intel_allocator_multiprocess_prepare:
+ *
+ * Prepares allocator infrastructure to work in multiprocess mode.
+ *
+ * Some explanation is required why the prepare/start steps are separated.
+ * When we write the code and don't use address sanitizer a simple
+ * intel_allocator_multiprocess_start() call is enough. With address
+ * sanitizer and forking we can encounter a situation where one
+ * forked child called allocator alloc() (so the parent has some poisoned
+ * memory in its shadow map), then a second fork occurs. The second child
+ * gets the poisoned shadow map from the parent (where the allocator
+ * thread resides). Checking the shadow map in this child reports a
+ * memory leak.
+ *
+ * For how to separate the initialization steps take a look at the
+ * api_intel_allocator.c fork_simple_stress() function.
+ */
+void __intel_allocator_multiprocess_prepare(void)
+{
+	intel_allocator_init();
+
+	multiprocess = true;
+	channel->init(channel);
+}
+
+void __intel_allocator_multiprocess_start(void)
+{
+	pthread_create(&allocator_thread, NULL,
+		       allocator_thread_loop, NULL);
+}
+
 /**
  * intel_allocator_multiprocess_start:
  *
@@ -797,13 +829,8 @@ void intel_allocator_multiprocess_start(void)
 
 	igt_assert_f(child_pid == -1,
 		     "Allocator thread can be spawned only in main IGT process\n");
-	intel_allocator_init();
-
-	multiprocess = true;
-	channel->init(channel);
-
-	pthread_create(&allocator_thread, NULL,
-		       allocator_thread_loop, NULL);
+	__intel_allocator_multiprocess_prepare();
+	__intel_allocator_multiprocess_start();
 }
 
 /**
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
index 440c5992d..9b7bd0908 100644
--- a/lib/intel_allocator.h
+++ b/lib/intel_allocator.h
@@ -160,6 +160,8 @@ struct intel_allocator {
 };
 
 void intel_allocator_init(void);
+void __intel_allocator_multiprocess_prepare(void);
+void __intel_allocator_multiprocess_start(void);
 void intel_allocator_multiprocess_start(void);
 void intel_allocator_multiprocess_stop(void);
 
-- 
2.26.0
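
A sketch of the split sequence, loosely modelled on the
fork_simple_stress() flow referenced above (child_work() is a
hypothetical stand-in for per-child allocations):

	__intel_allocator_multiprocess_prepare();

	/* Fork before the allocator thread exists, so children inherit
	 * a clean shadow map when running under address sanitizer. */
	igt_fork(child, 8)
		child_work();

	/* Only the parent spawns the allocator thread. */
	__intel_allocator_multiprocess_start();

	igt_waitchildren();
	intel_allocator_multiprocess_stop();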


* [igt-dev] [PATCH i-g-t v30 12/39] lib/intel_bufops: Change size from 32->64 bit
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (10 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 11/39] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 13/39] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
                   ` (28 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

1. Buffer size was changed from 32 -> 64 bit to be consistent
   with the drm code.
2. Remember the buffer creation size to avoid recalculating it.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_bufops.c        | 17 ++++++++---------
 lib/intel_bufops.h        |  7 +++++--
 tests/i915/api_intel_bb.c |  6 +++---
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index a50035e40..eb5ac4dad 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -711,7 +711,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     uint32_t req_tiling, uint32_t compression)
 {
 	uint32_t tiling = req_tiling;
-	uint32_t size;
+	uint64_t size;
 	uint32_t devid;
 	int tile_width;
 	int align_h = 1;
@@ -776,6 +776,9 @@ static void __intel_buf_init(struct buf_ops *bops,
 		size = buf->surface[0].stride * ALIGN(height, align_h);
 	}
 
+	/* Store real bo size to avoid mistakes in calculating it again */
+	buf->size = size;
+
 	if (handle)
 		buf->handle = handle;
 	else
@@ -1001,8 +1004,8 @@ void intel_buf_flush_and_unmap(struct intel_buf *buf)
 void intel_buf_print(const struct intel_buf *buf)
 {
 	igt_info("[name: %s]\n", buf->name);
-	igt_info("[%u]: w: %u, h: %u, stride: %u, size: %u, bo-size: %u, "
-		 "bpp: %u, tiling: %u, compress: %u\n",
+	igt_info("[%u]: w: %u, h: %u, stride: %u, size: %" PRIx64
+		 ", bo-size: %" PRIx64 ", bpp: %u, tiling: %u, compress: %u\n",
 		 buf->handle, intel_buf_width(buf), intel_buf_height(buf),
 		 buf->surface[0].stride, buf->surface[0].size,
 		 intel_buf_bo_size(buf), buf->bpp,
@@ -1208,13 +1211,9 @@ static void idempotency_selftest(struct buf_ops *bops, uint32_t tiling)
 	buf_ops_set_software_tiling(bops, tiling, false);
 }
 
-uint32_t intel_buf_bo_size(const struct intel_buf *buf)
+uint64_t intel_buf_bo_size(const struct intel_buf *buf)
 {
-	int offset = CCS_OFFSET(buf) ?: buf->surface[0].size;
-	int ccs_size =
-		buf->compression ? CCS_SIZE(buf->bops->intel_gen, buf) : 0;
-
-	return offset + ccs_size;
+	return buf->size;
 }
 
 static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency)
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 8debe7f22..5619fc6fa 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -9,10 +9,13 @@ struct buf_ops;
 
 #define INTEL_BUF_INVALID_ADDRESS (-1ull)
 #define INTEL_BUF_NAME_MAXSIZE 32
+#define INVALID_ADDR(x) ((x) == INTEL_BUF_INVALID_ADDRESS)
+
 struct intel_buf {
 	struct buf_ops *bops;
 	bool is_owner;
 	uint32_t handle;
+	uint64_t size;
 	uint32_t tiling;
 	uint32_t bpp;
 	uint32_t compression;
@@ -23,7 +26,7 @@ struct intel_buf {
 	struct {
 		uint32_t offset;
 		uint32_t stride;
-		uint32_t size;
+		uint64_t size;
 	} surface[2];
 	struct {
 		uint32_t offset;
@@ -88,7 +91,7 @@ intel_buf_ccs_height(int gen, const struct intel_buf *buf)
 	return DIV_ROUND_UP(intel_buf_height(buf), 512) * 32;
 }
 
-uint32_t intel_buf_bo_size(const struct intel_buf *buf);
+uint64_t intel_buf_bo_size(const struct intel_buf *buf);
 
 struct buf_ops *buf_ops_create(int fd);
 struct buf_ops *buf_ops_create_with_selftest(int fd);
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 9115d3f8f..1c960891f 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -123,9 +123,9 @@ static void print_buf(struct intel_buf *buf, const char *name)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
-	igt_debug("[%s] Buf handle: %d, size: %d, v: 0x%02x, presumed_addr: %p\n",
+	igt_debug("[%s] Buf handle: %d, size: %" PRIx64 ", v: 0x%02x, presumed_addr: %p\n",
 		  name, buf->handle, buf->surface[0].size, ptr[0],
-			from_user_pointer(buf->addr.offset));
+		  from_user_pointer(buf->addr.offset));
 	munmap(ptr, buf->surface[0].size);
 }
 
@@ -677,7 +677,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
 	if (ret != Z_OK) {
 		igt_warn("error compressing, ret: %d\n", ret);
 	} else {
-		igt_info("compressed %u -> %lu\n",
+		igt_info("compressed %" PRIx64 " -> %lu\n",
 			 buf->surface[0].size, outsize);
 
 		igt_info("--- %s ---\n", name);
-- 
2.26.0
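
The wider type is not theoretical - a 32K x 32K RGBA surface already
overflows a 32-bit size:

	/* 32768 * 32768 * 4 bytes = 4 GiB, one past UINT32_MAX */
	uint64_t size = 32768ull * 32768 * 4;

	igt_assert(size > UINT32_MAX);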


* [igt-dev] [PATCH i-g-t v30 13/39] lib/intel_bufops: Add init with handle and size function
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (11 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 12/39] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 14/39] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
                   ` (27 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

In some cases (fb with compression) the fb size is bigger than the
calculated intel_buf size, which leads to an execbuf failure when the
allocator is used along with the EXEC_OBJECT_PINNED flag set for all
objects.

We need to create the intel_buf with a size equal to the fb's, so a new
function to which we pass handle and size is required.

v2: add bo_size check to better protect input size (Jason)

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_bufops.c | 34 +++++++++++++++++++++++++++++++---
 lib/intel_bufops.h |  7 +++++++
 2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index eb5ac4dad..00be2bd04 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -708,7 +708,8 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     uint32_t handle,
 			     struct intel_buf *buf,
 			     int width, int height, int bpp, int alignment,
-			     uint32_t req_tiling, uint32_t compression)
+			     uint32_t req_tiling, uint32_t compression,
+			     uint64_t bo_size)
 {
 	uint32_t tiling = req_tiling;
 	uint64_t size;
@@ -776,6 +777,11 @@ static void __intel_buf_init(struct buf_ops *bops,
 		size = buf->surface[0].stride * ALIGN(height, align_h);
 	}
 
+	if (bo_size > 0) {
+		igt_assert(bo_size >= size);
+		size = bo_size;
+	}
+
 	/* Store real bo size to avoid mistakes in calculating it again */
 	buf->size = size;
 
@@ -809,7 +815,7 @@ void intel_buf_init(struct buf_ops *bops,
 		    uint32_t tiling, uint32_t compression)
 {
 	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
-			 tiling, compression);
+			 tiling, compression, 0);
 
 	intel_buf_set_ownership(buf, true);
 }
@@ -858,7 +864,7 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
 				 uint32_t req_tiling, uint32_t compression)
 {
 	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
-			 req_tiling, compression);
+			 req_tiling, compression, 0);
 }
 
 /**
@@ -927,6 +933,28 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
 	return buf;
 }
 
+struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
+							 uint32_t handle,
+							 int width, int height,
+							 int bpp, int alignment,
+							 uint32_t req_tiling,
+							 uint32_t compression,
+							 uint64_t size)
+{
+	struct intel_buf *buf;
+
+	igt_assert(bops);
+
+	buf = calloc(1, sizeof(*buf));
+	igt_assert(buf);
+
+	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
+			 req_tiling, compression, size);
+
+	return buf;
+}
+
 /**
  * intel_buf_destroy
  * @buf: intel_buf
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 5619fc6fa..54480bff6 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -139,6 +139,13 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
 						uint32_t req_tiling,
 						uint32_t compression);
 
+struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
+							 uint32_t handle,
+							 int width, int height,
+							 int bpp, int alignment,
+							 uint32_t req_tiling,
+							 uint32_t compression,
+							 uint64_t size);
 void intel_buf_destroy(struct intel_buf *buf);
 
 void *intel_buf_cpu_map(struct intel_buf *buf, bool write);
-- 
2.26.0
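
A sketch of the intended use with a framebuffer - effectively what the
later igt_fb patch in this series does (fb and bops are assumed to come
from the surrounding test):

	struct intel_buf *buf;

	buf = intel_buf_create_using_handle_and_size(bops, fb->gem_handle,
						     fb->width, fb->height,
						     fb->plane_bpp[0], 0,
						     igt_fb_mod_to_tiling(fb->modifier),
						     I915_COMPRESSION_NONE,
						     fb->size);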


* [igt-dev] [PATCH i-g-t v30 14/39] lib/intel_batchbuffer: Integrate intel_bb with allocator
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (12 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 13/39] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 15/39] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
                   ` (26 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Refactor the intel-bb interface to introduce the IGT allocator for
specifying the position of objects within the ppGTT.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_aux_pgtable.c      |  26 +-
 lib/intel_batchbuffer.c      | 511 +++++++++++++++++++++++++----------
 lib/intel_batchbuffer.h      |  23 +-
 tests/i915/api_intel_bb.c    |  16 +-
 tests/i915/gem_mmap_offset.c |   4 +-
 5 files changed, 403 insertions(+), 177 deletions(-)

diff --git a/lib/intel_aux_pgtable.c b/lib/intel_aux_pgtable.c
index c05d4511e..f29a30916 100644
--- a/lib/intel_aux_pgtable.c
+++ b/lib/intel_aux_pgtable.c
@@ -94,9 +94,9 @@ pgt_table_count(int address_bits, struct intel_buf **bufs, int buf_count)
 		/* We require bufs to be sorted. */
 		igt_assert(i == 0 ||
 			   buf->addr.offset >= bufs[i - 1]->addr.offset +
-					       intel_buf_bo_size(bufs[i - 1]));
-
+				intel_buf_bo_size(bufs[i - 1]));
 		start = ALIGN_DOWN(buf->addr.offset, 1UL << address_bits);
+
 		/* Avoid double counting for overlapping aligned bufs. */
 		start = max(start, end);
 
@@ -346,10 +346,8 @@ pgt_populate_entries_for_buf(struct pgtable *pgt,
 			     uint64_t top_table,
 			     int surface_idx)
 {
-	uint64_t surface_addr = buf->addr.offset +
-				buf->surface[surface_idx].offset;
-	uint64_t surface_end = surface_addr +
-			       buf->surface[surface_idx].size;
+	uint64_t surface_addr = buf->addr.offset + buf->surface[surface_idx].offset;
+	uint64_t surface_end = surface_addr +  buf->surface[surface_idx].size;
 	uint64_t aux_addr = buf->addr.offset + buf->ccs[surface_idx].offset;
 	uint64_t l1_flags = pgt_get_l1_flags(buf, surface_idx);
 	uint64_t lx_flags = pgt_get_lx_flags();
@@ -441,7 +439,6 @@ struct intel_buf *
 intel_aux_pgtable_create(struct intel_bb *ibb,
 			 struct intel_buf **bufs, int buf_count)
 {
-	struct drm_i915_gem_exec_object2 *obj;
 	static const struct pgtable_level_desc level_desc[] = {
 		{
 			.idx_shift = 16,
@@ -465,7 +462,6 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 	struct pgtable *pgt;
 	struct buf_ops *bops;
 	struct intel_buf *buf;
-	uint64_t prev_alignment;
 
 	igt_assert(buf_count);
 	bops = bufs[0]->bops;
@@ -476,11 +472,8 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 				    I915_COMPRESSION_NONE);
 
 	/* We need to use pgt->max_align for aux table */
-	prev_alignment = intel_bb_set_default_object_alignment(ibb,
-							       pgt->max_align);
-	obj = intel_bb_add_intel_buf(ibb, pgt->buf, false);
-	intel_bb_set_default_object_alignment(ibb, prev_alignment);
-	obj->alignment = pgt->max_align;
+	intel_bb_add_intel_buf_with_alignment(ibb, pgt->buf,
+					      pgt->max_align, false);
 
 	pgt_map(ibb->i915, pgt);
 	pgt_populate_entries(pgt, bufs, buf_count);
@@ -498,9 +491,10 @@ aux_pgtable_reserve_buf_slot(struct intel_buf **bufs, int buf_count,
 {
 	int i;
 
-	for (i = 0; i < buf_count; i++)
+	for (i = 0; i < buf_count; i++) {
 		if (bufs[i]->addr.offset > new_buf->addr.offset)
 			break;
+	}
 
 	memmove(&bufs[i + 1], &bufs[i], sizeof(bufs[0]) * (buf_count - i));
 
@@ -606,8 +600,10 @@ gen12_aux_pgtable_cleanup(struct intel_bb *ibb, struct aux_pgtable_info *info)
 		igt_assert_eq_u64(addr, info->buf_pin_offsets[i]);
 	}
 
-	if (info->pgtable_buf)
+	if (info->pgtable_buf) {
+		intel_bb_remove_intel_buf(ibb, info->pgtable_buf);
 		intel_buf_destroy(info->pgtable_buf);
+	}
 }
 
 uint32_t
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 8118dc945..c9a6c8909 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -50,7 +50,6 @@
 #include "media_spin.h"
 #include "gpgpu_fill.h"
 #include "igt_aux.h"
-#include "igt_rand.h"
 #include "i830_reg.h"
 #include "huc_copy.h"
 #include <glib.h>
@@ -1211,23 +1210,9 @@ static void __reallocate_objects(struct intel_bb *ibb)
 	}
 }
 
-/*
- * gen8_canonical_addr
- * Used to convert any address into canonical form, i.e. [63:48] == [47].
- * Based on kernel's sign_extend64 implementation.
- * @address - a virtual address
- */
-#define GEN8_HIGH_ADDRESS_BIT 47
-static uint64_t gen8_canonical_addr(uint64_t address)
-{
-	int shift = 63 - GEN8_HIGH_ADDRESS_BIT;
-
-	return (int64_t)(address << shift) >> shift;
-}
-
 static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 					     uint32_t handle,
-					     uint32_t size,
+					     uint64_t size,
 					     uint32_t alignment)
 {
 	uint64_t offset;
@@ -1235,33 +1220,77 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 	if (ibb->enforce_relocs)
 		return 0;
 
-	/* randomize the address, we try to avoid relocations */
-	offset = hars_petruska_f54_1_random64(&ibb->prng);
-	offset += 256 << 10; /* Keep the low 256k clear, for negative deltas */
-	offset &= ibb->gtt_size - 1;
-	offset &= ~(ibb->alignment - 1);
-	offset = gen8_canonical_addr(offset);
+	offset = intel_allocator_alloc(ibb->allocator_handle,
+				       handle, size, alignment);
 
 	return offset;
 }
 
 /**
- * intel_bb_create:
+ * __intel_bb_create:
  * @i915: drm fd
+ * @ctx: context
  * @size: size of the batchbuffer
+ * @do_relocs: use relocations or allocator
+ * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
+ *
+ * intel-bb works in one of two modes - with relocations or with the
+ * allocator (currently RANDOM and SIMPLE are implemented). Some
+ * description is required of how each mode maintains addresses.
+ *
+ * The generic rule for both scenarios is that intel-bb keeps objects and
+ * their offsets in an internal cache and reuses them in subsequent execs.
+ *
+ * 1. intel-bb with relocations
+ *
+ * Creating a new intel-bb implicitly adds its handle to the cache and sets
+ * its address to 0. Objects added to intel-bb later also start with address
+ * 0 for the first run. After calling execbuf the cache is updated with the
+ * new addresses. As intel-bb works in reloc mode the addresses are only a
+ * suggestion to the driver and may change on the next exec.
+ *
+ * 2. with allocator
+ *
+ * This mode is valid only for ppgtt. Addresses are acquired from the
+ * allocator and softpinned. The intel-bb cache must then be coherent with
+ * the allocator (simple is coherent, random is not as we don't keep its
+ * state). When we reset intel-bb with cache purging it has to reacquire
+ * addresses from the allocator (the allocator should return the same
+ * address - true for the simple allocator, false for random as mentioned).
+ *
+ * If we reset without purging caches we use the addresses from the
+ * intel-bb cache during execbuf object construction.
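+ *
+ * A minimal allocator-mode sketch (assuming an fd with full ppgtt;
+ * error handling omitted):
+ *
+ * |[<!-- language="c" -->
+ * struct intel_bb *ibb;
+ *
+ * ibb = intel_bb_create_full(i915, 0, 4096, INTEL_ALLOCATOR_SIMPLE);
+ * // ... emit commands, add objects, exec ...
+ * intel_bb_reset(ibb, false); // keeps cached offsets
+ * intel_bb_destroy(ibb);
+ * ]|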
  *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
  */
 static struct intel_bb *
-__intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
+__intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
+		  uint8_t allocator_type)
 {
+	struct drm_i915_gem_exec_object2 *object;
 	struct intel_bb *ibb = calloc(1, sizeof(*ibb));
-	uint64_t gtt_size;
 
 	igt_assert(ibb);
 
+	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
+
+	/*
+	 * If we don't have full ppgtt driver can change our addresses
+	 * so allocator is useless in this case. Just enforce relocations
+	 * for such gens and don't use allocator at all.
+	 */
+	if (!ibb->uses_full_ppgtt) {
+		do_relocs = true;
+		allocator_type = INTEL_ALLOCATOR_NONE;
+	}
+
+	if (!do_relocs)
+		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+	else
+		igt_assert(allocator_type == INTEL_ALLOCATOR_NONE);
+	ibb->allocator_type = allocator_type;
 	ibb->i915 = i915;
 	ibb->devid = intel_get_drm_devid(i915);
 	ibb->gen = intel_gen(ibb->devid);
@@ -1273,41 +1302,43 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
 	ibb->batch = calloc(1, size);
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
-	ibb->prng = (uint32_t) to_user_pointer(ibb);
 	ibb->fence = -1;
 
-	gtt_size = gem_aperture_size(i915);
-	if (!gem_uses_full_ppgtt(i915))
-		gtt_size /= 2;
-	if ((gtt_size - 1) >> 32) {
+	ibb->gtt_size = gem_aperture_size(i915);
+	if ((ibb->gtt_size - 1) >> 32)
 		ibb->supports_48b_address = true;
 
-		/*
-		 * Until we develop IGT address allocator we workaround
-		 * playing with canonical addresses with 47-bit set to 1
-		 * just by limiting gtt size to 46-bit when gtt is 47 or 48
-		 * bit size. Current interface doesn't pass bo size, so
-		 * limiting to 46 bit make us sure we won't enter to
-		 * addresses with 47-bit set (we use 32-bit size now so
-		 * still we fit 47-bit address space).
-		 */
-		if (gtt_size & (3ull << 47))
-			gtt_size = (1ull << 46);
-	}
-	ibb->gtt_size = gtt_size;
-
-	ibb->batch_offset = __intel_bb_get_offset(ibb,
-						  ibb->handle,
-						  ibb->size,
-						  ibb->alignment);
-	intel_bb_add_object(ibb, ibb->handle, ibb->size,
-			    ibb->batch_offset, false);
+	object = intel_bb_add_object(ibb, ibb->handle, ibb->size,
+				     INTEL_BUF_INVALID_ADDRESS, ibb->alignment,
+				     false);
+	ibb->batch_offset = object->offset;
 
 	ibb->refcount = 1;
 
 	return ibb;
 }
 
+/**
+ * intel_bb_create_full:
+ * @i915: drm fd
+ * @ctx: context
+ * @size: size of the batchbuffer
+ * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ *
+ * Creates bb with the context passed in @ctx, size in @size and allocator
+ * type in @allocator_type. Relocations are set to false because the IGT
+ * allocator is used in that case.
+ *
+ * Returns:
+ *
+ * Pointer to the intel_bb, asserts on failure.
+ */
+struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+				      uint8_t allocator_type)
+{
+	return __intel_bb_create(i915, ctx, size, false, allocator_type);
+}
+
 /**
  * intel_bb_create:
  * @i915: drm fd
@@ -1318,10 +1349,19 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
+ *
+ * Notes:
+ *
+ * intel_bb must not be created in igt_fixture. The reason is that intel_bb
+ * "opens" a connection to the allocator and when the test completes it can
+ * leave the allocator in an unknown state (mostly for failed tests).
+ * As igt_core is armed to reset the allocator infrastructure, the
+ * connection kept inside intel_bb is not valid anymore.
+ * Trying to use it leads to catastrophic errors.
  */
 struct intel_bb *intel_bb_create(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, false);
+	return __intel_bb_create(i915, 0, size, false, INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1339,7 +1379,7 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, false);
+	return __intel_bb_create(i915, ctx, size, false, INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1356,7 +1396,7 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
  */
 struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, true);
+	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
 }
 
 /**
@@ -1375,7 +1415,7 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, true);
+	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
 }
 
 static void __intel_bb_destroy_relocations(struct intel_bb *ibb)
@@ -1429,6 +1469,10 @@ void intel_bb_destroy(struct intel_bb *ibb)
 	__intel_bb_destroy_objects(ibb);
 	__intel_bb_destroy_cache(ibb);
 
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE) {
+		intel_allocator_free(ibb->allocator_handle, ibb->handle);
+		intel_allocator_close(ibb->allocator_handle);
+	}
 	gem_close(ibb->i915, ibb->handle);
 
 	if (ibb->fence >= 0)
@@ -1445,6 +1489,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
  *
  * Recreate batch bo when there's no additional reference.
 */
+
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 {
 	uint32_t i;
@@ -1468,28 +1513,32 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
 
-	if (purge_objects_cache) {
+	if (purge_objects_cache)
 		__intel_bb_destroy_cache(ibb);
+
+	/*
+	 * When we use allocators we're in no-reloc mode so we have to free
+	 * and reacquire offset (ibb->handle can change in multiprocess
+	 * environment). We also have to remove and add it again to
+	 * objects and cache tree.
+	 */
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE && !purge_objects_cache)
+		intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset,
+				       ibb->size);
+
+	gem_close(ibb->i915, ibb->handle);
+	ibb->handle = gem_create(ibb->i915, ibb->size);
+
+	/* Keep address for bb in reloc mode and RANDOM allocator */
+	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
 		ibb->batch_offset = __intel_bb_get_offset(ibb,
 							  ibb->handle,
 							  ibb->size,
 							  ibb->alignment);
-	} else {
-		struct drm_i915_gem_exec_object2 *object;
-
-		object = intel_bb_find_object(ibb, ibb->handle);
-		ibb->batch_offset = object ? object->offset :
-					     __intel_bb_get_offset(ibb,
-								   ibb->handle,
-								   ibb->size,
-								   ibb->alignment);
-	}
-
-	gem_close(ibb->i915, ibb->handle);
-	ibb->handle = gem_create(ibb->i915, ibb->size);
 
 	intel_bb_add_object(ibb, ibb->handle, ibb->size,
-			    ibb->batch_offset, false);
+			    ibb->batch_offset,
+			    ibb->alignment, false);
 	ibb->ptr = ibb->batch;
 	memset(ibb->batch, 0, ibb->size);
 }
@@ -1528,8 +1577,8 @@ void intel_bb_print(struct intel_bb *ibb)
 		 ibb->i915, ibb->gen, ibb->devid, ibb->debug);
 	igt_info("handle: %u, size: %u, batch: %p, ptr: %p\n",
 		 ibb->handle, ibb->size, ibb->batch, ibb->ptr);
-	igt_info("prng: %u, gtt_size: %" PRIu64 ", supports 48bit: %d\n",
-		 ibb->prng, ibb->gtt_size, ibb->supports_48b_address);
+	igt_info("gtt_size: %" PRIu64 ", supports 48bit: %d\n",
+		 ibb->gtt_size, ibb->supports_48b_address);
 	igt_info("ctx: %u\n", ibb->ctx);
 	igt_info("root: %p\n", ibb->root);
 	igt_info("objects: %p, num_objects: %u, allocated obj: %u\n",
@@ -1605,7 +1654,7 @@ __add_to_cache(struct intel_bb *ibb, uint32_t handle)
 	if (*found == object) {
 		memset(object, 0, sizeof(*object));
 		object->handle = handle;
-		object->alignment = ibb->alignment;
+		object->offset = INTEL_BUF_INVALID_ADDRESS;
 	} else {
 		free(object);
 		object = *found;
@@ -1614,6 +1663,25 @@ __add_to_cache(struct intel_bb *ibb, uint32_t handle)
 	return object;
 }
 
+static bool __remove_from_cache(struct intel_bb *ibb, uint32_t handle)
+{
+	struct drm_i915_gem_exec_object2 **found, *object;
+
+	object = intel_bb_find_object(ibb, handle);
+	if (!object) {
+		igt_warn("Object: handle: %u not found\n", handle);
+		return false;
+	}
+
+	found = tdelete((void *) object, &ibb->root, __compare_objects);
+	if (!found)
+		return false;
+
+	free(object);
+
+	return true;
+}
+
 static int __compare_handles(const void *p1, const void *p2)
 {
 	return (int) (*(int32_t *) p1 - *(int32_t *) p2);
@@ -1639,12 +1707,54 @@ static void __add_to_objects(struct intel_bb *ibb,
 	}
 }
 
+static void __remove_from_objects(struct intel_bb *ibb,
+				  struct drm_i915_gem_exec_object2 *object)
+{
+	uint32_t i, **handle, *to_free;
+	bool found = false;
+
+	for (i = 0; i < ibb->num_objects; i++) {
+		if (ibb->objects[i] == object) {
+			found = true;
+			break;
+		}
+	}
+
+	/*
+	 * When we reset bb (without purging) we have:
+	 * 1. cache which contains all cached objects
+	 * 2. objects array which contains only bb object (cleared in reset
+	 *    path with bb object added at the end)
+	 * So !found is a normal situation and no warning is emitted here.
+	 */
+	if (!found)
+		return;
+
+	ibb->num_objects--;
+	if (i < ibb->num_objects)
+		memmove(&ibb->objects[i], &ibb->objects[i + 1],
+			sizeof(object) * (ibb->num_objects - i));
+
+	handle = tfind((void *) &object->handle,
+		       &ibb->current, __compare_handles);
+	if (!handle) {
+		igt_warn("Object %u doesn't exist in the tree, can't remove\n",
+			 object->handle);
+		return;
+	}
+
+	to_free = *handle;
+	tdelete((void *) &object->handle, &ibb->current, __compare_handles);
+	free(to_free);
+}
+
 /**
  * intel_bb_add_object:
  * @ibb: pointer to intel_bb
  * @handle: which handle to add to objects array
  * @size: object size
  * @offset: presumed offset of the object when no relocation is enforced
+ * @alignment: alignment of the object, if 0 it will be set to page size
  * @write: does a handle is a render target
  *
  * Function adds or updates execobj slot in bb objects array and
@@ -1652,23 +1762,71 @@ static void __add_to_objects(struct intel_bb *ibb,
  * be marked with EXEC_OBJECT_WRITE flag.
  */
 struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
-		    uint64_t offset, bool write)
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write)
 {
 	struct drm_i915_gem_exec_object2 *object;
 
+	igt_assert(INVALID_ADDR(offset) || alignment == 0
+		   || ALIGN(offset, alignment) == offset);
+
 	object = __add_to_cache(ibb, handle);
+	object->alignment = alignment ?: 4096;
 	__add_to_objects(ibb, object);
 
-	/* Limit current offset to gtt size */
+	/*
+	 * If object->offset == INVALID_ADDRESS we have just added a fresh
+	 * object to the cache. In that case we have two choices:
+	 * a) get a new offset (the passed offset was invalid)
+	 * b) use the offset passed in the call (it was valid)
+	 */
+	if (INVALID_ADDR(object->offset)) {
+		if (INVALID_ADDR(offset)) {
+			offset = __intel_bb_get_offset(ibb, handle, size,
+						       object->alignment);
+		} else {
+			offset = offset & (ibb->gtt_size - 1);
+
+			/*
+			 * For the simple allocator check entry consistency:
+			 * reserve the range if it is not already allocated.
+			 */
+			if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE) {
+				bool allocated, reserved;
+
+				reserved = intel_allocator_reserve_if_not_allocated(ibb->allocator_handle,
+										    handle, size, offset,
+										    &allocated);
+				igt_assert_f(allocated || reserved,
+					     "Can't get offset, allocated: %d, reserved: %d\n",
+					     allocated, reserved);
+			}
+		}
+	} else {
+		/*
+		 * This assertion makes sense only when we have to be consistent
+		 * with the underlying allocator. With relocations or without
+		 * full ppgtt we can expect addresses passed by the user to be
+		 * moved by the driver.
+		 */
+		if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+			igt_assert_f(object->offset == offset,
+				     "(pid: %ld) handle: %u, offset mismatch: %" PRIx64 " <> %" PRIx64 "\n",
+				     (long) getpid(), handle,
+				     (uint64_t) object->offset,
+				     offset);
+	}
+
 	object->offset = offset;
-	if (offset != INTEL_BUF_INVALID_ADDRESS)
-		object->offset = gen8_canonical_addr(offset & (ibb->gtt_size - 1));
 
-	if (object->offset == INTEL_BUF_INVALID_ADDRESS)
+	/* Limit current offset to gtt size */
+	if (offset != INTEL_BUF_INVALID_ADDRESS) {
+		object->offset = CANONICAL(offset & (ibb->gtt_size - 1));
+	} else {
 		object->offset = __intel_bb_get_offset(ibb,
 						       handle, size,
 						       object->alignment);
+	}
 
 	if (write)
 		object->flags |= EXEC_OBJECT_WRITE;
@@ -1676,40 +1834,95 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
 	if (ibb->supports_48b_address)
 		object->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
+	if (ibb->uses_full_ppgtt && ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+		object->flags |= EXEC_OBJECT_PINNED;
+
 	return object;
 }
 
-struct drm_i915_gem_exec_object2 *
-intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write)
+bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
+			    uint64_t offset, uint64_t size)
 {
-	struct drm_i915_gem_exec_object2 *obj;
+	struct drm_i915_gem_exec_object2 *object;
+	bool is_reserved;
 
-	obj = intel_bb_add_object(ibb, buf->handle,
-				  intel_buf_bo_size(buf),
-				  buf->addr.offset, write);
+	object = intel_bb_find_object(ibb, handle);
+	if (!object)
+		return false;
 
-	/* For compressed surfaces ensure address is aligned to 64KB */
-	if (ibb->gen >= 12 && buf->compression) {
-		obj->offset &= ~(0x10000 - 1);
-		obj->alignment = 0x10000;
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE) {
+		intel_allocator_free(ibb->allocator_handle, handle);
+		is_reserved = intel_allocator_is_reserved(ibb->allocator_handle,
+							  size, offset);
+		if (is_reserved)
+			intel_allocator_unreserve(ibb->allocator_handle, handle,
+						  size, offset);
 	}
 
-	/* For gen3 ensure tiled buffers are aligned to power of two size */
-	if (ibb->gen == 3 && buf->tiling) {
-		uint64_t alignment = 1024 * 1024;
+	__remove_from_objects(ibb, object);
+	__remove_from_cache(ibb, handle);
 
-		while (alignment < buf->surface[0].size)
-			alignment <<= 1;
-		obj->offset &= ~(alignment - 1);
-		obj->alignment = alignment;
+	return true;
+}
+
+static struct drm_i915_gem_exec_object2 *
+__intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
+			 uint64_t alignment, bool write)
+{
+	struct drm_i915_gem_exec_object2 *obj;
+
+	igt_assert(ALIGN(alignment, 4096) == alignment);
+
+	if (!alignment) {
+		alignment = 0x1000;
+
+		if (ibb->gen >= 12 && buf->compression)
+			alignment = 0x10000;
+
+		/* For gen3 ensure tiled buffers are aligned to power of two size */
+		if (ibb->gen == 3 && buf->tiling) {
+			alignment = 1024 * 1024;
+
+			while (alignment < buf->surface[0].size)
+				alignment <<= 1;
+		}
 	}
 
-	/* Update address in intel_buf buffer */
+	obj = intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
+				  buf->addr.offset, alignment, write);
 	buf->addr.offset = obj->offset;
 
+	if (!ibb->enforce_relocs)
+		obj->alignment = alignment;
+
 	return obj;
 }
 
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write)
+{
+	return __intel_bb_add_intel_buf(ibb, buf, 0, write);
+}
+
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
+				      uint64_t alignment, bool write)
+{
+	return __intel_bb_add_intel_buf(ibb, buf, alignment, write);
+}
+
+bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
+{
+	bool removed = intel_bb_remove_object(ibb, buf->handle,
+					      buf->addr.offset,
+					      intel_buf_bo_size(buf));
+
+	if (removed)
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	return removed;
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 {
@@ -1717,6 +1930,8 @@ intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 	struct drm_i915_gem_exec_object2 **found;
 
 	found = tfind((void *) &object, &ibb->root, __compare_objects);
+	if (!found)
+		return NULL;
 
 	return *found;
 }
@@ -1727,6 +1942,8 @@ intel_bb_object_set_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
 	struct drm_i915_gem_exec_object2 object = { .handle = handle };
 	struct drm_i915_gem_exec_object2 **found;
 
+	igt_assert_f(ibb->root, "Trying to search in null tree\n");
+
 	found = tfind((void *) &object, &ibb->root, __compare_objects);
 	if (!found) {
 		igt_warn("Trying to set fence on not found handle: %u\n",
@@ -1766,14 +1983,9 @@ intel_bb_object_clear_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
  * @write_domain: gem domain bit for the relocation
  * @delta: delta value to add to @buffer's gpu address
  * @offset: offset within bb to be patched
- * @presumed_offset: address of the object in address space. If -1 is passed
- * then final offset of the object will be randomized (for no-reloc bb) or
- * 0 (for reloc bb, in that case reloc.presumed_offset will be -1). In
- * case address is known it should passed in @presumed_offset (for no-reloc).
  *
  * Function allocates additional relocation slot in reloc array for a handle.
- * It also implicitly adds handle in the objects array if object doesn't
- * exists but doesn't mark it as a render target.
+ * Object must be previously added to bb.
  */
 static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 				   uint32_t to_handle,
@@ -1788,13 +2000,8 @@ static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 	struct drm_i915_gem_exec_object2 *object, *to_object;
 	uint32_t i;
 
-	if (ibb->enforce_relocs) {
-		object = intel_bb_add_object(ibb, handle, 0,
-					     presumed_offset, false);
-	} else {
-		object = intel_bb_find_object(ibb, handle);
-		igt_assert(object);
-	}
+	object = intel_bb_find_object(ibb, handle);
+	igt_assert(object);
 
 	/* For ibb we have relocs allocated in chunks */
 	if (to_handle == ibb->handle) {
@@ -1988,45 +2195,47 @@ static void intel_bb_dump_execbuf(struct intel_bb *ibb,
 	int i, j;
 	uint64_t address;
 
-	igt_info("execbuf batch len: %u, start offset: 0x%x, "
-		 "DR1: 0x%x, DR4: 0x%x, "
-		 "num clip: %u, clipptr: 0x%llx, "
-		 "flags: 0x%llx, rsvd1: 0x%llx, rsvd2: 0x%llx\n",
-		 execbuf->batch_len, execbuf->batch_start_offset,
-		 execbuf->DR1, execbuf->DR4,
-		 execbuf->num_cliprects, execbuf->cliprects_ptr,
-		 execbuf->flags, execbuf->rsvd1, execbuf->rsvd2);
-
-	igt_info("execbuf buffer_count: %d\n", execbuf->buffer_count);
+	igt_debug("execbuf [pid: %ld, fd: %d, ctx: %u]\n",
+		  (long) getpid(), ibb->i915, ibb->ctx);
+	igt_debug("execbuf batch len: %u, start offset: 0x%x, "
+		  "DR1: 0x%x, DR4: 0x%x, "
+		  "num clip: %u, clipptr: 0x%llx, "
+		  "flags: 0x%llx, rsvd1: 0x%llx, rsvd2: 0x%llx\n",
+		  execbuf->batch_len, execbuf->batch_start_offset,
+		  execbuf->DR1, execbuf->DR4,
+		  execbuf->num_cliprects, execbuf->cliprects_ptr,
+		  execbuf->flags, execbuf->rsvd1, execbuf->rsvd2);
+
+	igt_debug("execbuf buffer_count: %d\n", execbuf->buffer_count);
 	for (i = 0; i < execbuf->buffer_count; i++) {
 		objects = &((struct drm_i915_gem_exec_object2 *)
 			    from_user_pointer(execbuf->buffers_ptr))[i];
 		relocs = from_user_pointer(objects->relocs_ptr);
 		address = objects->offset;
-		igt_info(" [%d] handle: %u, reloc_count: %d, reloc_ptr: %p, "
-			 "align: 0x%llx, offset: 0x%" PRIx64 ", flags: 0x%llx, "
-			 "rsvd1: 0x%llx, rsvd2: 0x%llx\n",
-			 i, objects->handle, objects->relocation_count,
-			 relocs,
-			 objects->alignment,
-			 address,
-			 objects->flags,
-			 objects->rsvd1, objects->rsvd2);
+		igt_debug(" [%d] handle: %u, reloc_count: %d, reloc_ptr: %p, "
+			  "align: 0x%llx, offset: 0x%" PRIx64 ", flags: 0x%llx, "
+			  "rsvd1: 0x%llx, rsvd2: 0x%llx\n",
+			  i, objects->handle, objects->relocation_count,
+			  relocs,
+			  objects->alignment,
+			  address,
+			  objects->flags,
+			  objects->rsvd1, objects->rsvd2);
 		if (objects->relocation_count) {
-			igt_info("\texecbuf relocs:\n");
+			igt_debug("\texecbuf relocs:\n");
 			for (j = 0; j < objects->relocation_count; j++) {
 				reloc = &relocs[j];
 				address = reloc->presumed_offset;
-				igt_info("\t [%d] target handle: %u, "
-					 "offset: 0x%llx, delta: 0x%x, "
-					 "presumed_offset: 0x%" PRIx64 ", "
-					 "read_domains: 0x%x, "
-					 "write_domain: 0x%x\n",
-					 j, reloc->target_handle,
-					 reloc->offset, reloc->delta,
-					 address,
-					 reloc->read_domains,
-					 reloc->write_domain);
+				igt_debug("\t [%d] target handle: %u, "
+					  "offset: 0x%llx, delta: 0x%x, "
+					  "presumed_offset: 0x%" PRIx64 ", "
+					  "read_domains: 0x%x, "
+					  "write_domain: 0x%x\n",
+					  j, reloc->target_handle,
+					  reloc->offset, reloc->delta,
+					  address,
+					  reloc->read_domains,
+					  reloc->write_domain);
 			}
 		}
 	}
@@ -2069,6 +2278,12 @@ static void print_node(const void *node, VISIT which, int depth)
 	}
 }
 
+void intel_bb_dump_cache(struct intel_bb *ibb)
+{
+	igt_info("[pid: %ld] dump cache\n", (long) getpid());
+	twalk(ibb->root, print_node);
+}
+
 static struct drm_i915_gem_exec_object2 *
 create_objects_array(struct intel_bb *ibb)
 {
@@ -2078,8 +2293,10 @@ create_objects_array(struct intel_bb *ibb)
 	objects = malloc(sizeof(*objects) * ibb->num_objects);
 	igt_assert(objects);
 
-	for (i = 0; i < ibb->num_objects; i++)
+	for (i = 0; i < ibb->num_objects; i++) {
 		objects[i] = *(ibb->objects[i]);
+		objects[i].offset = CANONICAL(objects[i].offset);
+	}
 
 	return objects;
 }
@@ -2094,7 +2311,10 @@ static void update_offsets(struct intel_bb *ibb,
 		object = intel_bb_find_object(ibb, objects[i].handle);
 		igt_assert(object);
 
-		object->offset = objects[i].offset;
+		object->offset = DECANONICAL(objects[i].offset);
+
+		if (i == 0)
+			ibb->batch_offset = object->offset;
 	}
 }
 
@@ -2122,6 +2342,7 @@ static int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	ibb->objects[0]->relocs_ptr = to_user_pointer(ibb->relocs);
 	ibb->objects[0]->relocation_count = ibb->num_relocs;
 	ibb->objects[0]->handle = ibb->handle;
+	ibb->objects[0]->offset = ibb->batch_offset;
 
 	gem_write(ibb->i915, ibb->handle, 0, ibb->batch, ibb->size);
 
@@ -2206,7 +2427,6 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 {
 	struct drm_i915_gem_exec_object2 object = { .handle = handle };
 	struct drm_i915_gem_exec_object2 **found;
-	uint64_t address;
 
 	igt_assert(ibb);
 
@@ -2214,12 +2434,7 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 	if (!found)
 		return INTEL_BUF_INVALID_ADDRESS;
 
-	address = (*found)->offset;
-
-	if (address == INTEL_BUF_INVALID_ADDRESS)
-		return address;
-
-	return address & (ibb->gtt_size - 1);
+	return (*found)->offset;
 }
 
 /**
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 76989eaae..b9a4ee7e8 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -8,6 +8,7 @@
 #include "igt_core.h"
 #include "intel_reg.h"
 #include "drmtest.h"
+#include "intel_allocator.h"
 
 #define BATCH_SZ 4096
 #define BATCH_RESERVED 16
@@ -441,6 +442,9 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
  * Batchbuffer without libdrm dependency
  */
 struct intel_bb {
+	uint64_t allocator_handle;
+	uint8_t allocator_type;
+
 	int i915;
 	unsigned int gen;
 	bool debug;
@@ -454,9 +458,9 @@ struct intel_bb {
 	uint64_t alignment;
 	int fence;
 
-	uint32_t prng;
 	uint64_t gtt_size;
 	bool supports_48b_address;
+	bool uses_full_ppgtt;
 
 	uint32_t ctx;
 
@@ -484,6 +488,8 @@ struct intel_bb {
 	int32_t refcount;
 };
 
+struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+				      uint8_t allocator_type);
 struct intel_bb *intel_bb_create(int i915, uint32_t size);
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size);
@@ -510,6 +516,7 @@ void intel_bb_dump(struct intel_bb *ibb, const char *filename);
 void intel_bb_set_debug(struct intel_bb *ibb, bool debug);
 void intel_bb_set_dump_base64(struct intel_bb *ibb, bool dump);
 
+/*
 static inline uint64_t
 intel_bb_set_default_object_alignment(struct intel_bb *ibb, uint64_t alignment)
 {
@@ -525,6 +532,7 @@ intel_bb_get_default_object_alignment(struct intel_bb *ibb)
 {
 	return ibb->alignment;
 }
+*/
 
 static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
 {
@@ -574,11 +582,16 @@ static inline void intel_bb_out(struct intel_bb *ibb, uint32_t dword)
 }
 
 struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
-		    uint64_t offset, bool write);
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write);
+bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
+			    uint64_t offset, uint64_t size);
 struct drm_i915_gem_exec_object2 *
 intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write);
-
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
+				      uint64_t alignment, bool write);
+bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle);
 
@@ -625,6 +638,8 @@ uint64_t intel_bb_offset_reloc_to_object(struct intel_bb *ibb,
 					 uint32_t offset,
 					 uint64_t presumed_offset);
 
+void intel_bb_dump_cache(struct intel_bb *ibb);
+
 void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 		   uint64_t flags, bool sync);
 
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 1c960891f..7a50189da 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -810,11 +810,11 @@ static void offset_control(struct buf_ops *bops)
 	dst2 = create_buf(bops, WIDTH, HEIGHT, COLOR_77);
 
 	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
-			    src->addr.offset, false);
+			    src->addr.offset, 0, false);
 	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
-			    dst1->addr.offset, true);
+			    dst1->addr.offset, 0, true);
 	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
-			    dst2->addr.offset, true);
+			    dst2->addr.offset, 0, true);
 
 	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
 	intel_bb_ptr_align(ibb, 8);
@@ -838,13 +838,13 @@ static void offset_control(struct buf_ops *bops)
 
 	dst3 = create_buf(bops, WIDTH, HEIGHT, COLOR_33);
 	intel_bb_add_object(ibb, dst3->handle, intel_buf_bo_size(dst3),
-			    dst3->addr.offset, true);
+			    dst3->addr.offset, 0, true);
 	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
-			    src->addr.offset, false);
+			    src->addr.offset, 0, false);
 	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
-			    dst1->addr.offset, true);
+			    dst1->addr.offset, 0, true);
 	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
-			    dst2->addr.offset, true);
+			    dst2->addr.offset, 0, true);
 
 	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
 	intel_bb_ptr_align(ibb, 8);
@@ -901,7 +901,7 @@ static void delta_check(struct buf_ops *bops)
 	buf = create_buf(bops, 0x1000, 0x10, COLOR_CC);
 	buf->addr.offset = 0xfffff000;
 	intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
-			    buf->addr.offset, false);
+			    buf->addr.offset, 0, false);
 
 	intel_bb_out(ibb, MI_STORE_DWORD_IMM);
 	intel_bb_emit_reloc(ibb, buf->handle,
diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index de927e32e..c7145832f 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -614,8 +614,8 @@ static void blt_coherency(int i915)
 	dst = create_bo(bops, 1, width, height);
 	size = src->surface[0].size;
 
-	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, false);
-	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, true);
+	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, 0, false);
+	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, 0, true);
 
 	intel_bb_blt_copy(ibb,
 			  src, 0, 0, src->surface[0].stride,
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 15/39] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (13 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 14/39] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 16/39] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
                   ` (25 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

We want to use relocations for as long as we can, so change the
intel-bb strategy to use them by default up to gen12. On gen12 we
have to use softpinning for the ccs aux tables, so there we enforce
using the allocator instead of relocations.
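
As a minimal sketch (not part of this patch), the policy a test sees
through the default constructors boils down to:

	struct intel_bb *ibb = intel_bb_create(i915, 4096);

	/* gen < 12 with kernel relocation support: relocations are
	 * enforced and no allocator is opened; on gen12+ (or without
	 * relocation support) the simple allocator is used instead.
	 */
	igt_assert(ibb->enforce_relocs ||
		   ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE);

	intel_bb_destroy(ibb);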

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_batchbuffer.c | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index c9a6c8909..388395a94 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1275,25 +1275,25 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	igt_assert(ibb);
 
 	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
+	ibb->devid = intel_get_drm_devid(i915);
+	ibb->gen = intel_gen(ibb->devid);
 
 	/*
 	 * If we don't have full ppgtt driver can change our addresses
 	 * so allocator is useless in this case. Just enforce relocations
 	 * for such gens and don't use allocator at all.
 	 */
-	if (!ibb->uses_full_ppgtt) {
+	if (!ibb->uses_full_ppgtt)
 		do_relocs = true;
-		allocator_type = INTEL_ALLOCATOR_NONE;
-	}
 
-	if (!do_relocs)
-		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+	/* if relocs are set we won't use an allocator */
+	if (do_relocs)
+		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		igt_assert(allocator_type == INTEL_ALLOCATOR_NONE);
+		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+
 	ibb->allocator_type = allocator_type;
 	ibb->i915 = i915;
-	ibb->devid = intel_get_drm_devid(i915);
-	ibb->gen = intel_gen(ibb->devid);
 	ibb->enforce_relocs = do_relocs;
 	ibb->handle = gem_create(i915, size);
 	ibb->size = size;
@@ -1327,7 +1327,7 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
  * in @allocator_type. Relocations are set to false because IGT allocator
- * is not used in that case.
+ * is used in that case.
  *
  * Returns:
  *
@@ -1339,6 +1339,11 @@ struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
 	return __intel_bb_create(i915, ctx, size, false, allocator_type);
 }
 
+static bool aux_needs_softpin(int i915)
+{
+	return intel_gen(intel_get_drm_devid(i915)) >= 12;
+}
+
 /**
  * intel_bb_create:
  * @i915: drm fd
@@ -1361,7 +1366,11 @@ struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
  */
 struct intel_bb *intel_bb_create(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, false, INTEL_ALLOCATOR_SIMPLE);
+	bool relocs = gem_has_relocations(i915);
+
+	return __intel_bb_create(i915, 0, size,
+				 relocs && !aux_needs_softpin(i915),
+				 INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1379,7 +1388,11 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, false, INTEL_ALLOCATOR_SIMPLE);
+	bool relocs = gem_has_relocations(i915);
+
+	return __intel_bb_create(i915, ctx, size,
+				 relocs && !aux_needs_softpin(i915),
+				 INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1396,6 +1409,8 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
  */
 struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
+	igt_require(gem_has_relocations(i915));
+
 	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
 }
 
@@ -1415,6 +1430,8 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
+	igt_require(gem_has_relocations(i915));
+
 	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
 }
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 16/39] lib/intel_batchbuffer: Create bb with strategy / vm ranges
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (14 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 15/39] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 17/39] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
                   ` (24 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Previously intel-bb just used the default allocator settings (safe
values, chosen to work out of the box). That limitation is a problem
if we want to exercise non-standard cases (specific vm ranges, for
example). As the allocator already accepts the full set of settings,
let intel-bb pass them through as well.
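
A minimal usage sketch (the range values below are illustrative only):

	struct intel_bb *ibb;

	/* Constrain allocations to a 256 MiB slice of the vm and hand
	 * out addresses from the top of that range downwards.
	 */
	ibb = intel_bb_create_full(i915, 0, 4096,
				   0x10000000, 0x20000000,
				   INTEL_ALLOCATOR_SIMPLE,
				   ALLOC_STRATEGY_HIGH_TO_LOW);

	intel_bb_destroy(ibb);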

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_batchbuffer.c | 63 +++++++++++++++++++++++++++++++++--------
 lib/intel_batchbuffer.h | 10 +++++--
 2 files changed, 59 insertions(+), 14 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 388395a94..045f3f157 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1267,7 +1267,8 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  */
 static struct intel_bb *
 __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
-		  uint8_t allocator_type)
+		  uint64_t start, uint64_t end,
+		  uint8_t allocator_type, enum allocator_strategy strategy)
 {
 	struct drm_i915_gem_exec_object2 *object;
 	struct intel_bb *ibb = calloc(1, sizeof(*ibb));
@@ -1290,9 +1291,12 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	if (do_relocs)
 		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
-
+		ibb->allocator_handle = intel_allocator_open_full(i915, ctx,
+								  start, end,
+								  allocator_type,
+								  strategy);
 	ibb->allocator_type = allocator_type;
+	ibb->allocator_strategy = strategy;
 	ibb->i915 = i915;
 	ibb->enforce_relocs = do_relocs;
 	ibb->handle = gem_create(i915, size);
@@ -1323,20 +1327,51 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
  * @i915: drm fd
  * @ctx: context
  * @size: size of the batchbuffer
+ * @start: allocator vm start address
+ * @end: allocator vm end address
  * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ * @strategy: allocation strategy
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
  * in @allocator_type. Relocations are set to false because IGT allocator
- * is used in that case.
+ * is used in that case. VM range is passed to allocator (@start and @end)
+ * and allocation @strategy (suggestion to allocator about address allocation
+ * preferences).
  *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
  */
 struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
-				      uint8_t allocator_type)
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy)
+{
+	return __intel_bb_create(i915, ctx, size, false, start, end,
+				 allocator_type, strategy);
+}
+
+/**
+ * intel_bb_create_with_allocator:
+ * @i915: drm fd
+ * @ctx: context
+ * @size: size of the batchbuffer
+ * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ *
+ * Creates bb with context passed in @ctx, size in @size and allocator type
+ * in @allocator_type. Relocations are set to false because IGT allocator
+ * is used in that case.
+ *
+ * Returns:
+ *
+ * Pointer to the intel_bb, asserts on failure.
+ */
+struct intel_bb *intel_bb_create_with_allocator(int i915, uint32_t ctx,
+						uint32_t size,
+						uint8_t allocator_type)
 {
-	return __intel_bb_create(i915, ctx, size, false, allocator_type);
+	return __intel_bb_create(i915, ctx, size, false, 0, 0,
+				 allocator_type, ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 static bool aux_needs_softpin(int i915)
@@ -1369,8 +1404,9 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 	bool relocs = gem_has_relocations(i915);
 
 	return __intel_bb_create(i915, 0, size,
-				 relocs && !aux_needs_softpin(i915),
-				 INTEL_ALLOCATOR_SIMPLE);
+				 relocs && !aux_needs_softpin(i915), 0, 0,
+				 INTEL_ALLOCATOR_SIMPLE,
+				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1391,8 +1427,9 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 	bool relocs = gem_has_relocations(i915);
 
 	return __intel_bb_create(i915, ctx, size,
-				 relocs && !aux_needs_softpin(i915),
-				 INTEL_ALLOCATOR_SIMPLE);
+				 relocs && !aux_needs_softpin(i915), 0, 0,
+				 INTEL_ALLOCATOR_SIMPLE,
+				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1411,7 +1448,8 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
 	igt_require(gem_has_relocations(i915));
 
-	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
+	return __intel_bb_create(i915, 0, size, true, 0, 0,
+				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 /**
@@ -1432,7 +1470,8 @@ intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
 	igt_require(gem_has_relocations(i915));
 
-	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
+	return __intel_bb_create(i915, ctx, size, true, 0, 0,
+				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 static void __intel_bb_destroy_relocations(struct intel_bb *ibb)
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index b9a4ee7e8..702052d22 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -444,6 +444,7 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
 struct intel_bb {
 	uint64_t allocator_handle;
 	uint8_t allocator_type;
+	enum allocator_strategy allocator_strategy;
 
 	int i915;
 	unsigned int gen;
@@ -488,8 +489,13 @@ struct intel_bb {
 	int32_t refcount;
 };
 
-struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
-				      uint8_t allocator_type);
+struct intel_bb *
+intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+		     uint64_t start, uint64_t end,
+		     uint8_t allocator_type, enum allocator_strategy strategy);
+struct intel_bb *
+intel_bb_create_with_allocator(int i915, uint32_t ctx,
+			       uint32_t size, uint8_t allocator_type);
 struct intel_bb *intel_bb_create(int i915, uint32_t size);
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 17/39] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (15 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 16/39] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 18/39] lib/intel_batchbuffer: Don't collect relocations for newer gens Zbigniew Kempczyński
                   ` (23 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

From now on intel_bb tracks added/removed intel_bufs. We're now safe
regardless of the order of the intel_buf close/destroy and intel_bb
destroy paths.

When an intel_buf is closed/destroyed first and was previously added
to an intel_bb, it calls the code which removes it from that intel_bb.
In the destroy path we go over all tracked intel_bufs and clear the
tracking information and the buffer offset (it is set to
INTEL_BUF_INVALID_ADDRESS).

The reset path is handled as follows:
- intel_bb_reset(ibb, false) - just clean the objects array, leaving
  the cache / allocator state intact.
- intel_bb_reset(ibb, true) - purge the cache as well as detach
  intel_bufs from the intel_bb (releasing their offsets in the
  allocator).

Remove the intel_bb_object_offset_to_buf() function, as the tracking
code now updates (or, for the allocator, verifies) intel_buf offsets
after execbuf.

Alter api_intel_bb according to the intel-bb changes.
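
A rough sketch of the lifecycle this makes safe (using the intel_buf
helpers as elsewhere in the tree):

	struct buf_ops *bops = buf_ops_create(i915);
	struct intel_bb *ibb = intel_bb_create(i915, 4096);
	struct intel_buf *buf = intel_buf_create(bops, 512, 512, 32, 0,
						 I915_TILING_NONE,
						 I915_COMPRESSION_NONE);

	intel_bb_add_intel_buf(ibb, buf, false);

	/* Destroying the buf first is now fine: it removes itself from
	 * ibb and its offset is reset to INTEL_BUF_INVALID_ADDRESS.
	 */
	intel_buf_destroy(buf);
	intel_bb_destroy(ibb);
	buf_ops_destroy(bops);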

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/intel_batchbuffer.c   | 186 ++++++++++++++++++++++++++++----------
 lib/intel_batchbuffer.h   |  29 +++---
 lib/intel_bufops.c        |  13 ++-
 lib/intel_bufops.h        |   6 ++
 lib/media_spin.c          |   2 -
 tests/i915/api_intel_bb.c |   7 --
 6 files changed, 169 insertions(+), 74 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 045f3f157..1b234b14d 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1261,6 +1261,8 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * If we do reset without purging caches we use addresses from intel-bb cache
  * during execbuf objects construction.
  *
+ * If we do a reset with cache purging, allocator entries are freed as well.
+ *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
@@ -1303,6 +1305,7 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	ibb->size = size;
 	ibb->alignment = 4096;
 	ibb->ctx = ctx;
+	ibb->vm_id = 0;
 	ibb->batch = calloc(1, size);
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
@@ -1317,6 +1320,8 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 				     false);
 	ibb->batch_offset = object->offset;
 
+	IGT_INIT_LIST_HEAD(&ibb->intel_bufs);
+
 	ibb->refcount = 1;
 
 	return ibb;
@@ -1508,6 +1513,22 @@ static void __intel_bb_destroy_cache(struct intel_bb *ibb)
 	ibb->root = NULL;
 }
 
+static void __intel_bb_detach_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry, *tmp;
+
+	igt_list_for_each_entry_safe(entry, tmp, &ibb->intel_bufs, link)
+		intel_bb_detach_intel_buf(ibb, entry);
+}
+
+static void __intel_bb_remove_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry, *tmp;
+
+	igt_list_for_each_entry_safe(entry, tmp, &ibb->intel_bufs, link)
+		intel_bb_remove_intel_buf(ibb, entry);
+}
+
 /**
  * intel_bb_destroy:
  * @ibb: pointer to intel_bb
@@ -1521,6 +1542,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
 	ibb->refcount--;
 	igt_assert_f(ibb->refcount == 0, "Trying to destroy referenced bb!");
 
+	__intel_bb_remove_intel_bufs(ibb);
 	__intel_bb_destroy_relocations(ibb);
 	__intel_bb_destroy_objects(ibb);
 	__intel_bb_destroy_cache(ibb);
@@ -1544,6 +1566,10 @@ void intel_bb_destroy(struct intel_bb *ibb)
  * @purge_objects_cache: if true destroy internal execobj and relocs + cache
  *
  * Recreate batch bo when there's no additional reference.
+ *
+ * When @purge_objects_cache is true we destroy the cache as well as remove
+ * intel_bufs from the intel-bb tracking list. Removing intel_bufs releases
+ * their addresses in the allocator.
 */
 
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
@@ -1569,8 +1595,10 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
 
-	if (purge_objects_cache)
+	if (purge_objects_cache) {
+		__intel_bb_remove_intel_bufs(ibb);
 		__intel_bb_destroy_cache(ibb);
+	}
 
 	/*
 	 * When we use allocators we're in no-reloc mode so we have to free
@@ -1621,6 +1649,50 @@ int intel_bb_sync(struct intel_bb *ibb)
 	return ret;
 }
 
+uint64_t intel_bb_assign_vm(struct intel_bb *ibb, uint64_t allocator,
+			    uint32_t vm_id)
+{
+	struct drm_i915_gem_context_param arg = {
+		.param = I915_CONTEXT_PARAM_VM,
+	};
+	uint64_t prev_allocator = ibb->allocator_handle;
+	bool closed = false;
+
+	if (ibb->vm_id == vm_id) {
+		igt_debug("Skipping assignment of the same vm_id: %u\n", vm_id);
+		return 0;
+	}
+
+	/* Cannot switch if someone keeps bb refcount */
+	igt_assert(ibb->refcount == 1);
+
+	/* Detach intel_bufs and remove bb handle */
+	__intel_bb_detach_intel_bufs(ibb);
+	intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset, ibb->size);
+
+	/* Cache + objects are not valid after change anymore */
+	__intel_bb_destroy_objects(ibb);
+	__intel_bb_destroy_cache(ibb);
+
+	/* Attach new allocator */
+	ibb->allocator_handle = allocator;
+
+	/* Setparam */
+	ibb->vm_id = vm_id;
+
+	/* Skip set param, we likely return to default vm */
+	if (vm_id) {
+		arg.ctx_id = ibb->ctx;
+		arg.value = vm_id;
+		gem_context_set_param(ibb->i915, &arg);
+	}
+
+	/* Recreate bb */
+	intel_bb_reset(ibb, false);
+
+	return closed ? 0 : prev_allocator;
+}
+
 /*
  * intel_bb_print:
  * @ibb: pointer to intel_bb
@@ -1875,22 +1947,13 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 
 	object->offset = offset;
 
-	/* Limit current offset to gtt size */
-	if (offset != INTEL_BUF_INVALID_ADDRESS) {
-		object->offset = CANONICAL(offset & (ibb->gtt_size - 1));
-	} else {
-		object->offset = __intel_bb_get_offset(ibb,
-						       handle, size,
-						       object->alignment);
-	}
-
 	if (write)
 		object->flags |= EXEC_OBJECT_WRITE;
 
 	if (ibb->supports_48b_address)
 		object->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
-	if (ibb->uses_full_ppgtt && ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+	if (ibb->uses_full_ppgtt && !ibb->enforce_relocs)
 		object->flags |= EXEC_OBJECT_PINNED;
 
 	return object;
@@ -1927,6 +1990,9 @@ __intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
 {
 	struct drm_i915_gem_exec_object2 *obj;
 
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
 	igt_assert(ALIGN(alignment, 4096) == alignment);
 
 	if (!alignment) {
@@ -1951,6 +2017,13 @@ __intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
 	if (!ibb->enforce_relocs)
 		obj->alignment = alignment;
 
+	if (igt_list_empty(&buf->link)) {
+		igt_list_add_tail(&buf->link, &ibb->intel_bufs);
+		buf->ibb = ibb;
+	} else {
+		igt_assert(buf->ibb == ibb);
+	}
+
 	return obj;
 }
 
@@ -1967,18 +2040,53 @@ intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *bu
 	return __intel_bb_add_intel_buf(ibb, buf, alignment, write);
 }
 
+void intel_bb_detach_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
+{
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
+
+	if (!igt_list_empty(&buf->link)) {
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		igt_list_del_init(&buf->link);
+	}
+}
+
 bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
 {
-	bool removed = intel_bb_remove_object(ibb, buf->handle,
-					      buf->addr.offset,
-					      intel_buf_bo_size(buf));
+	bool removed;
 
-	if (removed)
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
+
+	if (igt_list_empty(&buf->link))
+		return false;
+
+	removed = intel_bb_remove_object(ibb, buf->handle,
+					 buf->addr.offset,
+					 intel_buf_bo_size(buf));
+	if (removed) {
 		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		igt_list_del_init(&buf->link);
+	}
 
 	return removed;
 }
 
+void intel_bb_print_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry;
+
+	igt_list_for_each_entry(entry, &ibb->intel_bufs, link) {
+		igt_info("handle: %u, ibb: %p, offset: %lx\n",
+			 entry->handle, entry->ibb,
+			 (long) entry->addr.offset);
+	}
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 {
@@ -2361,6 +2469,7 @@ static void update_offsets(struct intel_bb *ibb,
 			   struct drm_i915_gem_exec_object2 *objects)
 {
 	struct drm_i915_gem_exec_object2 *object;
+	struct intel_buf *entry;
 	uint32_t i;
 
 	for (i = 0; i < ibb->num_objects; i++) {
@@ -2372,11 +2481,23 @@ static void update_offsets(struct intel_bb *ibb,
 		if (i == 0)
 			ibb->batch_offset = object->offset;
 	}
+
+	igt_list_for_each_entry(entry, &ibb->intel_bufs, link) {
+		object = intel_bb_find_object(ibb, entry->handle);
+		igt_assert(object);
+
+		if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+			igt_assert(object->offset == entry->addr.offset);
+		else
+			entry->addr.offset = object->offset;
+
+		entry->addr.ctx = ibb->ctx;
+	}
 }
 
 #define LINELEN 76
 /*
- * @__intel_bb_exec:
+ * __intel_bb_exec:
  * @ibb: pointer to intel_bb
  * @end_offset: offset of the last instruction in the bb
  * @flags: flags passed directly to execbuf
@@ -2416,6 +2537,9 @@ static int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	if (ibb->dump_base64)
 		intel_bb_dump_base64(ibb, LINELEN);
 
+	/* For debugging on CI, remove in final series */
+	intel_bb_dump_execbuf(ibb, &execbuf);
+
 	ret = __gem_execbuf_wr(ibb->i915, &execbuf);
 	if (ret) {
 		intel_bb_dump_execbuf(ibb, &execbuf);
@@ -2493,36 +2617,6 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 	return (*found)->offset;
 }
 
-/**
- * intel_bb_object_offset_to_buf:
- * @ibb: pointer to intel_bb
- * @buf: buffer we want to store last exec offset and context id
- *
- * Copy object offset used in the batch to intel_buf to allow caller prepare
- * other batch likely without relocations.
- */
-bool intel_bb_object_offset_to_buf(struct intel_bb *ibb, struct intel_buf *buf)
-{
-	struct drm_i915_gem_exec_object2 object = { .handle = buf->handle };
-	struct drm_i915_gem_exec_object2 **found;
-
-	igt_assert(ibb);
-	igt_assert(buf);
-
-	found = tfind((void *)&object, &ibb->root, __compare_objects);
-	if (!found) {
-		buf->addr.offset = 0;
-		buf->addr.ctx = ibb->ctx;
-
-		return false;
-	}
-
-	buf->addr.offset = (*found)->offset & (ibb->gtt_size - 1);
-	buf->addr.ctx = ibb->ctx;
-
-	return true;
-}
-
 /*
  * intel_bb_emit_bbe:
  * @ibb: batchbuffer
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 702052d22..6f148713b 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -6,6 +6,7 @@
 #include <i915_drm.h>
 
 #include "igt_core.h"
+#include "igt_list.h"
 #include "intel_reg.h"
 #include "drmtest.h"
 #include "intel_allocator.h"
@@ -464,6 +465,7 @@ struct intel_bb {
 	bool uses_full_ppgtt;
 
 	uint32_t ctx;
+	uint32_t vm_id;
 
 	/* Cache */
 	void *root;
@@ -481,6 +483,9 @@ struct intel_bb {
 	uint32_t num_relocs;
 	uint32_t allocated_relocs;
 
+	/* Tracked intel_bufs */
+	struct igt_list_head intel_bufs;
+
 	/*
 	 * BO recreate in reset path only when refcount == 0
 	 * Currently we don't need to use atomics because intel_bb
@@ -517,29 +522,15 @@ static inline void intel_bb_unref(struct intel_bb *ibb)
 
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache);
 int intel_bb_sync(struct intel_bb *ibb);
+
+uint64_t intel_bb_assign_vm(struct intel_bb *ibb, uint64_t allocator,
+			    uint32_t vm_id);
+
 void intel_bb_print(struct intel_bb *ibb);
 void intel_bb_dump(struct intel_bb *ibb, const char *filename);
 void intel_bb_set_debug(struct intel_bb *ibb, bool debug);
 void intel_bb_set_dump_base64(struct intel_bb *ibb, bool dump);
 
-/*
-static inline uint64_t
-intel_bb_set_default_object_alignment(struct intel_bb *ibb, uint64_t alignment)
-{
-	uint64_t old = ibb->alignment;
-
-	ibb->alignment = alignment;
-
-	return old;
-}
-
-static inline uint64_t
-intel_bb_get_default_object_alignment(struct intel_bb *ibb)
-{
-	return ibb->alignment;
-}
-*/
-
 static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
 {
 	return (uint32_t) ((uint8_t *) ibb->ptr - (uint8_t *) ibb->batch);
@@ -597,7 +588,9 @@ intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write);
 struct drm_i915_gem_exec_object2 *
 intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
 				      uint64_t alignment, bool write);
+void intel_bb_detach_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
 bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
+void intel_bb_print_intel_bufs(struct intel_bb *ibb);
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle);
 
diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index 00be2bd04..3c1cca0cf 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -727,6 +727,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 
 	buf->bops = bops;
 	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+	IGT_INIT_LIST_HEAD(&buf->link);
 
 	if (compression) {
 		int aux_width, aux_height;
@@ -827,13 +828,23 @@ void intel_buf_init(struct buf_ops *bops,
  *
  * Function closes gem BO inside intel_buf if bo is owned by intel_buf.
  * For handle passed from the caller intel_buf doesn't take ownership and
- * doesn't close it in close()/destroy() paths.
+ * doesn't close it in close()/destroy() paths. When intel_buf was previously
+ * added to intel_bb (intel_bb_add_intel_buf() call) it is tracked there and
+ * must be removed from its internal structures.
  */
 void intel_buf_close(struct buf_ops *bops, struct intel_buf *buf)
 {
 	igt_assert(bops);
 	igt_assert(buf);
 
+	/* If buf is tracked by some intel_bb ensure it will be removed there */
+	if (buf->ibb) {
+		intel_bb_remove_intel_buf(buf->ibb, buf);
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		IGT_INIT_LIST_HEAD(&buf->link);
+	}
+
 	if (buf->is_owner)
 		gem_close(bops->fd, buf->handle);
 }
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 54480bff6..1a3d86925 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -2,6 +2,7 @@
 #define __INTEL_BUFOPS_H__
 
 #include <stdint.h>
+#include "igt_list.h"
 #include "igt_aux.h"
 #include "intel_batchbuffer.h"
 
@@ -13,6 +14,7 @@ struct buf_ops;
 
 struct intel_buf {
 	struct buf_ops *bops;
+
 	bool is_owner;
 	uint32_t handle;
 	uint64_t size;
@@ -40,6 +42,10 @@ struct intel_buf {
 		uint32_t ctx;
 	} addr;
 
+	/* Tracking */
+	struct intel_bb *ibb;
+	struct igt_list_head link;
+
 	/* CPU mapping */
 	uint32_t *ptr;
 	bool cpu_write;
diff --git a/lib/media_spin.c b/lib/media_spin.c
index 5da469a52..d2345d153 100644
--- a/lib/media_spin.c
+++ b/lib/media_spin.c
@@ -132,7 +132,6 @@ gen8_media_spinfunc(int i915, struct intel_buf *buf, uint32_t spins)
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 
-	intel_bb_object_offset_to_buf(ibb, buf);
 	intel_bb_destroy(ibb);
 }
 
@@ -186,6 +185,5 @@ gen9_media_spinfunc(int i915, struct intel_buf *buf, uint32_t spins)
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 
-	intel_bb_object_offset_to_buf(ibb, buf);
 	intel_bb_destroy(ibb);
 }
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 7a50189da..78035bc5b 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -828,9 +828,6 @@ static void offset_control(struct buf_ops *bops)
 		print_buf(dst2, "dst2");
 	}
 
-	igt_assert(intel_bb_object_offset_to_buf(ibb, src) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst1) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst2) == true);
 	poff_src = src->addr.offset;
 	poff_dst1 = dst1->addr.offset;
 	poff_dst2 = dst2->addr.offset;
@@ -853,10 +850,6 @@ static void offset_control(struct buf_ops *bops)
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 	intel_bb_sync(ibb);
 
-	igt_assert(intel_bb_object_offset_to_buf(ibb, src) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst1) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst2) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst3) == true);
 	igt_assert(poff_src == src->addr.offset);
 	igt_assert(poff_dst1 == dst1->addr.offset);
 	igt_assert(poff_dst2 == dst2->addr.offset);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 18/39] lib/intel_batchbuffer: Don't collect relocations for newer gens
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (16 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 17/39] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 19/39] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
                   ` (22 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Preparing universal batches for rendercopy, gpgpu fill and media fill
must take into account whether the kernel supports relocations for
the specified gen or not.

In intel-bb we assume relocations are supported, so the user calls
intel_bb_emit_reloc()-like functions to emit an address in the batch,
which collects the relocations required for the execbuf call in
intel-bb. For newer gens the kernel will reject such relocations with
-EINVAL, so we should just provide the offset (returned from the
allocator) and skip adding the relocation entry to intel-bb.
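
A minimal caller-side sketch (the domain and delta values are
illustrative only, assuming the intel_bb_emit_reloc() signature used
elsewhere in the tree):

	uint64_t addr;

	/* The same call works in both modes. */
	addr = intel_bb_emit_reloc(ibb, buf->handle,
				   I915_GEM_DOMAIN_RENDER, 0,
				   0, buf->addr.offset);

	/* In reloc mode a relocation entry is queued for execbuf; in
	 * no-reloc mode nothing is queued and addr is simply the
	 * allocator-assigned offset written into the batch.
	 */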

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 1b234b14d..0b2c5b21e 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -2148,7 +2148,8 @@ intel_bb_object_clear_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
  * @delta: delta value to add to @buffer's gpu address
  * @offset: offset within bb to be patched
  *
- * Function allocates additional relocation slot in reloc array for a handle.
+ * When relocations are requested the function allocates an additional
+ * relocation slot in the reloc array for the handle.
  * Object must be previously added to bb.
  */
 static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
@@ -2167,6 +2168,10 @@ static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 	object = intel_bb_find_object(ibb, handle);
 	igt_assert(object);
 
+	/* In no-reloc mode we just return the previously assigned address */
+	if (!ibb->enforce_relocs)
+		goto out;
+
 	/* For ibb we have relocs allocated in chunks */
 	if (to_handle == ibb->handle) {
 		relocs = ibb->relocs;
@@ -2207,6 +2212,7 @@ static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 		  delta, offset,
 		  from_user_pointer(relocs[i].presumed_offset));
 
+out:
 	return object->offset;
 }
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 19/39] lib/igt_fb: Initialize intel_buf with same size as fb
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (17 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 18/39] lib/intel_batchbuffer: Don't collect relocations for newer gens Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 20/39] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
                   ` (21 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

We need to have the same size when an intel_buf is initialized over
an fb (with compression), because otherwise the allocator could be
called with a smaller size, which could lead to relocation.

Use the new intel_buf function which allows initialization with a
handle and a size.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/igt_fb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index f0fcd1a7f..259449d60 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2197,11 +2197,11 @@ igt_fb_create_intel_buf(int fd, struct buf_ops *bops,
 	bo_name = gem_flink(fd, fb->gem_handle);
 	handle = gem_open(fd, bo_name);
 
-	buf = intel_buf_create_using_handle(bops, handle,
-					    fb->width, fb->height,
-					    fb->plane_bpp[0], 0,
-					    igt_fb_mod_to_tiling(fb->modifier),
-					    compression);
+	buf = intel_buf_create_using_handle_and_size(bops, handle,
+						     fb->width, fb->height,
+						     fb->plane_bpp[0], 0,
+						     igt_fb_mod_to_tiling(fb->modifier),
+						     compression, fb->size);
 	intel_buf_set_name(buf, name);
 
 	/* Make sure we close handle on destroy path */
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 20/39] tests/api_intel_bb: Remove check-canonical test
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (18 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 19/39] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 21/39] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
                   ` (20 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

As intel-bb internally uses decanonical addresses for
objects/intel_bufs, checking the canonical bits makes no sense
anymore.
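
For reference, a canonical 48-bit GPU address just sign-extends bit 47
into the upper bits. A sketch of the conversion pair (matching the
semantics of gen8_canonical_addr()/DECANONICAL, shown here only as an
illustration):

	static inline uint64_t to_canonical(uint64_t address)
	{
		/* Sign-extend bit 47 into bits 48..63. */
		return (uint64_t)(((int64_t)address << 16) >> 16);
	}

	static inline uint64_t to_decanonical(uint64_t address)
	{
		/* Keep only the low 48 bits. */
		return address & ((1ull << 48) - 1);
	}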

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 44 ---------------------------------------
 1 file changed, 44 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 78035bc5b..61acb41d4 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -206,47 +206,6 @@ static void lot_of_buffers(struct buf_ops *bops)
 		intel_buf_destroy(buf[i]);
 }
 
-/*
- * Make sure intel-bb space allocator currently doesn't enter 47-48 bit
- * gtt sizes.
- */
-static void check_canonical(struct buf_ops *bops)
-{
-	int i915 = buf_ops_get_fd(bops);
-	struct intel_bb *ibb;
-	struct intel_buf *buf;
-	uint32_t offset;
-	uint64_t address;
-	bool supports_48bit;
-
-	ibb = intel_bb_create(i915, PAGE_SIZE);
-	supports_48bit = ibb->supports_48b_address;
-	if (!supports_48bit)
-		intel_bb_destroy(ibb);
-	igt_require_f(supports_48bit, "We need 48bit ppgtt for testing\n");
-
-	address = 0xc00000000000;
-	if (debug_bb)
-		intel_bb_set_debug(ibb, true);
-
-	offset = intel_bb_emit_bbe(ibb);
-
-	buf = intel_buf_create(bops, 512, 512, 32, 0,
-			       I915_TILING_NONE, I915_COMPRESSION_NONE);
-
-	buf->addr.offset = address;
-	intel_bb_add_intel_buf(ibb, buf, true);
-	intel_bb_object_set_flag(ibb, buf->handle, EXEC_OBJECT_PINNED);
-
-	igt_assert(buf->addr.offset == 0);
-
-	intel_bb_exec(ibb, offset,
-		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
-
-	intel_buf_destroy(buf);
-	intel_bb_destroy(ibb);
-}
-
 /*
  * Check flags are cleared after intel_bb_reset(ibb, false);
  */
@@ -1192,9 +1151,6 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-	igt_subtest("check-canonical")
-		check_canonical(bops);
-
 	igt_subtest("reset-flags")
 		reset_flags(bops);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v30 21/39] tests/api_intel_bb: Modify test to verify intel_bb with allocator
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (19 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 20/39] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 22/39] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
                   ` (19 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

intel_bb was adapted to use the allocator. Change the test to verify
addresses in different scenarios - with relocations and with softpin.

v2: adding an intel-buf to an intel-bb inserts its address, so addresses
    should stay the same even if the intel-bb cache was purged

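Condensed, the two creation paths the reworked test exercises (see the
diff below for full usage):

/* Relocation mode: offsets start at 0 and the kernel assigns them. */
ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);

/* Softpin mode: offsets come from the chosen IGT allocator. */
ibb = intel_bb_create_with_allocator(i915, 0 /* ctx */, PAGE_SIZE,
				     INTEL_ALLOCATOR_SIMPLE);
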
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 492 +++++++++++++++++++++++++++++---------
 1 file changed, 377 insertions(+), 115 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 61acb41d4..918ebd629 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -22,6 +22,7 @@
  */
 
 #include "igt.h"
+#include "i915/gem.h"
 #include <unistd.h>
 #include <stdlib.h>
 #include <stdio.h>
@@ -94,6 +95,7 @@ static void check_buf(struct intel_buf *buf, uint8_t color)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
+	gem_set_domain(i915, buf->handle, I915_GEM_DOMAIN_WC, 0);
 
 	for (i = 0; i < buf->surface[0].size; i++)
 		igt_assert(ptr[i] == color);
@@ -123,24 +125,34 @@ static void print_buf(struct intel_buf *buf, const char *name)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
-	igt_debug("[%s] Buf handle: %d, size: %" PRIx64 ", v: 0x%02x, presumed_addr: %p\n",
+	igt_debug("[%s] Buf handle: %d, size: %" PRIu64
+		  ", v: 0x%02x, presumed_addr: %p\n",
 		  name, buf->handle, buf->surface[0].size, ptr[0],
 		  from_user_pointer(buf->addr.offset));
 	munmap(ptr, buf->surface[0].size);
 }
 
+static void reset_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	intel_bb_reset(ibb, false);
+	intel_bb_destroy(ibb);
+}
+
 static void simple_bb(struct buf_ops *bops, bool use_context)
 {
 	int i915 = buf_ops_get_fd(bops);
 	struct intel_bb *ibb;
-	uint32_t ctx;
+	uint32_t ctx = 0;
 
-	if (use_context) {
+	if (use_context)
 		gem_require_contexts(i915);
-		ctx = gem_context_create(i915);
-	}
 
-	ibb = intel_bb_create(i915, PAGE_SIZE);
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
@@ -155,10 +167,8 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 	intel_bb_reset(ibb, false);
 	intel_bb_reset(ibb, true);
 
-	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
-	intel_bb_ptr_align(ibb, 8);
-
 	if (use_context) {
+		ctx = gem_context_create(i915);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(i915, ctx, PAGE_SIZE);
 		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
@@ -166,11 +176,10 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 		intel_bb_exec(ibb, intel_bb_offset(ibb),
 			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC,
 			      true);
+		gem_context_destroy(i915, ctx);
 	}
 
 	intel_bb_destroy(ibb);
-	if (use_context)
-		gem_context_destroy(i915, ctx);
 }
 
 /*
@@ -194,16 +203,20 @@ static void lot_of_buffers(struct buf_ops *bops)
 	for (i = 0; i < NUM_BUFS; i++) {
 		buf[i] = intel_buf_create(bops, 4096, 1, 8, 0, I915_TILING_NONE,
 					  I915_COMPRESSION_NONE);
-		intel_bb_add_intel_buf(ibb, buf[i], false);
+		if (i % 2)
+			intel_bb_add_intel_buf(ibb, buf[i], false);
+		else
+			intel_bb_add_intel_buf_with_alignment(ibb, buf[i],
+							      0x4000, false);
 	}
 
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
 
-	intel_bb_destroy(ibb);
-
 	for (i = 0; i < NUM_BUFS; i++)
 		intel_buf_destroy(buf[i]);
+
+	intel_bb_destroy(ibb);
 }
 
 /*
@@ -298,70 +311,287 @@ static void reset_flags(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void add_remove_objects(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
 
-#define MI_FLUSH_DW (0x26<<23)
-#define BCS_SWCTRL  0x22200
-#define BCS_SRC_Y   (1 << 0)
-#define BCS_DST_Y   (1 << 1)
-static void __emit_blit(struct intel_bb *ibb,
-			struct intel_buf *src, struct intel_buf *dst)
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
+static void destroy_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	/* Check destroy will detach intel_bufs */
+	intel_bb_destroy(ibb);
+	igt_assert(src->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(src->ibb == NULL);
+	igt_assert(mid->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(mid->ibb == NULL);
+	igt_assert(dst->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(dst->ibb == NULL);
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	intel_bb_destroy(ibb);
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+}
+
+static void object_reloc(struct buf_ops *bops, enum obj_cache_ops cache_op)
 {
-	uint32_t mask;
-	bool has_64b_reloc;
-	uint64_t address;
-
-	has_64b_reloc = ibb->gen >= 8;
-
-	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
-		intel_bb_out(ibb, MI_LOAD_REGISTER_IMM);
-		intel_bb_out(ibb, BCS_SWCTRL);
-
-		mask = (BCS_SRC_Y | BCS_DST_Y) << 16;
-		if (src->tiling == I915_TILING_Y)
-			mask |= BCS_SRC_Y;
-		if (dst->tiling == I915_TILING_Y)
-			mask |= BCS_DST_Y;
-		intel_bb_out(ibb, mask);
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t h1, h2;
+	uint64_t poff_bb, poff_h1, poff_h2;
+	uint64_t poff2_bb, poff2_h1, poff2_h2;
+	uint64_t flags = 0;
+	uint64_t shift = cache_op == PURGE_CACHE ? 0x2000 : 0x0;
+	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
+
+	ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	h1 = gem_create(i915, PAGE_SIZE);
+	h2 = gem_create(i915, PAGE_SIZE);
+
+	/* intel_bb_create adds the bb handle, so its offset is 0 for relocs */
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	igt_assert(poff_bb == 0);
+
+	/* Before adding to intel_bb it should return INVALID_ADDRESS */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[1] poff_h1: %lx\n", (long) poff_h1);
+	igt_debug("[1] poff_h2: %lx\n", (long) poff_h2);
+	igt_assert(poff_h1 == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(poff_h2 == INTEL_BUF_INVALID_ADDRESS);
+
+	intel_bb_add_object(ibb, h1, PAGE_SIZE, poff_h1, 0, true);
+	intel_bb_add_object(ibb, h2, PAGE_SIZE, poff_h2, 0x2000, true);
+
+	/*
+	 * Objects were added to bb, we expect initial addresses are zeroed
+	 * for relocs.
+	 */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_assert(poff_h1 == 0);
+	igt_assert(poff_h2 == 0);
+
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
+
+	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff2_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff2_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[2] poff2_h1: %lx\n", (long) poff2_h1);
+	igt_debug("[2] poff2_h2: %lx\n", (long) poff2_h2);
+	/* Some addresses won't be 0 */
+	igt_assert(poff2_bb | poff2_h1 | poff2_h2);
+
+	intel_bb_reset(ibb, purge_cache);
+
+	if (purge_cache) {
+		intel_bb_add_object(ibb, h1, PAGE_SIZE, poff2_h1, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2 + shift, 0x2000, true);
 	}
 
-	intel_bb_out(ibb,
-		     XY_SRC_COPY_BLT_CMD |
-		     XY_SRC_COPY_BLT_WRITE_ALPHA |
-		     XY_SRC_COPY_BLT_WRITE_RGB |
-		     (6 + 2 * has_64b_reloc));
-	intel_bb_out(ibb, 3 << 24 | 0xcc << 16 | dst->surface[0].stride);
-	intel_bb_out(ibb, 0);
-	intel_bb_out(ibb, intel_buf_height(dst) << 16 | intel_buf_width(dst));
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[3] poff_h1: %lx\n", (long) poff_h1);
+	igt_debug("[3] poff_h2: %lx\n", (long) poff_h2);
+	igt_debug("[3] poff2_h1: %lx\n", (long) poff2_h1);
+	igt_debug("[3] poff2_h2: %lx + shift (%lx)\n", (long) poff2_h2,
+		 (long) shift);
+	igt_assert(poff_h1 == poff2_h1);
+	igt_assert(poff_h2 == poff2_h2 + shift);
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
 
-	address = intel_bb_get_object_offset(ibb, dst->handle);
-	intel_bb_emit_reloc_fenced(ibb, dst->handle,
-				   I915_GEM_DOMAIN_RENDER,
-				   I915_GEM_DOMAIN_RENDER,
-				   0, address);
-	intel_bb_out(ibb, 0);
-	intel_bb_out(ibb, src->surface[0].stride);
+	gem_close(i915, h1);
+	gem_close(i915, h2);
+	intel_bb_destroy(ibb);
+}
 
-	address = intel_bb_get_object_offset(ibb, src->handle);
-	intel_bb_emit_reloc_fenced(ibb, src->handle,
-				   I915_GEM_DOMAIN_RENDER, 0,
-				   0, address);
+#define WITHIN_RANGE(offset, start, end) \
+	(DECANONICAL(offset) >= start && DECANONICAL(offset) <= end)
+static void object_noreloc(struct buf_ops *bops, enum obj_cache_ops cache_op,
+			   uint8_t allocator_type)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t h1, h2;
+	uint64_t start, end;
+	uint64_t poff_bb, poff_h1, poff_h2;
+	uint64_t poff2_bb, poff2_h1, poff2_h2;
+	uint64_t flags = 0;
+	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
 
-	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
-		igt_assert(ibb->gen >= 6);
-		intel_bb_out(ibb, MI_FLUSH_DW | 2);
-		intel_bb_out(ibb, 0);
-		intel_bb_out(ibb, 0);
-		intel_bb_out(ibb, 0);
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE, allocator_type);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	h1 = gem_create(i915, PAGE_SIZE);
+	h2 = gem_create(i915, PAGE_SIZE);
+
+	intel_allocator_get_address_range(ibb->allocator_handle,
+					  &start, &end);
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	igt_debug("[1] bb presumed offset: 0x%" PRIx64
+		  ", start: %" PRIx64 ", end: %" PRIx64 "\n",
+		  poff_bb, start, end);
+	igt_assert(WITHIN_RANGE(poff_bb, start, end));
+
+	/* Before adding to intel_bb it should return INVALID_ADDRESS */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[1] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[1] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+	igt_assert(poff_h1 == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(poff_h2 == INTEL_BUF_INVALID_ADDRESS);
+
+	intel_bb_add_object(ibb, h1, PAGE_SIZE, poff_h1, 0, true);
+	intel_bb_add_object(ibb, h2, PAGE_SIZE, poff_h2, 0, true);
+
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[2] bb presumed offset: 0x%"PRIx64"\n", poff_bb);
+	igt_debug("[2] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[2] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+	igt_assert(WITHIN_RANGE(poff_bb, start, end));
+	igt_assert(WITHIN_RANGE(poff_h1, start, end));
+	igt_assert(WITHIN_RANGE(poff_h2, start, end));
+
+	intel_bb_emit_bbe(ibb);
+	igt_debug("exec flags: %" PRIX64 "\n", flags);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
+
+	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff2_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff2_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[3] bb presumed offset: 0x%"PRIx64"\n", poff2_bb);
+	igt_debug("[3] h1 presumed offset: 0x%"PRIx64"\n", poff2_h1);
+	igt_debug("[3] h2 presumed offset: 0x%"PRIx64"\n", poff2_h2);
+	igt_assert(poff_h1 == poff2_h1);
+	igt_assert(poff_h2 == poff2_h2);
+
+	igt_debug("purge: %d\n", purge_cache);
+	intel_bb_reset(ibb, purge_cache);
 
-		intel_bb_out(ibb, MI_LOAD_REGISTER_IMM);
-		intel_bb_out(ibb, BCS_SWCTRL);
-		intel_bb_out(ibb, (BCS_SRC_Y | BCS_DST_Y) << 16);
+	/*
+	 * Check if intel-bb cache was purged:
+	 * a) retrieve same address from allocator (works for simple, not random)
+	 * b) passing previous address enters allocator <-> intel_bb cache
+	 *    consistency check path.
+	 */
+	if (purge_cache) {
+		intel_bb_add_object(ibb, h1, PAGE_SIZE,
+				    INTEL_BUF_INVALID_ADDRESS, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2, 0, true);
+	} else {
+		/* Ensure the consistency check will not fail */
+		intel_bb_add_object(ibb, h1, PAGE_SIZE, poff2_h1, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2, 0, true);
 	}
+
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[4] bb presumed offset: 0x%"PRIx64"\n", poff_bb);
+	igt_debug("[4] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[4] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+
+	/* For the simple allocator, or when cache is kept, addresses must match */
+	if (allocator_type == INTEL_ALLOCATOR_SIMPLE || !purge_cache) {
+		igt_assert(poff_h1 == poff2_h1);
+		igt_assert(poff_h2 == poff2_h2);
+	}
+
+	gem_close(i915, h1);
+	gem_close(i915, h2);
+	intel_bb_destroy(ibb);
+}
+
+static void __emit_blit(struct intel_bb *ibb,
+			 struct intel_buf *src, struct intel_buf *dst)
+{
+	intel_bb_emit_blt_copy(ibb,
+			       src, 0, 0, src->surface[0].stride,
+			       dst, 0, 0, dst->surface[0].stride,
+			       intel_buf_width(dst),
+			       intel_buf_height(dst),
+			       dst->bpp);
 }
 
 static void blit(struct buf_ops *bops,
 		 enum reloc_objects reloc_obj,
-		 enum obj_cache_ops cache_op)
+		 enum obj_cache_ops cache_op,
+		 uint8_t allocator_type)
 {
 	int i915 = buf_ops_get_fd(bops);
 	struct intel_bb *ibb;
@@ -372,49 +602,45 @@ static void blit(struct buf_ops *bops,
 	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
 	bool do_relocs = reloc_obj == RELOC ? true : false;
 
-	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
-	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
-
-	if (buf_info) {
-		print_buf(src, "src");
-		print_buf(dst, "dst");
-	}
+	if (!do_relocs)
+		igt_require(gem_uses_full_ppgtt(i915));
 
 	if (do_relocs) {
 		ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);
 	} else {
-		ibb = intel_bb_create(i915, PAGE_SIZE);
+		ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
+						     allocator_type);
 		flags |= I915_EXEC_NO_RELOC;
 	}
 
-	if (ibb->gen >= 6)
-		flags |= I915_EXEC_BLT;
+	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
+	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
+
+	if (buf_info) {
+		print_buf(src, "src");
+		print_buf(dst, "dst");
+	}
 
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
-
-	intel_bb_add_intel_buf(ibb, src, false);
-	intel_bb_add_intel_buf(ibb, dst, true);
-
 	__emit_blit(ibb, src, dst);
 
 	/* We expect initial addresses are zeroed for relocs */
-	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
-	poff_src = intel_bb_get_object_offset(ibb, src->handle);
-	poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
-	igt_debug("bb  presumed offset: 0x%"PRIx64"\n", poff_bb);
-	igt_debug("src presumed offset: 0x%"PRIx64"\n", poff_src);
-	igt_debug("dst presumed offset: 0x%"PRIx64"\n", poff_dst);
 	if (reloc_obj == RELOC) {
+		poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+		poff_src = intel_bb_get_object_offset(ibb, src->handle);
+		poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
+		igt_debug("bb  presumed offset: 0x%"PRIx64"\n", poff_bb);
+		igt_debug("src presumed offset: 0x%"PRIx64"\n", poff_src);
+		igt_debug("dst presumed offset: 0x%"PRIx64"\n", poff_dst);
 		igt_assert(poff_bb == 0);
 		igt_assert(poff_src == 0);
 		igt_assert(poff_dst == 0);
 	}
 
 	intel_bb_emit_bbe(ibb);
-	igt_debug("exec flags: %" PRIX64 "\n", flags);
-	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, true);
+	intel_bb_flush_blit(ibb);
 	check_buf(dst, COLOR_CC);
 
 	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
@@ -423,15 +649,29 @@ static void blit(struct buf_ops *bops,
 
 	intel_bb_reset(ibb, purge_cache);
 
+	/* After purge, offsets are lost and bufs are off the tracking list */
+	if (purge_cache) {
+		src->addr.offset = poff_src;
+		dst->addr.offset = poff_dst;
+	}
+
+	/* Add buffers again, should work both for purge and keep cache */
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	igt_assert_f(poff_src == src->addr.offset,
+		     "prev src addr: %" PRIx64 " <> src addr %" PRIx64 "\n",
+		     poff_src, src->addr.offset);
+	igt_assert_f(poff_dst == dst->addr.offset,
+		     "prev dst addr: %" PRIx64 " <> dst addr %" PRIx64 "\n",
+		     poff_dst, dst->addr.offset);
+
 	fill_buf(src, COLOR_77);
 	fill_buf(dst, COLOR_00);
 
-	if (purge_cache && !do_relocs) {
-		intel_bb_add_intel_buf(ibb, src, false);
-		intel_bb_add_intel_buf(ibb, dst, true);
-	}
-
 	__emit_blit(ibb, src, dst);
+	intel_bb_flush_blit(ibb);
+	check_buf(dst, COLOR_77);
 
 	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
 	poff2_src = intel_bb_get_object_offset(ibb, src->handle);
@@ -455,21 +695,9 @@ static void blit(struct buf_ops *bops,
 	 * we are in full control of our own GTT.
 	 */
 	if (gem_uses_full_ppgtt(i915)) {
-		if (purge_cache) {
-			if (do_relocs) {
-				igt_assert_eq_u64(poff2_bb,  0);
-				igt_assert_eq_u64(poff2_src, 0);
-				igt_assert_eq_u64(poff2_dst, 0);
-			} else {
-				igt_assert_neq_u64(poff_bb, poff2_bb);
-				igt_assert_eq_u64(poff_src, poff2_src);
-				igt_assert_eq_u64(poff_dst, poff2_dst);
-			}
-		} else {
-			igt_assert_eq_u64(poff_bb,  poff2_bb);
-			igt_assert_eq_u64(poff_src, poff2_src);
-			igt_assert_eq_u64(poff_dst, poff2_dst);
-		}
+		igt_assert_eq_u64(poff_bb,  poff2_bb);
+		igt_assert_eq_u64(poff_src, poff2_src);
+		igt_assert_eq_u64(poff_dst, poff2_dst);
 	}
 
 	intel_bb_emit_bbe(ibb);
@@ -636,7 +864,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
 	if (ret != Z_OK) {
 		igt_warn("error compressing, ret: %d\n", ret);
 	} else {
-		igt_info("compressed %" PRIx64 " -> %lu\n",
+		igt_info("compressed %" PRIu64 " -> %lu\n",
 			 buf->surface[0].size, outsize);
 
 		igt_info("--- %s ---\n", name);
@@ -1142,6 +1370,10 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 		gen = intel_gen(intel_get_drm_devid(i915));
 	}
 
+	igt_describe("Ensure reset is possible on fresh bb");
+	igt_subtest("reset-bb")
+		reset_bb(bops);
+
 	igt_subtest("simple-bb")
 		simple_bb(bops, false);
 
@@ -1154,17 +1386,47 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("reset-flags")
 		reset_flags(bops);
 
-	igt_subtest("blit-noreloc-keep-cache")
-		blit(bops, NORELOC, KEEP_CACHE);
+	igt_subtest("add-remove-objects")
+		add_remove_objects(bops);
 
-	igt_subtest("blit-reloc-purge-cache")
-		blit(bops, RELOC, PURGE_CACHE);
+	igt_subtest("destroy-bb")
+		destroy_bb(bops);
 
-	igt_subtest("blit-noreloc-purge-cache")
-		blit(bops, NORELOC, PURGE_CACHE);
+	igt_subtest("object-reloc-purge-cache")
+		object_reloc(bops, PURGE_CACHE);
+
+	igt_subtest("object-reloc-keep-cache")
+		object_reloc(bops, KEEP_CACHE);
+
+	igt_subtest("object-noreloc-purge-cache-simple")
+		object_noreloc(bops, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("object-noreloc-keep-cache-simple")
+		object_noreloc(bops, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("object-noreloc-purge-cache-random")
+		object_noreloc(bops, PURGE_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("object-noreloc-keep-cache-random")
+		object_noreloc(bops, KEEP_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-reloc-purge-cache")
+		blit(bops, RELOC, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
 
 	igt_subtest("blit-reloc-keep-cache")
-		blit(bops, RELOC, KEEP_CACHE);
+		blit(bops, RELOC, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-noreloc-keep-cache-random")
+		blit(bops, NORELOC, KEEP_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-noreloc-purge-cache-random")
+		blit(bops, NORELOC, PURGE_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-noreloc-keep-cache")
+		blit(bops, NORELOC, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-noreloc-purge-cache")
+		blit(bops, NORELOC, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
 
 	igt_subtest("intel-bb-blit-none")
 		do_intel_bb_blit(bops, 10, I915_TILING_NONE);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 22/39] tests/api_intel_bb: Add compressed->compressed copy
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (20 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 21/39] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 23/39] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
                   ` (18 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Check that aux pagetables work when more than one compressed
buffer is added.

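Condensed, the chain of render copies after this patch (signatures as in
the diff below):

render_copy(ibb, &src,  0, 0, width, height, &dst,   0, 0);
/* New hop: compressed -> compressed, stressing the aux pagetables. */
render_copy(ibb, &dst,  0, 0, width, height, &dst2,  0, 0);
render_copy(ibb, &dst2, 0, 0, width, height, &final, 0, 0);

src and final are linear/uncompressed; dst and dst2 are Y-tiled with
render compression, so each needs its own aux pagetable entries.
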
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 918ebd629..dd1a4ac13 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -1260,7 +1260,7 @@ static void render_ccs(struct buf_ops *bops)
 	struct intel_bb *ibb;
 	const int width = 1024;
 	const int height = 1024;
-	struct intel_buf src, dst, final;
+	struct intel_buf src, dst, dst2, final;
 	int i915 = buf_ops_get_fd(bops);
 	uint32_t fails = 0;
 	uint32_t compressed = 0;
@@ -1275,6 +1275,8 @@ static void render_ccs(struct buf_ops *bops)
 			 I915_COMPRESSION_NONE);
 	scratch_buf_init(bops, &dst, width, height, I915_TILING_Y,
 			 I915_COMPRESSION_RENDER);
+	scratch_buf_init(bops, &dst2, width, height, I915_TILING_Y,
+			 I915_COMPRESSION_RENDER);
 	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
 			 I915_COMPRESSION_NONE);
 
@@ -1294,6 +1296,12 @@ static void render_ccs(struct buf_ops *bops)
 	render_copy(ibb,
 		    &dst,
 		    0, 0, width, height,
+		    &dst2,
+		    0, 0);
+
+	render_copy(ibb,
+		    &dst2,
+		    0, 0, width, height,
 		    &final,
 		    0, 0);
 
@@ -1309,12 +1317,15 @@ static void render_ccs(struct buf_ops *bops)
 	if (write_png) {
 		intel_buf_write_to_png(&src, "render-ccs-src.png");
 		intel_buf_write_to_png(&dst, "render-ccs-dst.png");
+		intel_buf_write_to_png(&dst2, "render-ccs-dst2.png");
 		intel_buf_write_aux_to_png(&dst, "render-ccs-dst-aux.png");
+		intel_buf_write_aux_to_png(&dst2, "render-ccs-dst2-aux.png");
 		intel_buf_write_to_png(&final, "render-ccs-final.png");
 	}
 
 	intel_buf_close(bops, &src);
 	intel_buf_close(bops, &dst);
+	intel_buf_close(bops, &dst2);
 	intel_buf_close(bops, &final);
 
 	igt_assert_f(fails == 0, "render-ccs fails: %d\n", fails);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 23/39] tests/api_intel_bb: Add purge-bb test
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (21 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 22/39] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 24/39] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
                   ` (17 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Check that the acquired address is the same after purging the bb. For
relocations we expect 0 twice; for the allocator we expect release and
a subsequent alloc to return the same address.

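The core of the check, condensed from the test body below:

intel_bb_add_intel_buf(ibb, buf, false);
offset0 = buf->addr.offset;

intel_bb_reset(ibb, true);			/* purge the cache */
buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;

intel_bb_add_intel_buf(ibb, buf, false);
igt_assert(buf->addr.offset == offset0);	/* same address again */
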
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index dd1a4ac13..3bdf3f1d5 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -142,6 +142,33 @@ static void reset_bb(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void purge_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_buf *buf;
+	struct intel_bb *ibb;
+	uint64_t offset0, offset1;
+
+	buf = intel_buf_create(bops, 512, 512, 32, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	ibb = intel_bb_create(i915, 4096);
+	intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset0 = buf->addr.offset;
+
+	intel_bb_reset(ibb, true);
+	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset1 = buf->addr.offset;
+
+	igt_assert(offset0 == offset1);
+
+	intel_buf_destroy(buf);
+	intel_bb_destroy(ibb);
+}
+
 static void simple_bb(struct buf_ops *bops, bool use_context)
 {
 	int i915 = buf_ops_get_fd(bops);
@@ -1385,6 +1412,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("reset-bb")
 		reset_bb(bops);
 
+	igt_subtest_f("purge-bb")
+		purge_bb(bops);
+
 	igt_subtest("simple-bb")
 		simple_bb(bops, false);
 
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 24/39] tests/api_intel_bb: Add simple intel-bb which uses allocator
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (22 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 23/39] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 25/39] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
                   ` (16 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

A simple test which uses the allocator and can easily be copy-pasted
wherever intel-bb is used for batchbuffer creation.

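The skeleton to copy, condensed from the test body below:

ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
				     INTEL_ALLOCATOR_SIMPLE);
intel_bb_add_intel_buf(ibb, src, false);	/* read-only */
intel_bb_add_intel_buf(ibb, dst, true);		/* written to */
intel_bb_copy_intel_buf(ibb, dst, src, 4096);	/* full-buffer copy */
intel_bb_destroy(ibb);
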
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 3bdf3f1d5..9c62f71e8 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -209,6 +209,36 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 	intel_bb_destroy(ibb);
 }
 
+static void bb_with_allocator(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst;
+	uint32_t ctx = 0;
+
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	intel_bb_remove_intel_buf(ibb, src);
+	intel_bb_remove_intel_buf(ibb, dst);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
 /*
  * Make sure we lead to realloc in the intel_bb.
  */
@@ -1421,6 +1451,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("simple-bb-ctx")
 		simple_bb(bops, true);
 
+	igt_subtest("bb-with-allocator")
+		bb_with_allocator(bops);
+
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 25/39] tests/api_intel_bb: Use allocator in delta-check test
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (23 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 24/39] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 26/39] tests/api_intel_bb: Check switching vm in intel-bb Zbigniew Kempczyński
                   ` (15 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

We want to use the address returned from emit_reloc() without calling
into the kernel relocation path. Switch the test to an allocator-backed
intel-bb so we fully control the addresses passed in execbuf.

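A sketch of the pattern this enables (all calls appear elsewhere in this
file); with the allocator backend the address recorded by emit_reloc()
is final and no kernel relocation runs:

ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
				     INTEL_ALLOCATOR_SIMPLE);
intel_bb_add_intel_buf(ibb, buf, true);
addr = intel_bb_get_object_offset(ibb, buf->handle);
intel_bb_emit_reloc_fenced(ibb, buf->handle,
			   I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER,
			   0 /* delta */, addr);
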
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 9c62f71e8..ccb78d928 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -1126,7 +1126,8 @@ static void delta_check(struct buf_ops *bops)
 	uint64_t offset;
 	bool supports_48bit;
 
-	ibb = intel_bb_create(i915, PAGE_SIZE);
+	ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
 	supports_48bit = ibb->supports_48b_address;
 	if (!supports_48bit)
 		intel_bb_destroy(ibb);
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 26/39] tests/api_intel_bb: Check switching vm in intel-bb
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (24 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 25/39] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 27/39] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
                   ` (14 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

For more vm-controlled scenarios we have to support changing the vm in
intel-bb. The test verifies the allocator can provide two vms which can
be assigned to and used within the intel-bb context.

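The switching pattern, condensed from the test below:

vm = intel_allocator_open_vm(i915, vm_id, INTEL_ALLOCATOR_SIMPLE);
prev_vm = intel_bb_assign_vm(ibb, vm, vm_id);	/* switch to the new vm */
/* ... blits now get offsets from the new vm's allocator ... */
intel_bb_assign_vm(ibb, prev_vm, 0);		/* restore the default vm */
intel_allocator_close(vm);
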
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_bb.c | 105 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 105 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index ccb78d928..eafa856d0 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -36,6 +36,7 @@
 #include <glib.h>
 #include <zlib.h>
 #include "intel_bufops.h"
+#include "i915/gem_vm.h"
 
 #define PAGE_SIZE 4096
 
@@ -239,6 +240,107 @@ static void bb_with_allocator(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void bb_with_vm(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct drm_i915_gem_context_param arg = {
+		.param = I915_CONTEXT_PARAM_VM,
+	};
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst, *gap;
+	uint32_t ctx = 0, vm_id1, vm_id2;
+	uint64_t prev_vm, vm;
+	uint64_t src_addr[5], dst_addr[5];
+
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	gap = intel_buf_create(bops, 4096, 128, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+
+	/* vm for second blit */
+	vm_id1 = gem_vm_create(i915);
+
+	/* Get vm_id for default vm */
+	arg.ctx_id = ctx;
+	gem_context_get_param(i915, &arg);
+	vm_id2 = arg.value;
+
+	igt_debug("Vm_id1: %u\n", vm_id1);
+	igt_debug("Vm_id2: %u\n", vm_id2);
+
+	/* First blit without calling setparam */
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[0] = src->addr.offset;
+	dst_addr[0] = dst->addr.offset;
+	igt_debug("step1: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[0], (long long) dst_addr[0]);
+
+	/* Open new allocator with vm_id */
+	vm = intel_allocator_open_vm(i915, vm_id1, INTEL_ALLOCATOR_SIMPLE);
+	prev_vm = intel_bb_assign_vm(ibb, vm, vm_id1);
+
+	intel_bb_add_intel_buf(ibb, gap, false);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[1] = src->addr.offset;
+	dst_addr[1] = dst->addr.offset;
+	igt_debug("step2: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[1], (long long) dst_addr[1]);
+
+	/* Back with default vm */
+	intel_bb_assign_vm(ibb, prev_vm, vm_id2);
+	intel_bb_add_intel_buf(ibb, gap, false);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[2] = src->addr.offset;
+	dst_addr[2] = dst->addr.offset;
+	igt_debug("step3: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[2], (long long) dst_addr[2]);
+
+	/* And exchange one more time */
+	intel_bb_assign_vm(ibb, vm, vm_id1);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[3] = src->addr.offset;
+	dst_addr[3] = dst->addr.offset;
+	igt_debug("step4: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[3], (long long) dst_addr[3]);
+
+	/* Back with default vm */
+	gem_vm_destroy(i915, vm_id1);
+	gem_vm_destroy(i915, vm_id2);
+	intel_bb_assign_vm(ibb, prev_vm, 0);
+
+	/* We can close it after assign previous vm to ibb */
+	intel_allocator_close(vm);
+
+	/* Try default vm still works */
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[4] = src->addr.offset;
+	dst_addr[4] = dst->addr.offset;
+	igt_debug("step5: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[4], (long long) dst_addr[4]);
+
+	/* Addresses should match for vm and prev_vm blits */
+	igt_assert_eq(src_addr[0], src_addr[2]);
+	igt_assert_eq(dst_addr[0], dst_addr[2]);
+	igt_assert_eq(src_addr[1], src_addr[3]);
+	igt_assert_eq(dst_addr[1], dst_addr[3]);
+	igt_assert_eq(src_addr[2], src_addr[4]);
+	igt_assert_eq(dst_addr[2], dst_addr[4]);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst);
+	intel_buf_destroy(gap);
+	intel_bb_destroy(ibb);
+}
+
 /*
  * Make sure we lead to realloc in the intel_bb.
  */
@@ -1455,6 +1557,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("bb-with-allocator")
 		bb_with_allocator(bops);
 
+	igt_subtest("bb-with-vm")
+		bb_with_vm(bops);
+
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 27/39] tests/api_intel_allocator: Simple allocator test suite
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (25 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 26/39] tests/api_intel_bb: Check switching vm in intel-bb Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 28/39] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
                   ` (13 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

We want to verify the allocator works as expected, so try to exercise
it thoroughly.

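The core API exercised throughout the suite, condensed:

ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);

offset = intel_allocator_alloc(ahnd, handle, size, alignment);
igt_assert(intel_allocator_is_allocated(ahnd, handle, size, offset));

igt_assert(intel_allocator_free(ahnd, handle));
igt_assert_eq(intel_allocator_close(ahnd), true);	/* true == empty */
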
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_allocator.c | 573 +++++++++++++++++++++++++++++++
 tests/meson.build                |   1 +
 2 files changed, 574 insertions(+)
 create mode 100644 tests/i915/api_intel_allocator.c

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
new file mode 100644
index 000000000..e59b17297
--- /dev/null
+++ b/tests/i915/api_intel_allocator.c
@@ -0,0 +1,573 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <stdatomic.h>
+#include "i915/gem.h"
+#include "igt.h"
+#include "igt_aux.h"
+#include "intel_allocator.h"
+
+#define OBJ_SIZE 1024
+
+struct test_obj {
+	uint32_t handle;
+	uint64_t offset;
+	uint64_t size;
+};
+
+static _Atomic(uint32_t) next_handle;
+
+static inline uint32_t gem_handle_gen(void)
+{
+	return atomic_fetch_add(&next_handle, 1);
+}
+
+static void alloc_simple(int fd)
+{
+	uint64_t ahnd;
+	uint64_t offset0, offset1, size = 0x1000, align = 0x1000, start, end;
+	bool is_allocated, freed;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	offset0 = intel_allocator_alloc(ahnd, 1, size, align);
+	offset1 = intel_allocator_alloc(ahnd, 1, size, align);
+	igt_assert(offset0 == offset1);
+
+	is_allocated = intel_allocator_is_allocated(ahnd, 1, size, offset0);
+	igt_assert(is_allocated);
+
+	freed = intel_allocator_free(ahnd, 1);
+	igt_assert(freed);
+
+	is_allocated = intel_allocator_is_allocated(ahnd, 1, size, offset0);
+	igt_assert(!is_allocated);
+
+	freed = intel_allocator_free(ahnd, 1);
+	igt_assert(!freed);
+
+	intel_allocator_get_address_range(ahnd, &start, &end);
+	offset0 = intel_allocator_alloc(ahnd, 1, end - start, 0);
+	offset1 = __intel_allocator_alloc(ahnd, 2, 4096, 0);
+	igt_assert(offset1 == ALLOC_INVALID_ADDRESS);
+	intel_allocator_free(ahnd, 1);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static void reserve_simple(int fd)
+{
+	uint64_t ahnd, start, size = 0x1000;
+	bool reserved, unreserved;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	intel_allocator_get_address_range(ahnd, &start, NULL);
+
+	reserved = intel_allocator_reserve(ahnd, 0, size, start);
+	igt_assert(reserved);
+
+	reserved = intel_allocator_is_reserved(ahnd, size, start);
+	igt_assert(reserved);
+
+	reserved = intel_allocator_reserve(ahnd, 0, size, start);
+	igt_assert(!reserved);
+
+	unreserved = intel_allocator_unreserve(ahnd, 0, size, start);
+	igt_assert(unreserved);
+
+	reserved = intel_allocator_is_reserved(ahnd, size, start);
+	igt_assert(!reserved);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static void reserve(int fd, uint8_t type)
+{
+	struct test_obj obj;
+	uint64_t ahnd, offset = 0x40000, size = 0x1000;
+
+	ahnd = intel_allocator_open(fd, 0, type);
+
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, size, offset), true);
+	/* try overlapping won't succeed */
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, size, offset + size/2), false);
+
+	obj.handle = gem_handle_gen();
+	obj.size = OBJ_SIZE;
+	obj.offset = intel_allocator_alloc(ahnd, obj.handle, obj.size, 0);
+
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, obj.size, obj.offset), false);
+	intel_allocator_free(ahnd, obj.handle);
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, obj.size, obj.offset), true);
+
+	igt_assert_eq(intel_allocator_unreserve(ahnd, 0, obj.size, obj.offset), true);
+	igt_assert_eq(intel_allocator_unreserve(ahnd, 0, size, offset), true);
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, size, offset + size/2), true);
+	igt_assert_eq(intel_allocator_unreserve(ahnd, 0, size, offset + size/2), true);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static bool overlaps(struct test_obj *buf1, struct test_obj *buf2)
+{
+	uint64_t begin1 = buf1->offset;
+	uint64_t end1 = buf1->offset + buf1->size;
+	uint64_t begin2 = buf2->offset;
+	uint64_t end2 = buf2->offset + buf2->size;
+
+	return begin1 < end2 && begin2 < end1;
+}
+
+static void basic_alloc(int fd, int cnt, uint8_t type)
+{
+	struct test_obj *obj;
+	uint64_t ahnd;
+	int i, j;
+
+	ahnd = intel_allocator_open(fd, 0, type);
+	obj = malloc(sizeof(struct test_obj) * cnt);
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("allocating objects: ", i, cnt);
+		obj[i].handle = gem_handle_gen();
+		obj[i].size = OBJ_SIZE;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      obj[i].size, 4096);
+		igt_assert_eq(obj[i].offset % 4096, 0);
+	}
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("check overlapping: ", i, cnt);
+
+		if (type == INTEL_ALLOCATOR_RANDOM)
+			continue;
+
+		for (j = 0; j < cnt; j++) {
+			if (j == i)
+				continue;
+			igt_assert(!overlaps(&obj[i], &obj[j]));
+		}
+	}
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("freeing objects: ", i, cnt);
+		intel_allocator_free(ahnd, obj[i].handle);
+	}
+
+	free(obj);
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static void reuse(int fd, uint8_t type)
+{
+	struct test_obj obj[128], tmp;
+	uint64_t ahnd, prev_offset;
+	int i;
+
+	ahnd = intel_allocator_open(fd, 0, type);
+
+	for (i = 0; i < 128; i++) {
+		obj[i].handle = gem_handle_gen();
+		obj[i].size = OBJ_SIZE;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      obj[i].size, 0x40);
+	}
+
+	/* check simple reuse */
+	for (i = 0; i < 128; i++) {
+		prev_offset = obj[i].offset;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      obj[i].size, 0);
+		igt_assert(prev_offset == obj[i].offset);
+	}
+	i--;
+
+	/* free previously allocated bo */
+	intel_allocator_free(ahnd, obj[i].handle);
+	/* alloc different buffer to fill freed hole */
+	tmp.handle = gem_handle_gen();
+	tmp.offset = intel_allocator_alloc(ahnd, tmp.handle, OBJ_SIZE, 0);
+	igt_assert(prev_offset == tmp.offset);
+
+	obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+					      obj[i].size, 0);
+	igt_assert(prev_offset != obj[i].offset);
+	intel_allocator_free(ahnd, tmp.handle);
+
+	for (i = 0; i < 128; i++)
+		intel_allocator_free(ahnd, obj[i].handle);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+struct ial_thread_args {
+	uint64_t ahnd;
+	pthread_t thread;
+	uint32_t *handles;
+	uint64_t *offsets;
+	uint32_t count;
+	int threads;
+	int idx;
+};
+
+static void *alloc_bo_in_thread(void *arg)
+{
+	struct ial_thread_args *a = arg;
+	int i;
+
+	for (i = a->idx; i < a->count; i += a->threads) {
+		a->handles[i] = gem_handle_gen();
+		a->offsets[i] = intel_allocator_alloc(a->ahnd, a->handles[i], OBJ_SIZE,
+						      1UL << ((random() % 20) + 1));
+	}
+
+	return NULL;
+}
+
+static void *free_bo_in_thread(void *arg)
+{
+	struct ial_thread_args *a = arg;
+	int i;
+
+	for (i = (a->idx + 1) % a->threads; i < a->count; i += a->threads)
+		intel_allocator_free(a->ahnd, a->handles[i]);
+
+	return NULL;
+}
+
+#define THREADS 6
+
+static void parallel_one(int fd, uint8_t type)
+{
+	struct ial_thread_args a[THREADS];
+	uint32_t *handles;
+	uint64_t ahnd, *offsets;
+	int count, i;
+
+	srandom(0xdeadbeef);
+	ahnd = intel_allocator_open(fd, 0, type);
+	count = 1UL << 12;
+
+	handles = malloc(sizeof(uint32_t) * count);
+	offsets = calloc(1, sizeof(uint64_t) * count);
+
+	for (i = 0; i < THREADS; i++) {
+		a[i].ahnd = ahnd;
+		a[i].handles = handles;
+		a[i].offsets = offsets;
+		a[i].count = count;
+		a[i].threads = THREADS;
+		a[i].idx = i;
+		pthread_create(&a[i].thread, NULL, alloc_bo_in_thread, &a[i]);
+	}
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(a[i].thread, NULL);
+
+	/* Check if all objects are allocated */
+	for (i = 0; i < count; i++) {
+		/* Reloc + random allocators don't have state. */
+		if (type == INTEL_ALLOCATOR_RELOC || type == INTEL_ALLOCATOR_RANDOM)
+			break;
+
+		igt_assert_eq(offsets[i],
+			      intel_allocator_alloc(a->ahnd, handles[i], OBJ_SIZE, 0));
+	}
+
+	for (i = 0; i < THREADS; i++)
+		pthread_create(&a[i].thread, NULL, free_bo_in_thread, &a[i]);
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(a[i].thread, NULL);
+
+	free(handles);
+	free(offsets);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+#define SIMPLE_GROUP_ALLOCS 8
+static void __simple_allocs(int fd)
+{
+	uint32_t handles[SIMPLE_GROUP_ALLOCS];
+	uint64_t ahnd;
+	uint32_t ctx;
+	int i;
+
+	ctx = rand() % 2;
+	ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
+
+	for (i = 0; i < SIMPLE_GROUP_ALLOCS; i++) {
+		uint32_t size;
+
+		size = (rand() % 4 + 1) * 0x1000;
+		handles[i] = gem_create(fd, size);
+		intel_allocator_alloc(ahnd, handles[i], size, 0x1000);
+	}
+
+	for (i = 0; i < SIMPLE_GROUP_ALLOCS; i++) {
+		igt_assert_f(intel_allocator_free(ahnd, handles[i]) == 1,
+			     "Error freeing handle: %u\n", handles[i]);
+		gem_close(fd, handles[i]);
+	}
+
+	intel_allocator_close(ahnd);
+}
+
+static void fork_simple_once(int fd)
+{
+	intel_allocator_multiprocess_start();
+
+	igt_fork(child, 1)
+		__simple_allocs(fd);
+
+	igt_waitchildren();
+
+	intel_allocator_multiprocess_stop();
+}
+
+#define SIMPLE_TIMEOUT 5
+static void *__fork_simple_thread(void *data)
+{
+	int fd = (int) (long) data;
+
+	igt_until_timeout(SIMPLE_TIMEOUT) {
+		__simple_allocs(fd);
+	}
+
+	return NULL;
+}
+
+static void fork_simple_stress(int fd, bool two_level_inception)
+{
+	pthread_t thread0, thread1;
+	uint64_t ahnd0, ahnd1;
+	bool are_empty;
+
+	__intel_allocator_multiprocess_prepare();
+
+	igt_fork(child, 8) {
+		if (two_level_inception) {
+			pthread_create(&thread0, NULL, __fork_simple_thread,
+				       (void *) (long) fd);
+			pthread_create(&thread1, NULL, __fork_simple_thread,
+				       (void *) (long) fd);
+		}
+
+		igt_until_timeout(SIMPLE_TIMEOUT) {
+			__simple_allocs(fd);
+		}
+
+		if (two_level_inception) {
+			pthread_join(thread0, NULL);
+			pthread_join(thread1, NULL);
+		}
+	}
+
+	pthread_create(&thread0, NULL, __fork_simple_thread, (void *) (long) fd);
+	pthread_create(&thread1, NULL, __fork_simple_thread, (void *) (long) fd);
+
+	ahnd0 = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+
+	__intel_allocator_multiprocess_start();
+
+	igt_waitchildren();
+
+	pthread_join(thread0, NULL);
+	pthread_join(thread1, NULL);
+
+	are_empty = intel_allocator_close(ahnd0);
+	are_empty &= intel_allocator_close(ahnd1);
+
+	intel_allocator_multiprocess_stop();
+
+	igt_assert_f(are_empty, "Allocators were not emptied\n");
+}
+
+static void __reopen_allocs(int fd1, int fd2, bool check)
+{
+	uint64_t ahnd0, ahnd1, ahnd2;
+
+	ahnd0 = intel_allocator_open(fd1, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd2, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd2 = intel_allocator_open(fd2, 0, INTEL_ALLOCATOR_SIMPLE);
+	igt_assert(ahnd0 != ahnd1);
+	igt_assert(ahnd1 != ahnd2);
+
+	/* in fork mode we can have more references, so skip check */
+	if (!check) {
+		intel_allocator_close(ahnd0);
+		intel_allocator_close(ahnd1);
+		intel_allocator_close(ahnd2);
+	} else {
+		igt_assert_eq(intel_allocator_close(ahnd0), true);
+		igt_assert_eq(intel_allocator_close(ahnd1), false);
+		igt_assert_eq(intel_allocator_close(ahnd2), true);
+	}
+}
+
+static void reopen(int fd)
+{
+	int fd2;
+
+	igt_require_gem(fd);
+
+	fd2 = gem_reopen_driver(fd);
+
+	__reopen_allocs(fd, fd2, true);
+
+	close(fd2);
+}
+
+#define REOPEN_TIMEOUT 3
+static void reopen_fork(int fd)
+{
+	int fd2;
+
+	igt_require_gem(fd);
+
+	intel_allocator_multiprocess_start();
+
+	fd2 = gem_reopen_driver(fd);
+
+	igt_fork(child, 2) {
+		igt_until_timeout(REOPEN_TIMEOUT)
+			__reopen_allocs(fd, fd2, false);
+	}
+	igt_until_timeout(REOPEN_TIMEOUT)
+		__reopen_allocs(fd, fd2, false);
+
+	igt_waitchildren();
+
+	/* Check references at the end */
+	__reopen_allocs(fd, fd2, true);
+
+	close(fd2);
+
+	intel_allocator_multiprocess_stop();
+}
+
+static void open_vm(int fd)
+{
+	uint64_t ahnd[4], offset[4], size = 0x1000;
+	int i, n = ARRAY_SIZE(ahnd);
+
+	ahnd[0] = intel_allocator_open_vm(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+	ahnd[1] = intel_allocator_open_vm(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+	ahnd[2] = intel_allocator_open_vm_as(ahnd[1], 2);
+	ahnd[3] = intel_allocator_open(fd, 3, INTEL_ALLOCATOR_SIMPLE);
+
+	offset[0] = intel_allocator_alloc(ahnd[0], 1, size, 0);
+	offset[1] = intel_allocator_alloc(ahnd[1], 2, size, 0);
+	igt_assert(offset[0] != offset[1]);
+
+	offset[2] = intel_allocator_alloc(ahnd[2], 3, size, 0);
+	igt_assert(offset[0] != offset[2] && offset[1] != offset[2]);
+
+	offset[3] = intel_allocator_alloc(ahnd[3], 1, size, 0);
+	igt_assert(offset[0] == offset[3]);
+
+	/*
+	 * As ahnd[0-2] lead to same allocator check can we free all handles
+	 * using selected ahnd.
+	 */
+	intel_allocator_free(ahnd[0], 1);
+	intel_allocator_free(ahnd[0], 2);
+	intel_allocator_free(ahnd[0], 3);
+	intel_allocator_free(ahnd[3], 1);
+
+	for (i = 0; i < n - 1; i++)
+		igt_assert_eq(intel_allocator_close(ahnd[i]), (i == n - 2));
+	igt_assert_eq(intel_allocator_close(ahnd[n-1]), true);
+}
+
+struct allocators {
+	const char *name;
+	uint8_t type;
+} als[] = {
+	{"simple", INTEL_ALLOCATOR_SIMPLE},
+	{"reloc",  INTEL_ALLOCATOR_RELOC},
+	{"random", INTEL_ALLOCATOR_RANDOM},
+	{NULL, 0},
+};
+
+igt_main
+{
+	int fd;
+	struct allocators *a;
+
+	igt_fixture {
+		fd = drm_open_driver(DRIVER_INTEL);
+		atomic_init(&next_handle, 1);
+		srandom(0xdeadbeef);
+	}
+
+	igt_subtest_f("alloc-simple")
+		alloc_simple(fd);
+
+	igt_subtest_f("reserve-simple")
+		reserve_simple(fd);
+
+	igt_subtest_f("reuse")
+		reuse(fd, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest_f("reserve")
+		reserve(fd, INTEL_ALLOCATOR_SIMPLE);
+
+	for (a = als; a->name; a++) {
+		igt_subtest_with_dynamic_f("%s-allocator", a->name) {
+			igt_dynamic("basic")
+				basic_alloc(fd, 1UL << 8, a->type);
+
+			igt_dynamic("parallel-one")
+				parallel_one(fd, a->type);
+
+			igt_dynamic("print")
+				basic_alloc(fd, 1UL << 2, a->type);
+
+			if (a->type == INTEL_ALLOCATOR_SIMPLE) {
+				igt_dynamic("reuse")
+					reuse(fd, a->type);
+
+				igt_dynamic("reserve")
+					reserve(fd, a->type);
+			}
+		}
+	}
+
+	igt_subtest_f("fork-simple-once")
+		fork_simple_once(fd);
+
+	igt_subtest_f("fork-simple-stress")
+		fork_simple_stress(fd, false);
+
+	igt_subtest_f("fork-simple-stress-signal") {
+		igt_fork_signal_helper();
+		fork_simple_stress(fd, false);
+		igt_stop_signal_helper();
+	}
+
+	igt_subtest_f("two-level-inception")
+		fork_simple_stress(fd, true);
+
+	igt_subtest_f("two-level-inception-interruptible") {
+		igt_fork_signal_helper();
+		fork_simple_stress(fd, true);
+		igt_stop_signal_helper();
+	}
+
+	igt_subtest_f("reopen")
+		reopen(fd);
+
+	igt_subtest_f("reopen-fork")
+		reopen_fork(fd);
+
+	igt_subtest_f("open-vm")
+		open_vm(fd);
+
+	igt_fixture
+		close(fd);
+}
diff --git a/tests/meson.build b/tests/meson.build
index 54a1a3c7c..bcb7dfb88 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -111,6 +111,7 @@ test_progs = [
 ]
 
 i915_progs = [
+	'api_intel_allocator',
 	'api_intel_bb',
 	'gen3_mixed_blits',
 	'gen3_render_linear_blits',
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 28/39] tests/api_intel_allocator: Add execbuf with allocator example
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (26 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 27/39] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 29/39] tests/api_intel_allocator: Verify child can use its standalone allocator Zbigniew Kempczyński
                   ` (12 subsequent siblings)
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

A simplified version of the non-fork test which can be used as a
copy-paste template. It uses a blit to show how to prepare a batch
with addresses acquired from the allocator.

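The key softpin steps, condensed from the test below - each object gets
its offset from the allocator, canonicalized and pinned:

offset = intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
object[i].offset = CANONICAL(offset);	/* execbuf expects canonical form */
object[i].flags = EXEC_OBJECT_PINNED;	/* use our address as-is */
if ((gtt_size - 1) >> 32)
	object[i].flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
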
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_allocator.c | 91 ++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index e59b17297..95a5082aa 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -484,6 +484,94 @@ static void open_vm(int fd)
 	igt_assert_eq(intel_allocator_close(ahnd[n-1]), true);
 }
 
+/* Simple execbuf which uses allocator, non-fork mode */
+static void execbuf_with_allocator(int fd)
+{
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 object[3];
+	uint64_t ahnd, sz = 4096, gtt_size;
+	unsigned int flags = EXEC_OBJECT_PINNED;
+	uint32_t *ptr, batch[32], copied;
+	int gen = intel_gen(intel_get_drm_devid(fd));
+	int i;
+	const uint32_t magic = 0x900df00d;
+
+	igt_require(gem_uses_full_ppgtt(fd));
+
+	gtt_size = gem_aperture_size(fd);
+	if ((gtt_size - 1) >> 32)
+		flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	memset(object, 0, sizeof(object));
+
+	/* i == 0 (src), i == 1 (dst), i == 2 (batch) */
+	for (i = 0; i < ARRAY_SIZE(object); i++) {
+		uint64_t offset;
+
+		object[i].handle = gem_create(fd, sz);
+		offset = intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
+		object[i].offset = CANONICAL(offset);
+
+		object[i].flags = flags;
+		if (i == 1)
+			object[i].flags |= EXEC_OBJECT_WRITE;
+	}
+
+	/* Prepare src data */
+	ptr = gem_mmap__device_coherent(fd, object[0].handle, 0, sz, PROT_WRITE);
+	ptr[0] = magic;
+	gem_munmap(ptr, sz);
+
+	/* Blit src -> dst */
+	i = 0;
+	batch[i++] = XY_SRC_COPY_BLT_CMD |
+		  XY_SRC_COPY_BLT_WRITE_ALPHA |
+		  XY_SRC_COPY_BLT_WRITE_RGB;
+	if (gen >= 8)
+		batch[i - 1] |= 8;
+	else
+		batch[i - 1] |= 6;
+
+	batch[i++] = (3 << 24) | (0xcc << 16) | 4;
+	batch[i++] = 0;
+	batch[i++] = (1 << 16) | 4;
+	batch[i++] = object[1].offset;
+	if (gen >= 8)
+		batch[i++] = object[1].offset >> 32;
+	batch[i++] = 0;
+	batch[i++] = 4;
+	batch[i++] = object[0].offset;
+	if (gen >= 8)
+		batch[i++] = object[0].offset >> 32;
+	batch[i++] = MI_BATCH_BUFFER_END;
+	batch[i++] = MI_NOOP;
+
+	gem_write(fd, object[2].handle, 0, batch, i * sizeof(batch[0]));
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(object);
+	execbuf.buffer_count = 3;
+	if (gen >= 6)
+		execbuf.flags = I915_EXEC_BLT;
+	gem_execbuf(fd, &execbuf);
+	gem_sync(fd, object[1].handle);
+
+	/* Check dst data */
+	ptr = gem_mmap__device_coherent(fd, object[1].handle, 0, sz, PROT_READ);
+	copied = ptr[0];
+	gem_munmap(ptr, sz);
+
+	for (i = 0; i < ARRAY_SIZE(object); i++) {
+		igt_assert(intel_allocator_free(ahnd, object[i].handle));
+		gem_close(fd, object[i].handle);
+	}
+
+	igt_assert(copied == magic);
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
 struct allocators {
 	const char *name;
 	uint8_t type;
@@ -568,6 +656,9 @@ igt_main
 	igt_subtest_f("open-vm")
 		open_vm(fd);
 
+	igt_subtest_f("execbuf-with-allocator")
+		execbuf_with_allocator(fd);
+
 	igt_fixture
 		close(fd);
 }
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 29/39] tests/api_intel_allocator: Verify child can use its standalone allocator
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Sometimes we don't want to use the common allocator provided by the
main IGT process. Verify a child is able to "detach" from it and
initialize its own standalone version of the allocator.
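
The "detach" boils down to a single call in the forked child (a sketch
matching the test below; child 1 goes standalone, child 2 keeps using
the parent's allocator):

	igt_fork(child, 2) {
		if (child == 1)
			intel_allocator_init(); /* standalone allocator from here on */

		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
		offset = intel_allocator_alloc(ahnd, child_handle, size, 0);
		intel_allocator_free(ahnd, child_handle);
		intel_allocator_close(ahnd);
	}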

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/api_intel_allocator.c | 42 ++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index 95a5082aa..7ff92a174 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -288,6 +288,45 @@ static void parallel_one(int fd, uint8_t type)
 	igt_assert_eq(intel_allocator_close(ahnd), true);
 }
 
+static void standalone(int fd)
+{
+	uint64_t ahnd, offset, size = 4096;
+	uint32_t handle = 1, child_handle = 2;
+	uint64_t *shared;
+
+	shared = mmap(0, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+
+	intel_allocator_multiprocess_start();
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	offset = intel_allocator_alloc(ahnd, handle, size, 0);
+
+	igt_fork(child, 2) {
+		/*
+		 * Use standalone allocator for child 1, detach from parent,
+		 * child 2 use allocator from parent.
+		 */
+		if (child == 1)
+			intel_allocator_init();
+
+		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+		shared[child] = intel_allocator_alloc(ahnd, child_handle, size, 0);
+
+		intel_allocator_free(ahnd, child_handle);
+		intel_allocator_close(ahnd);
+	}
+	igt_waitchildren();
+	igt_assert_eq(offset, shared[1]);
+	igt_assert_neq(offset, shared[2]);
+
+	intel_allocator_free(ahnd, handle);
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+
+	intel_allocator_multiprocess_stop();
+
+	munmap(shared, 4096);
+}
+
 #define SIMPLE_GROUP_ALLOCS 8
 static void __simple_allocs(int fd)
 {
@@ -626,6 +665,9 @@ igt_main
 		}
 	}
 
+	igt_subtest_f("standalone")
+		standalone(fd);
+
 	igt_subtest_f("fork-simple-once")
 		fork_simple_once(fd);
 
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 30/39] tests/gem_softpin: Verify allocator and execbuf pair work together
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Exercise that object offsets produced by the allocator for execbuf are
valid and that no EINVAL/ENOSPC occurs. Check it also works properly
for multiprocess allocations/execbufs on the same context. As we're in
full-ppgtt, we also disable softpin to verify that the offsets
produced by the allocator are valid and the kernel doesn't want to
relocate them.

Add allocator-basic and allocator-basic-reserve to BAT.
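
For reference, the __reserve()/__unreserve() helpers below are thin
wrappers around the allocator's reservation API, which pins an address
range so regular allocations cannot claim it (a minimal sketch):

	intel_allocator_reserve(ahnd, handle, size, offset);
	/* ... execbufs whose objects must not land in the reserved range ... */
	intel_allocator_unreserve(ahnd, handle, size, offset);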

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/gem_softpin.c              | 194 ++++++++++++++++++++++++++
 tests/intel-ci/fast-feedback.testlist |   2 +
 2 files changed, 196 insertions(+)

diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index aba060a42..c3bfd10a9 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -28,6 +28,7 @@
 
 #include "i915/gem.h"
 #include "igt.h"
+#include "intel_allocator.h"
 
 #define EXEC_OBJECT_PINNED	(1<<4)
 #define EXEC_OBJECT_SUPPORTS_48B_ADDRESS (1<<3)
@@ -697,6 +698,184 @@ static void test_noreloc(int fd, enum sleep sleep, unsigned flags)
 		gem_close(fd, object[i].handle);
 }
 
+static void __reserve(uint64_t ahnd, int i915, bool pinned,
+		      struct drm_i915_gem_exec_object2 *objects,
+		      int num_obj, uint64_t size)
+{
+	uint64_t gtt = gem_aperture_size(i915);
+	unsigned int flags;
+	int i;
+
+	igt_assert(num_obj > 1);
+
+	flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	if (pinned)
+		flags |= EXEC_OBJECT_PINNED;
+
+	memset(objects, 0, sizeof(*objects) * num_obj);
+
+	for (i = 0; i < num_obj; i++) {
+		objects[i].handle = gem_create(i915, size);
+		if (i < num_obj/2)
+			objects[i].offset = i * size;
+		else
+			objects[i].offset = gtt - (i + 1 - num_obj/2) * size;
+		objects[i].flags = flags;
+
+		intel_allocator_reserve(ahnd, objects[i].handle,
+					size, objects[i].offset);
+		igt_debug("Reserve i: %d, handle: %u, offset: %llx\n", i,
+			  objects[i].handle, (long long) objects[i].offset);
+	}
+}
+
+static void __unreserve(uint64_t ahnd, int i915,
+			struct drm_i915_gem_exec_object2 *objects,
+			int num_obj, uint64_t size)
+{
+	int i;
+
+	for (i = 0; i < num_obj; i++) {
+		intel_allocator_unreserve(ahnd, objects[i].handle,
+					  size, objects[i].offset);
+		igt_debug("Unreserve i: %d, handle: %u, offset: %llx\n", i,
+			  objects[i].handle, (long long) objects[i].offset);
+		gem_close(i915, objects[i].handle);
+	}
+}
+
+static void __exec_using_allocator(uint64_t ahnd, int i915, int num_obj,
+				   bool pinned)
+{
+	const uint32_t bbe = MI_BATCH_BUFFER_END;
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 object[num_obj];
+	uint64_t stored_offsets[num_obj];
+	unsigned int flags;
+	uint64_t sz = 4096;
+	int i;
+
+	igt_assert(num_obj > 10);
+
+	flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	if (pinned)
+		flags |= EXEC_OBJECT_PINNED;
+
+	memset(object, 0, sizeof(object));
+
+	for (i = 0; i < num_obj; i++) {
+		sz = (rand() % 15 + 1) * 4096;
+		if (i == num_obj - 1)
+			sz = 4096;
+		object[i].handle = gem_create(i915, sz);
+		object[i].offset =
+			intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
+	}
+	gem_write(i915, object[--i].handle, 0, &bbe, sizeof(bbe));
+
+	for (i = 0; i < num_obj; i++) {
+		object[i].flags = flags;
+		object[i].offset = gen8_canonical_addr(object[i].offset);
+		stored_offsets[i] = object[i].offset;
+	}
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(object);
+	execbuf.buffer_count = num_obj;
+	gem_execbuf(i915, &execbuf);
+
+	for (i = 0; i < num_obj; i++) {
+		igt_assert(intel_allocator_free(ahnd, object[i].handle));
+		gem_close(i915, object[i].handle);
+	}
+
+	/* Check kernel will keep offsets even if pinned is not set. */
+	for (i = 0; i < num_obj; i++)
+		igt_assert_eq_u64(stored_offsets[i], object[i].offset);
+}
+
+static void test_allocator_basic(int fd, bool reserve)
+{
+	const int num_obj = 257, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Check that we can place objects at start/end
+	 * of the GTT using the allocator.
+	 */
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	if (reserve)
+		__reserve(ahnd, fd, true, objects, num_reserved, ressize);
+	__exec_using_allocator(ahnd, fd, num_obj, true);
+	if (reserve)
+		__unreserve(ahnd, fd, objects, num_reserved, ressize);
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
+static void test_allocator_nopin(int fd, bool reserve)
+{
+	const int num_obj = 257, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Check that we can combine manual placement with automatic
+	 * GTT placement.
+	 *
+	 * This will also check that we agree with this small sampling of
+	 * allocator placements -- that is the given the same restrictions
+	 * in execobj[] the kernel does not reject the placement due
+	 * to overlaps or invalid addresses.
+	 */
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	if (reserve)
+		__reserve(ahnd, fd, false, objects, num_reserved, ressize);
+
+	__exec_using_allocator(ahnd, fd, num_obj, false);
+
+	if (reserve)
+		__unreserve(ahnd, fd, objects, num_reserved, ressize);
+
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
+static void test_allocator_fork(int fd)
+{
+	const int num_obj = 17, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Must be called before opening allocator in multiprocess environment
+	 * due to freeing previous allocator infrastructure and proper setup
+	 * of data structures and allocation thread.
+	 */
+	intel_allocator_multiprocess_start();
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	__reserve(ahnd, fd, true, objects, num_reserved, ressize);
+
+	igt_fork(child, 8) {
+		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+		igt_until_timeout(2)
+			__exec_using_allocator(ahnd, fd, num_obj, true);
+		intel_allocator_close(ahnd);
+	}
+
+	igt_waitchildren();
+
+	__unreserve(ahnd, fd, objects, num_reserved, ressize);
+	igt_assert(intel_allocator_close(ahnd) == true);
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	igt_assert(intel_allocator_close(ahnd) == true);
+
+	intel_allocator_multiprocess_stop();
+}
+
 igt_main
 {
 	int fd = -1;
@@ -727,6 +906,21 @@ igt_main
 
 		igt_subtest("full")
 			test_full(fd);
+
+		igt_subtest("allocator-basic")
+			test_allocator_basic(fd, false);
+
+		igt_subtest("allocator-basic-reserve")
+			test_allocator_basic(fd, true);
+
+		igt_subtest("allocator-nopin")
+			test_allocator_nopin(fd, false);
+
+		igt_subtest("allocator-nopin-reserve")
+			test_allocator_nopin(fd, true);
+
+		igt_subtest("allocator-fork")
+			test_allocator_fork(fd);
 	}
 
 	igt_subtest("softpin")
diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
index eaa904fa7..fa5006d2e 100644
--- a/tests/intel-ci/fast-feedback.testlist
+++ b/tests/intel-ci/fast-feedback.testlist
@@ -39,6 +39,8 @@ igt@gem_mmap_gtt@basic
 igt@gem_render_linear_blits@basic
 igt@gem_render_tiled_blits@basic
 igt@gem_ringfill@basic-all
+igt@gem_softpin@allocator-basic
+igt@gem_softpin@allocator-basic-reserve
 igt@gem_sync@basic-all
 igt@gem_sync@basic-each
 igt@gem_tiled_blits@basic
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 31/39] tests/gem|kms: Remove intel_bb from fixture
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

As intel_bb "opens" a connection to the allocator, a test that
completes (especially one that fails) can leave the allocator in an
unknown state. As igt_core was armed to reset the allocator
infrastructure, the connection kept inside intel_bb is not valid
anymore and trying to use it leads to catastrophic errors.

Migrate intel_bb out of the fixture and create it inside each test
individually.
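
The resulting per-test pattern is (sketch):

	igt_subtest("writes") {
		ibb = intel_bb_create(data.fd, PAGE_SIZE);
		/* ... test body ... */
		intel_bb_destroy(ibb);
	}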

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/gem_caching.c              | 14 ++++++++--
 tests/i915/gem_partial_pwrite_pread.c | 40 +++++++++++++++++----------
 tests/i915/gem_render_copy.c          | 31 ++++++++++-----------
 tests/kms_big_fb.c                    | 12 +++++---
 4 files changed, 61 insertions(+), 36 deletions(-)

diff --git a/tests/i915/gem_caching.c b/tests/i915/gem_caching.c
index bdaff68a0..4e844952f 100644
--- a/tests/i915/gem_caching.c
+++ b/tests/i915/gem_caching.c
@@ -158,7 +158,6 @@ igt_main
 			flags = 0;
 		}
 		data.bops = buf_ops_create(data.fd);
-		ibb = intel_bb_create(data.fd, PAGE_SIZE);
 
 		scratch_buf = intel_buf_create(data.bops, BO_SIZE/4, 1,
 					       32, 0, I915_TILING_NONE, 0);
@@ -174,6 +173,8 @@ igt_main
 
 		igt_info("checking partial reads\n");
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		for (i = 0; i < ROUNDS; i++) {
 			uint8_t val0 = i;
 			int start, len;
@@ -195,11 +196,15 @@ igt_main
 
 			igt_progress("partial reads test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_subtest("writes") {
 		igt_require(flags & TEST_WRITE);
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		igt_info("checking partial writes\n");
 
 		for (i = 0; i < ROUNDS; i++) {
@@ -240,11 +245,15 @@ igt_main
 
 			igt_progress("partial writes test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_subtest("read-writes") {
 		igt_require((flags & TEST_BOTH) == TEST_BOTH);
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		igt_info("checking partial writes after partial reads\n");
 
 		for (i = 0; i < ROUNDS; i++) {
@@ -307,10 +316,11 @@ igt_main
 
 			igt_progress("partial read/writes test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_fixture {
-		intel_bb_destroy(ibb);
 		intel_buf_destroy(scratch_buf);
 		intel_buf_destroy(staging_buf);
 		buf_ops_destroy(data.bops);
diff --git a/tests/i915/gem_partial_pwrite_pread.c b/tests/i915/gem_partial_pwrite_pread.c
index 5a14d424b..4f81d34b8 100644
--- a/tests/i915/gem_partial_pwrite_pread.c
+++ b/tests/i915/gem_partial_pwrite_pread.c
@@ -53,7 +53,6 @@ IGT_TEST_DESCRIPTION("Test pwrite/pread consistency when touching partial"
 #define PAGE_SIZE 4096
 #define BO_SIZE (4*4096)
 
-struct intel_bb *ibb;
 struct intel_buf *scratch_buf;
 struct intel_buf *staging_buf;
 
@@ -77,7 +76,8 @@ static void *__try_gtt_map_first(data_t *data, struct intel_buf *buf,
 	return ptr;
 }
 
-static void copy_bo(struct intel_buf *src, struct intel_buf *dst)
+static void copy_bo(struct intel_bb *ibb,
+		    struct intel_buf *src, struct intel_buf *dst)
 {
 	bool has_64b_reloc;
 
@@ -109,8 +109,8 @@ static void copy_bo(struct intel_buf *src, struct intel_buf *dst)
 }
 
 static void
-blt_bo_fill(data_t *data, struct intel_buf *tmp_bo,
-		struct intel_buf *bo, uint8_t val)
+blt_bo_fill(data_t *data, struct intel_bb *ibb,
+	    struct intel_buf *tmp_bo, struct intel_buf *bo, uint8_t val)
 {
 	uint8_t *gtt_ptr;
 	int i;
@@ -124,7 +124,7 @@ blt_bo_fill(data_t *data, struct intel_buf *tmp_bo,
 
 	igt_drop_caches_set(data->drm_fd, DROP_BOUND);
 
-	copy_bo(tmp_bo, bo);
+	copy_bo(ibb, tmp_bo, bo);
 }
 
 #define MAX_BLT_SIZE 128
@@ -139,14 +139,17 @@ static void get_range(int *start, int *len)
 
 static void test_partial_reads(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial reads\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		get_range(&start, &len);
 		gem_read(data->drm_fd, scratch_buf->handle, start, tmp, len);
@@ -159,26 +162,31 @@ static void test_partial_reads(data_t *data)
 
 		igt_progress("partial reads test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void test_partial_writes(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 	uint8_t *gtt_ptr;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial writes\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		memset(tmp, i + 63, BO_SIZE);
 
 		get_range(&start, &len);
 		gem_write(data->drm_fd, scratch_buf->handle, start, tmp, len);
 
-		copy_bo(scratch_buf, staging_buf);
+		copy_bo(ibb, scratch_buf, staging_buf);
 		gtt_ptr = __try_gtt_map_first(data, staging_buf, 0);
 
 		for (j = 0; j < start; j++) {
@@ -200,19 +208,24 @@ static void test_partial_writes(data_t *data)
 
 		igt_progress("partial writes test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void test_partial_read_writes(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 	uint8_t *gtt_ptr;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial writes after partial reads\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		/* partial read */
 		get_range(&start, &len);
@@ -226,7 +239,7 @@ static void test_partial_read_writes(data_t *data)
 		/* Change contents through gtt to make the pread cachelines
 		 * stale. */
 		val += 17;
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		/* partial write */
 		memset(tmp, i + 63, BO_SIZE);
@@ -234,7 +247,7 @@ static void test_partial_read_writes(data_t *data)
 		get_range(&start, &len);
 		gem_write(data->drm_fd, scratch_buf->handle, start, tmp, len);
 
-		copy_bo(scratch_buf, staging_buf);
+		copy_bo(ibb, scratch_buf, staging_buf);
 		gtt_ptr = __try_gtt_map_first(data, staging_buf, 0);
 
 		for (j = 0; j < start; j++) {
@@ -256,6 +269,8 @@ static void test_partial_read_writes(data_t *data)
 
 		igt_progress("partial read/writes test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void do_tests(data_t *data, int cache_level, const char *suffix)
@@ -289,8 +304,6 @@ igt_main
 		data.devid = intel_get_drm_devid(data.drm_fd);
 		data.bops = buf_ops_create(data.drm_fd);
 
-		ibb = intel_bb_create(data.drm_fd, PAGE_SIZE);
-
 		/* overallocate the buffers we're actually using because */	
 		scratch_buf = intel_buf_create(data.bops, BO_SIZE/4, 1, 32, 0, I915_TILING_NONE, 0);
 		staging_buf = intel_buf_create(data.bops, BO_SIZE/4, 1, 32, 0, I915_TILING_NONE, 0);
@@ -304,7 +317,6 @@ igt_main
 	do_tests(&data, 2, "-display");
 
 	igt_fixture {
-		intel_bb_destroy(ibb);
 		intel_buf_destroy(scratch_buf);
 		intel_buf_destroy(staging_buf);
 		buf_ops_destroy(data.bops);
diff --git a/tests/i915/gem_render_copy.c b/tests/i915/gem_render_copy.c
index afc490f1a..e48b5b996 100644
--- a/tests/i915/gem_render_copy.c
+++ b/tests/i915/gem_render_copy.c
@@ -58,7 +58,6 @@ typedef struct {
 	int drm_fd;
 	uint32_t devid;
 	struct buf_ops *bops;
-	struct intel_bb *ibb;
 	igt_render_copyfunc_t render_copy;
 	igt_vebox_copyfunc_t vebox_copy;
 } data_t;
@@ -341,6 +340,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		 enum i915_compression dst_compression,
 		 int flags)
 {
+	struct intel_bb *ibb;
 	struct intel_buf ref, src_tiled, src_ccs, dst_ccs, dst;
 	struct {
 		struct intel_buf buf;
@@ -397,6 +397,8 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	    src_compressed || dst_compressed)
 		igt_require(intel_gen(data->devid) >= 9);
 
+	ibb = intel_bb_create(data->drm_fd, 4096);
+
 	for (int i = 0; i < num_src; i++)
 		scratch_buf_init(data, &src[i].buf, WIDTH, HEIGHT, src[i].tiling,
 				 I915_COMPRESSION_NONE);
@@ -456,12 +458,12 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	 */
 	if (src_mixed_tiled) {
 		if (dst_compressed)
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst, 0, 0, WIDTH, HEIGHT,
 					  &dst_ccs, 0, 0);
 
 		for (int i = 0; i < num_src; i++) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &src[i].buf,
 					  WIDTH/4, HEIGHT/4, WIDTH/2-2, HEIGHT/2-2,
 					  dst_compressed ? &dst_ccs : &dst,
@@ -469,13 +471,13 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		}
 
 		if (dst_compressed)
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst_ccs, 0, 0, WIDTH, HEIGHT,
 					  &dst, 0, 0);
 
 	} else {
 		if (src_compression == I915_COMPRESSION_RENDER) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &src_tiled, 0, 0, WIDTH, HEIGHT,
 					  &src_ccs,
 					  0, 0);
@@ -486,7 +488,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 						       "render-src_ccs.bin");
 			}
 		} else if (src_compression == I915_COMPRESSION_MEDIA) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 &src_tiled, WIDTH, HEIGHT,
 					 &src_ccs);
 			if (dump_compressed_src_buf) {
@@ -498,34 +500,34 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		}
 
 		if (dst_compression == I915_COMPRESSION_RENDER) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  src_compressed ? &src_ccs : &src_tiled,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst_ccs,
 					  0, 0);
 
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst_ccs,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst,
 					  0, 0);
 		} else if (dst_compression == I915_COMPRESSION_MEDIA) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 src_compressed ? &src_ccs : &src_tiled,
 					 WIDTH, HEIGHT,
 					 &dst_ccs);
 
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 &dst_ccs,
 					 WIDTH, HEIGHT,
 					 &dst);
 		} else if (force_vebox_dst_copy) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 src_compressed ? &src_ccs : &src_tiled,
 					 WIDTH, HEIGHT,
 					 &dst);
 		} else {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  src_compressed ? &src_ccs : &src_tiled,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst,
@@ -572,8 +574,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	for (int i = 0; i < num_src; i++)
 		scratch_buf_fini(&src[i].buf);
 
-	/* handles gone, need to clean the objects cache within intel_bb */
-	intel_bb_reset(data->ibb, true);
+	intel_bb_destroy(ibb);
 }
 
 static int opt_handler(int opt, int opt_index, void *data)
@@ -796,7 +797,6 @@ igt_main_args("dac", NULL, help_str, opt_handler, NULL)
 		data.vebox_copy = igt_get_vebox_copyfunc(data.devid);
 
 		data.bops = buf_ops_create(data.drm_fd);
-		data.ibb = intel_bb_create(data.drm_fd, 4096);
 
 		igt_fork_hang_detector(data.drm_fd);
 	}
@@ -849,7 +849,6 @@ igt_main_args("dac", NULL, help_str, opt_handler, NULL)
 
 	igt_fixture {
 		igt_stop_hang_detector();
-		intel_bb_destroy(data.ibb);
 		buf_ops_destroy(data.bops);
 	}
 }
diff --git a/tests/kms_big_fb.c b/tests/kms_big_fb.c
index 5260176e1..b6707a5a4 100644
--- a/tests/kms_big_fb.c
+++ b/tests/kms_big_fb.c
@@ -664,7 +664,6 @@ igt_main
 			data.render_copy = igt_get_render_copyfunc(data.devid);
 
 		data.bops = buf_ops_create(data.drm_fd);
-		data.ibb = intel_bb_create(data.drm_fd, 4096);
 	}
 
 	/*
@@ -677,7 +676,9 @@ igt_main
 		igt_subtest_f("%s-addfb-size-overflow",
 			      modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_size_overflow(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
@@ -685,15 +686,18 @@ igt_main
 		igt_subtest_f("%s-addfb-size-offset-overflow",
 			      modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_size_offset_overflow(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
 	for (int i = 0; i < ARRAY_SIZE(modifiers); i++) {
 		igt_subtest_f("%s-addfb", modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
-
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_addfb(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
@@ -711,7 +715,9 @@ igt_main
 					igt_require(data.format == DRM_FORMAT_C8 ||
 						    igt_fb_supported_format(data.format));
 					igt_require(igt_display_has_format_mod(&data.display, data.format, data.modifier));
+					data.ibb = intel_bb_create(data.drm_fd, 4096);
 					test_scanout(&data);
+					intel_bb_destroy(data.ibb);
 				}
 			}
 
@@ -722,8 +728,6 @@ igt_main
 
 	igt_fixture {
 		igt_display_fini(&data.display);
-
-		intel_bb_destroy(data.ibb);
 		buf_ops_destroy(data.bops);
 	}
 }
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 32/39] tests/gem_mmap_offset: Use intel_buf wrapper code instead of direct calls
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Generally, when playing with intel_buf we should use the wrapper code
instead of adding the object to intel_bb directly. The wrapper checks
the required alignment on specific gens, so it protects us from
passing improperly aligned addresses.
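
In other words, prefer the wrapper over adding the raw object, as the
hunk below does:

	/* instead of: */
	intel_bb_add_object(ibb, buf->handle, size, buf->addr.offset, 0, write);
	/* use: */
	intel_bb_add_intel_buf(ibb, buf, write);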

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/gem_mmap_offset.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index c7145832f..95c934dbf 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -614,8 +614,8 @@ static void blt_coherency(int i915)
 	dst = create_bo(bops, 1, width, height);
 	size = src->surface[0].size;
 
-	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, 0, false);
-	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, 0, true);
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
 
 	intel_bb_blt_copy(ibb,
 			  src, 0, 0, src->surface[0].stride,
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 33/39] tests/gem_ppgtt: Adapt test to use intel_bb with allocator
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Initialize the allocator within child processes to avoid creating the
allocator thread and unnecessary communication with it.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/gem_ppgtt.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_ppgtt.c b/tests/i915/gem_ppgtt.c
index 5f8b34224..da83484a6 100644
--- a/tests/i915/gem_ppgtt.c
+++ b/tests/i915/gem_ppgtt.c
@@ -104,12 +104,14 @@ static void fork_rcs_copy(int timeout, uint32_t final,
 		struct intel_buf *src;
 		unsigned long i;
 
+		/* Standalone allocator */
+		intel_allocator_init();
+
 		if (flags & CREATE_CONTEXT)
 			ctx = gem_context_create(buf_ops_get_fd(dst[child]->bops));
 
 		ibb = intel_bb_create_with_context(buf_ops_get_fd(dst[child]->bops),
 						   ctx, 4096);
-
 		i = 0;
 		igt_until_timeout(timeout) {
 			src = create_bo(dst[child]->bops,
@@ -151,6 +153,9 @@ static void fork_bcs_copy(int timeout, uint32_t final,
 		struct intel_bb *ibb;
 		unsigned long i;
 
+		/* Standalone allocator */
+		intel_allocator_init();
+
 		ibb = intel_bb_create(buf_ops_get_fd(dst[child]->bops), 4096);
 
 		i = 0;
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 34/39] tests/gem_render_copy_redux: Adapt to use intel_bb and allocator
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

As intel_bb has some strong requirements when the allocator is in use
(addresses cannot move when the simple allocator is used), ensure a
gem buffer created from flink reacquires a new address.
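
The key step when importing via flink is dropping the stale address so
a fresh one is acquired (the pattern applied below):

	flink.handle = gem_open(data->fd, name);
	flink.ibb = ibb;
	flink.addr.offset = INTEL_BUF_INVALID_ADDRESS; /* force a new address */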

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/gem_render_copy_redux.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/tests/i915/gem_render_copy_redux.c b/tests/i915/gem_render_copy_redux.c
index 40308a6e6..6c8d7f755 100644
--- a/tests/i915/gem_render_copy_redux.c
+++ b/tests/i915/gem_render_copy_redux.c
@@ -63,7 +63,6 @@ typedef struct {
 	int fd;
 	uint32_t devid;
 	struct buf_ops *bops;
-	struct intel_bb *ibb;
 	igt_render_copyfunc_t render_copy;
 	uint32_t linear[WIDTH * HEIGHT];
 } data_t;
@@ -77,13 +76,10 @@ static void data_init(data_t *data)
 	data->render_copy = igt_get_render_copyfunc(data->devid);
 	igt_require_f(data->render_copy,
 		      "no render-copy function\n");
-
-	data->ibb = intel_bb_create(data->fd, 4096);
 }
 
 static void data_fini(data_t *data)
 {
-	intel_bb_destroy(data->ibb);
 	buf_ops_destroy(data->bops);
 	close(data->fd);
 }
@@ -126,15 +122,17 @@ scratch_buf_check(data_t *data, struct intel_buf *buf, int x, int y,
 
 static void copy(data_t *data)
 {
+	struct intel_bb *ibb;
 	struct intel_buf src, dst;
 
+	ibb = intel_bb_create(data->fd, 4096);
 	scratch_buf_init(data, &src, WIDTH, HEIGHT, STRIDE, SRC_COLOR);
 	scratch_buf_init(data, &dst, WIDTH, HEIGHT, STRIDE, DST_COLOR);
 
 	scratch_buf_check(data, &src, WIDTH / 2, HEIGHT / 2, SRC_COLOR);
 	scratch_buf_check(data, &dst, WIDTH / 2, HEIGHT / 2, DST_COLOR);
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &src, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH / 2, HEIGHT / 2);
 
@@ -143,11 +141,13 @@ static void copy(data_t *data)
 
 	scratch_buf_fini(data, &src);
 	scratch_buf_fini(data, &dst);
+	intel_bb_destroy(ibb);
 }
 
 static void copy_flink(data_t *data)
 {
 	data_t local;
+	struct intel_bb *ibb, *local_ibb;
 	struct intel_buf src, dst;
 	struct intel_buf local_src, local_dst;
 	struct intel_buf flink;
@@ -155,32 +155,36 @@ static void copy_flink(data_t *data)
 
 	data_init(&local);
 
+	ibb = intel_bb_create(data->fd, 4096);
+	local_ibb = intel_bb_create(local.fd, 4096);
 	scratch_buf_init(data, &src, WIDTH, HEIGHT, STRIDE, 0);
 	scratch_buf_init(data, &dst, WIDTH, HEIGHT, STRIDE, DST_COLOR);
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &src, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH, HEIGHT);
 
 	scratch_buf_init(&local, &local_src, WIDTH, HEIGHT, STRIDE, 0);
 	scratch_buf_init(&local, &local_dst, WIDTH, HEIGHT, STRIDE, SRC_COLOR);
 
-	local.render_copy(local.ibb,
+	local.render_copy(local_ibb,
 			  &local_src, 0, 0, WIDTH, HEIGHT,
 			  &local_dst, WIDTH, HEIGHT);
 
 	name = gem_flink(local.fd, local_dst.handle);
 	flink = local_dst;
 	flink.handle = gem_open(data->fd, name);
+	flink.ibb = ibb;
+	flink.addr.offset = INTEL_BUF_INVALID_ADDRESS;
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &flink, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH / 2, HEIGHT / 2);
 
 	scratch_buf_check(data, &dst, 10, 10, DST_COLOR);
 	scratch_buf_check(data, &dst, WIDTH - 10, HEIGHT - 10, SRC_COLOR);
 
-	intel_bb_reset(data->ibb, true);
+	intel_bb_reset(ibb, true);
 	scratch_buf_fini(data, &src);
 	scratch_buf_fini(data, &flink);
 	scratch_buf_fini(data, &dst);
@@ -188,6 +192,8 @@ static void copy_flink(data_t *data)
 	scratch_buf_fini(&local, &local_src);
 	scratch_buf_fini(&local, &local_dst);
 
+	intel_bb_destroy(local_ibb);
+	intel_bb_destroy(ibb);
 	data_fini(&local);
 }
 
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 35/39] tests/perf.c: Remove buffer from batch
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Currently we need to ensure an intel_buf is part of a single ibb, due
to the fact that it acquires its address from the allocator, so we
cannot keep it in two separate ibbs (theoretically this is possible
when the different ibbs are in the same context, but it is not
currently implemented).
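
Hence, before a buffer is reused on another batch, drop it from the
batch that implicitly added it (the pattern this patch applies):

	intel_bb_remove_intel_buf(ibb0, dst_buf);
	/* dst_buf may now be used with ibb1 */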

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/perf.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tests/i915/perf.c b/tests/i915/perf.c
index 664fd0a94..e641d5d2d 100644
--- a/tests/i915/perf.c
+++ b/tests/i915/perf.c
@@ -3527,6 +3527,9 @@ gen8_test_single_ctx_render_target_writes_a_counter(void)
 			/* Another redundant flush to clarify batch bo is free to reuse */
 			intel_bb_flush_render(ibb0);
 
+			/* Remove intel_buf from ibb0 added implicitly in rendercopy */
+			intel_bb_remove_intel_buf(ibb0, dst_buf);
+
 			/* submit two copies on the other context to avoid a false
 			 * positive in case the driver somehow ended up filtering for
 			 * context1
@@ -3919,6 +3922,9 @@ static void gen12_single_ctx_helper(void)
 				     BO_REPORT_ID0);
 	intel_bb_flush_render(ibb0);
 
+	/* Remove intel_buf from ibb0 added implicitly in rendercopy */
+	intel_bb_remove_intel_buf(ibb0, dst_buf);
+
 	/* This is the work/context that is measured for counter increments */
 	render_copy(ibb0,
 		    &src[0], 0, 0, width, height,
@@ -3965,6 +3971,9 @@ static void gen12_single_ctx_helper(void)
 				     BO_REPORT_ID3);
 	intel_bb_flush_render(ibb1);
 
+	/* Remove intel_buf from ibb1 added implicitly in rendercopy */
+	intel_bb_remove_intel_buf(ibb1, dst_buf);
+
 	/* Submit an mi-rpc to context0 after all measurable work */
 #define BO_TIMESTAMP_OFFSET1 1032
 #define BO_REPORT_OFFSET1 256
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 36/39] tests/gem_linear_blits: Use intel allocator
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Use the intel allocator directly, without the intel-bb infrastructure.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 tests/i915/gem_linear_blits.c | 90 ++++++++++++++++++++++++++---------
 1 file changed, 68 insertions(+), 22 deletions(-)

diff --git a/tests/i915/gem_linear_blits.c b/tests/i915/gem_linear_blits.c
index cae42d52a..7b7cf05a5 100644
--- a/tests/i915/gem_linear_blits.c
+++ b/tests/i915/gem_linear_blits.c
@@ -53,10 +53,13 @@ IGT_TEST_DESCRIPTION("Test doing many blits with a working set larger than the"
 #define WIDTH 512
 #define HEIGHT 512
 
+/* We don't have alignment detection yet, so assume worst case scenario */
+#define ALIGNMENT (2048*1024)
+
 static uint32_t linear[WIDTH*HEIGHT];
 
-static void
-copy(int fd, uint32_t dst, uint32_t src)
+static void copy(int fd, uint64_t ahnd, uint32_t dst, uint32_t src,
+		 uint64_t dst_offset, uint64_t src_offset, bool do_relocs)
 {
 	uint32_t batch[12];
 	struct drm_i915_gem_relocation_entry reloc[2];
@@ -64,6 +67,20 @@ copy(int fd, uint32_t dst, uint32_t src)
 	struct drm_i915_gem_execbuffer2 exec;
 	int i = 0;
 
+	memset(obj, 0, sizeof(obj));
+	obj[0].handle = dst;
+	obj[0].offset = CANONICAL(dst_offset);
+	obj[0].flags = EXEC_OBJECT_WRITE | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	obj[1].handle = src;
+	obj[1].offset = CANONICAL(src_offset);
+	obj[1].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	obj[2].handle = gem_create(fd, 4096);
+	obj[2].offset = intel_allocator_alloc(ahnd, obj[2].handle,
+			4096, ALIGNMENT);
+	obj[2].offset = CANONICAL(obj[2].offset);
+	obj[2].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
 	batch[i++] = XY_SRC_COPY_BLT_CMD |
 		  XY_SRC_COPY_BLT_WRITE_ALPHA |
 		  XY_SRC_COPY_BLT_WRITE_RGB;
@@ -77,22 +94,24 @@ copy(int fd, uint32_t dst, uint32_t src)
 		  WIDTH*4;
 	batch[i++] = 0; /* dst x1,y1 */
 	batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */
-	batch[i++] = 0; /* dst reloc */
+	batch[i++] = obj[0].offset;
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		batch[i++] = 0;
+		batch[i++] = obj[0].offset >> 32;
 	batch[i++] = 0; /* src x1,y1 */
 	batch[i++] = WIDTH*4;
-	batch[i++] = 0; /* src reloc */
+	batch[i++] = obj[1].offset;
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		batch[i++] = 0;
+		batch[i++] = obj[1].offset >> 32;
 	batch[i++] = MI_BATCH_BUFFER_END;
 	batch[i++] = MI_NOOP;
 
+	gem_write(fd, obj[2].handle, 0, batch, i * sizeof(batch[0]));
+
 	memset(reloc, 0, sizeof(reloc));
 	reloc[0].target_handle = dst;
 	reloc[0].delta = 0;
 	reloc[0].offset = 4 * sizeof(batch[0]);
-	reloc[0].presumed_offset = 0;
+	reloc[0].presumed_offset = obj[0].offset;
 	reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
 
@@ -101,25 +120,27 @@ copy(int fd, uint32_t dst, uint32_t src)
 	reloc[1].offset = 7 * sizeof(batch[0]);
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
 		reloc[1].offset += sizeof(batch[0]);
-	reloc[1].presumed_offset = 0;
+	reloc[1].presumed_offset = obj[1].offset;
 	reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[1].write_domain = 0;
 
-	memset(obj, 0, sizeof(obj));
-	obj[0].handle = dst;
-	obj[1].handle = src;
-	obj[2].handle = gem_create(fd, 4096);
-	gem_write(fd, obj[2].handle, 0, batch, i * sizeof(batch[0]));
-	obj[2].relocation_count = 2;
-	obj[2].relocs_ptr = to_user_pointer(reloc);
+	if (do_relocs) {
+		obj[2].relocation_count = ARRAY_SIZE(reloc);
+		obj[2].relocs_ptr = to_user_pointer(reloc);
+	} else {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	memset(&exec, 0, sizeof(exec));
 	exec.buffers_ptr = to_user_pointer(obj);
-	exec.buffer_count = 3;
+	exec.buffer_count = ARRAY_SIZE(obj);
 	exec.batch_len = i * sizeof(batch[0]);
 	exec.flags = gem_has_blt(fd) ? I915_EXEC_BLT : 0;
-
 	gem_execbuf(fd, &exec);
+
+	intel_allocator_free(ahnd, obj[2].handle);
 	gem_close(fd, obj[2].handle);
 }
 
@@ -157,17 +178,28 @@ check_bo(int fd, uint32_t handle, uint32_t val)
 	igt_assert_eq(num_errors, 0);
 }
 
-static void run_test(int fd, int count)
+static void run_test(int fd, int count, bool do_relocs)
 {
 	uint32_t *handle, *start_val;
+	uint64_t *offset, ahnd;
 	uint32_t start = 0;
 	int i;
 
+	ahnd = intel_allocator_open(fd, 0, do_relocs ?
+					    INTEL_ALLOCATOR_RELOC :
+					    INTEL_ALLOCATOR_SIMPLE);
+
 	handle = malloc(sizeof(uint32_t) * count * 2);
+	offset = calloc(1, sizeof(uint64_t) * count);
+	igt_assert_f(handle && offset, "Allocation failed\n");
 	start_val = handle + count;
 
 	for (i = 0; i < count; i++) {
 		handle[i] = create_bo(fd, start);
+
+		offset[i] = intel_allocator_alloc(ahnd, handle[i],
+						  sizeof(linear), ALIGNMENT);
+
 		start_val[i] = start;
 		start += 1024 * 1024 / 4;
 	}
@@ -178,17 +210,22 @@ static void run_test(int fd, int count)
 
 		if (src == dst)
 			continue;
+		copy(fd, ahnd, handle[dst], handle[src],
+		     offset[dst], offset[src], do_relocs);
 
-		copy(fd, handle[dst], handle[src]);
 		start_val[dst] = start_val[src];
 	}
 
 	for (i = 0; i < count; i++) {
 		check_bo(fd, handle[i], start_val[i]);
+		intel_allocator_free(ahnd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
 
 	free(handle);
+	free(offset);
+
+	intel_allocator_close(ahnd);
 }
 
 #define MAX_32b ((1ull << 32) - 4096)
@@ -197,16 +234,21 @@ igt_main
 {
 	const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
 	uint64_t count = 0;
+	bool do_relocs;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(fd);
 		gem_require_blitter(fd);
+		do_relocs = !gem_uses_ppgtt(fd);
 
 		count = gem_aperture_size(fd);
 		if (count >> 32)
 			count = MAX_32b;
+		else
+			do_relocs = true;
+
 		count = 3 + count / (1024*1024);
 		igt_require(count > 1);
 		intel_require_memory(count, sizeof(linear), CHECK_RAM);
@@ -216,19 +258,23 @@ igt_main
 	}
 
 	igt_subtest("basic")
-		run_test(fd, 2);
+		run_test(fd, 2, do_relocs);
 
 	igt_subtest("normal") {
+		intel_allocator_multiprocess_start();
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, do_relocs);
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 	}
 
 	igt_subtest("interruptible") {
+		intel_allocator_multiprocess_start();
 		igt_fork_signal_helper();
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, do_relocs);
 		igt_waitchildren();
 		igt_stop_signal_helper();
+		intel_allocator_multiprocess_stop();
 	}
 }
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 37/39] lib/intel_allocator: drop kill_children()
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

Use igt_waitchildren_timeout() instead of the custom kill_children()
function to avoid killing helper processes (hang detector, ...).

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
---
 lib/intel_allocator.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 6298a5a20..0b33b8b24 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -715,16 +715,6 @@ static int handle_request(struct alloc_req *req, struct alloc_resp *resp)
 	return ret;
 }
 
-static void kill_children(int sig)
-{
-	sighandler_t old;
-
-	old = signal(sig, SIG_IGN);
-	igt_assert(old != SIG_ERR);
-	kill(-getpgrp(), sig);
-	igt_assert(signal(sig, old) != SIG_ERR);
-}
-
 static void *allocator_thread_loop(void *data)
 {
 	struct alloc_req req;
@@ -744,7 +734,7 @@ static void *allocator_thread_loop(void *data)
 		if (ret == -1) {
 			igt_warn("Error receiving request in thread, ret = %d [%s]\n",
 				 ret, strerror(errno));
-			kill_children(SIGINT);
+			igt_waitchildren_timeout(1, "Stopping children, error receiving request\n");
 			return (void *) -1;
 		}
 
@@ -766,7 +756,7 @@ static void *allocator_thread_loop(void *data)
 			igt_warn("Error sending response in thread, ret = %d [%s]\n",
 				 ret, strerror(errno));
 
-			kill_children(SIGINT);
+			igt_waitchildren_timeout(1, "Stopping children, error sending response\n");
 			return (void *) -1;
 		}
 	}
@@ -858,7 +848,6 @@ void intel_allocator_multiprocess_stop(void)
 		pthread_join(allocator_thread, NULL);
 
 		/* But we're not sure does child will stuck */
-		kill_children(SIGINT);
 		igt_waitchildren_timeout(5, "Stopping children");
 		multiprocess = false;
 	}
-- 
2.26.0

* [igt-dev] [PATCH i-g-t v30 38/39] lib/intel_allocator: Add alloc function which allows passing strategy argument
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC
  To: igt-dev

To use spinners with no-reloc we need to allocate offsets for them
from an already opened allocator. As we don't know what strategy was
chosen at open time (likely HIGH_TO_LOW for the SIMPLE allocator), we
want to override it for spinners (the expectation is that they reside
at low addresses).

Extend the allocator API, adding intel_allocator_alloc_with_strategy()
to support rewriting the spinners.
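
A caller can then override the default placement for a single
allocation, e.g. to push a spinner towards low addresses (a sketch;
the spinner rework itself is follow-up work):

	offset = intel_allocator_alloc_with_strategy(ahnd, handle, size, 0,
						     ALLOC_STRATEGY_LOW_TO_HIGH);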

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 lib/intel_allocator.c            | 47 +++++++++++++++++++++++++++-----
 lib/intel_allocator.h            | 10 +++++--
 lib/intel_allocator_msgchannel.h |  1 +
 lib/intel_allocator_random.c     |  4 ++-
 lib/intel_allocator_reloc.c      |  4 ++-
 lib/intel_allocator_simple.c     | 39 ++++++++++++++------------
 6 files changed, 76 insertions(+), 29 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 0b33b8b24..8a2e607c3 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -579,15 +579,17 @@ static int handle_request(struct alloc_req *req, struct alloc_resp *resp)
 			resp->alloc.offset = ial->alloc(ial,
 							req->alloc.handle,
 							req->alloc.size,
-							req->alloc.alignment);
+							req->alloc.alignment,
+							req->alloc.strategy);
 			alloc_info("<alloc> [tid: %ld] ahnd: %" PRIx64
 				   ", ctx: %u, vm: %u, handle: %u"
 				   ", size: 0x%" PRIx64 ", offset: 0x%" PRIx64
-				   ", alignment: 0x%" PRIx64 "\n",
+				   ", alignment: 0x%" PRIx64 ", strategy: %u\n",
 				   (long) req->tid, req->allocator_handle,
 				   al->ctx, al->vm,
 				   req->alloc.handle, req->alloc.size,
-				   resp->alloc.offset, req->alloc.alignment);
+				   resp->alloc.offset, req->alloc.alignment,
+				   req->alloc.strategy);
 			break;
 
 		case REQ_FREE:
@@ -1040,13 +1042,15 @@ void intel_allocator_get_address_range(uint64_t allocator_handle,
  * range returns ALLOC_INVALID_ADDRESS.
  */
 uint64_t __intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
-				 uint64_t size, uint64_t alignment)
+				 uint64_t size, uint64_t alignment,
+				 enum allocator_strategy strategy)
 {
 	struct alloc_req req = { .request_type = REQ_ALLOC,
 				 .allocator_handle = allocator_handle,
 				 .alloc.handle = handle,
 				 .alloc.size = size,
-				 .alloc.alignment = alignment };
+				 .alloc.alignment = alignment,
+				 .alloc.strategy = strategy };
 	struct alloc_resp resp;
 
 	igt_assert(handle_request(&req, &resp) == 0);
@@ -1063,7 +1067,8 @@ uint64_t __intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
  * @alignment: determines object alignment
  *
  * Same as __intel_allocator_alloc() but asserts if allocator can't return
- * valid address.
+ * valid address. Uses default allocation strategy chosen during opening
+ * the allocator.
  */
 uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
 			       uint64_t size, uint64_t alignment)
@@ -1071,12 +1076,40 @@ uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
 	uint64_t offset;
 
 	offset = __intel_allocator_alloc(allocator_handle, handle,
-					 size, alignment);
+					 size, alignment,
+					 ALLOC_STRATEGY_NONE);
 	igt_assert(offset != ALLOC_INVALID_ADDRESS);
 
 	return offset;
 }
 
+/**
+ * intel_allocator_alloc_with_strategy:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @alignment: determines object alignment
+ * @strategy: strategy of allocation
+ *
+ * Same as __intel_allocator_alloc() but asserts if allocator can't return
+ * valid address. Use @strategy instead of default chosen during opening
+ * the allocator.
+ */
+uint64_t intel_allocator_alloc_with_strategy(uint64_t allocator_handle,
+					     uint32_t handle,
+					     uint64_t size, uint64_t alignment,
+					     enum allocator_strategy strategy)
+{
+	uint64_t offset;
+
+	offset = __intel_allocator_alloc(allocator_handle, handle,
+					 size, alignment, strategy);
+	igt_assert(offset != ALLOC_INVALID_ADDRESS);
+
+	return offset;
+}
+
+
 /**
  * intel_allocator_free:
  * @allocator_handle: handle to an allocator
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
index 9b7bd0908..c14f57b4d 100644
--- a/lib/intel_allocator.h
+++ b/lib/intel_allocator.h
@@ -141,7 +141,8 @@ struct intel_allocator {
 	void (*get_address_range)(struct intel_allocator *ial,
 				  uint64_t *startp, uint64_t *endp);
 	uint64_t (*alloc)(struct intel_allocator *ial, uint32_t handle,
-			  uint64_t size, uint64_t alignment);
+			  uint64_t size, uint64_t alignment,
+			  enum allocator_strategy strategy);
 	bool (*is_allocated)(struct intel_allocator *ial, uint32_t handle,
 			     uint64_t size, uint64_t alignment);
 	bool (*reserve)(struct intel_allocator *ial,
@@ -181,9 +182,14 @@ bool intel_allocator_close(uint64_t allocator_handle);
 void intel_allocator_get_address_range(uint64_t allocator_handle,
 				       uint64_t *startp, uint64_t *endp);
 uint64_t __intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
-				 uint64_t size, uint64_t alignment);
+				 uint64_t size, uint64_t alignment,
+				 enum allocator_strategy strategy);
 uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
 			       uint64_t size, uint64_t alignment);
+uint64_t intel_allocator_alloc_with_strategy(uint64_t allocator_handle,
+					     uint32_t handle,
+					     uint64_t size, uint64_t alignment,
+					     enum allocator_strategy strategy);
 bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle);
 bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
 				  uint64_t size, uint64_t offset);
diff --git a/lib/intel_allocator_msgchannel.h b/lib/intel_allocator_msgchannel.h
index ac6edfb9e..c7a738a08 100644
--- a/lib/intel_allocator_msgchannel.h
+++ b/lib/intel_allocator_msgchannel.h
@@ -65,6 +65,7 @@ struct alloc_req {
 			uint32_t handle;
 			uint64_t size;
 			uint64_t alignment;
+			uint8_t strategy;
 		} alloc;
 
 		struct {
diff --git a/lib/intel_allocator_random.c b/lib/intel_allocator_random.c
index d804e3318..3d9a78f17 100644
--- a/lib/intel_allocator_random.c
+++ b/lib/intel_allocator_random.c
@@ -45,12 +45,14 @@ static void intel_allocator_random_get_address_range(struct intel_allocator *ial
 
 static uint64_t intel_allocator_random_alloc(struct intel_allocator *ial,
 					     uint32_t handle, uint64_t size,
-					     uint64_t alignment)
+					     uint64_t alignment,
+					     enum allocator_strategy strategy)
 {
 	struct intel_allocator_random *ialr = ial->priv;
 	uint64_t offset;
 
 	(void) handle;
+	(void) strategy;
 
 	/* randomize the address, we try to avoid relocations */
 	do {
diff --git a/lib/intel_allocator_reloc.c b/lib/intel_allocator_reloc.c
index abf9c30cd..e8af787b0 100644
--- a/lib/intel_allocator_reloc.c
+++ b/lib/intel_allocator_reloc.c
@@ -46,12 +46,14 @@ static void intel_allocator_reloc_get_address_range(struct intel_allocator *ial,
 
 static uint64_t intel_allocator_reloc_alloc(struct intel_allocator *ial,
 					    uint32_t handle, uint64_t size,
-					    uint64_t alignment)
+					    uint64_t alignment,
+					    enum allocator_strategy strategy)
 {
 	struct intel_allocator_reloc *ialr = ial->priv;
 	uint64_t offset, aligned_offset;
 
 	(void) handle;
+	(void) strategy;
 
 	alignment = max(alignment, 4096);
 	aligned_offset = ALIGN(ialr->offset, alignment);
diff --git a/lib/intel_allocator_simple.c b/lib/intel_allocator_simple.c
index a419955af..963d8d257 100644
--- a/lib/intel_allocator_simple.c
+++ b/lib/intel_allocator_simple.c
@@ -25,12 +25,7 @@ intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
 
 struct simple_vma_heap {
 	struct igt_list_head holes;
-
-	/* If true, simple_vma_heap_alloc will prefer high addresses
-	 *
-	 * Default is true.
-	 */
-	bool alloc_high;
+	enum allocator_strategy strategy;
 };
 
 struct simple_vma_hole {
@@ -220,14 +215,11 @@ static void simple_vma_heap_init(struct simple_vma_heap *heap,
 	IGT_INIT_LIST_HEAD(&heap->holes);
 	simple_vma_heap_free(heap, start, size);
 
-	switch (strategy) {
-	case ALLOC_STRATEGY_LOW_TO_HIGH:
-		heap->alloc_high = false;
-		break;
-	case ALLOC_STRATEGY_HIGH_TO_LOW:
-	default:
-		heap->alloc_high = true;
-	}
+	/* Heap supports only LOW_TO_HIGH/HIGH_TO_LOW; default to the latter */
+	if (strategy == ALLOC_STRATEGY_LOW_TO_HIGH)
+		heap->strategy = strategy;
+	else
+		heap->strategy = ALLOC_STRATEGY_HIGH_TO_LOW;
 }
 
 static void simple_vma_heap_finish(struct simple_vma_heap *heap)
@@ -294,7 +286,8 @@ static void simple_vma_hole_alloc(struct simple_vma_hole *hole,
 
 static bool simple_vma_heap_alloc(struct simple_vma_heap *heap,
 				  uint64_t *offset, uint64_t size,
-				  uint64_t alignment)
+				  uint64_t alignment,
+				  enum allocator_strategy strategy)
 {
 	struct simple_vma_hole *hole, *tmp;
 	uint64_t misalign;
@@ -305,7 +298,16 @@ static bool simple_vma_heap_alloc(struct simple_vma_heap *heap,
 
 	simple_vma_heap_validate(heap);
 
-	if (heap->alloc_high) {
+	/* Ensure we support only NONE/LOW_TO_HIGH/HIGH_TO_LOW strategies */
+	igt_assert(strategy == ALLOC_STRATEGY_NONE ||
+		   strategy == ALLOC_STRATEGY_LOW_TO_HIGH ||
+		   strategy == ALLOC_STRATEGY_HIGH_TO_LOW);
+
+	/* NONE means fall back to the default strategy chosen at open */
+	if (strategy == ALLOC_STRATEGY_NONE)
+		strategy = heap->strategy;
+
+	if (strategy == ALLOC_STRATEGY_HIGH_TO_LOW) {
 		simple_vma_foreach_hole_safe(hole, heap, tmp) {
 			if (size > hole->size)
 				continue;
@@ -412,7 +414,8 @@ static bool simple_vma_heap_alloc_addr(struct intel_allocator_simple *ials,
 
 static uint64_t intel_allocator_simple_alloc(struct intel_allocator *ial,
 					     uint32_t handle, uint64_t size,
-					     uint64_t alignment)
+					     uint64_t alignment,
+					     enum allocator_strategy strategy)
 {
 	struct intel_allocator_record *rec;
 	struct intel_allocator_simple *ials;
@@ -430,7 +433,7 @@ static uint64_t intel_allocator_simple_alloc(struct intel_allocator *ial,
 		igt_assert(rec->size == size);
 	} else {
 		if (!simple_vma_heap_alloc(&ials->heap, &offset,
-					   size, alignment))
+					   size, alignment, strategy))
 			return ALLOC_INVALID_ADDRESS;
 
 		rec = malloc(sizeof(*rec));
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 43+ messages in thread
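
A minimal usage sketch of the call added above, given an open i915 fd (the
allocator open/close helpers come from earlier patches in this series;
INTEL_ALLOCATOR_SIMPLE, ctx 0, the handle ids and the 4096/0
size/alignment values are illustrative):

	uint64_t ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
	uint64_t hi, lo;

	hi = intel_allocator_alloc_with_strategy(ahnd, 1, 4096, 0,
						 ALLOC_STRATEGY_HIGH_TO_LOW);
	lo = intel_allocator_alloc_with_strategy(ahnd, 2, 4096, 0,
						 ALLOC_STRATEGY_LOW_TO_HIGH);
	igt_assert(hi > lo); /* opposite ends of the vm range */

	intel_allocator_free(ahnd, 1);
	intel_allocator_free(ahnd, 2);
	intel_allocator_close(ahnd);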

* [igt-dev] [PATCH i-g-t v30 39/39] tests/api_intel_allocator: Check alloc with strategy API
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (37 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 38/39] lib/intel_allocator: Add alloc function which allows passing strategy argument Zbigniew Kempczyński
@ 2021-04-02  9:38 ` Zbigniew Kempczyński
  2021-04-02 10:16 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev33) Patchwork
  2021-04-02 11:13 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  40 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-02  9:38 UTC (permalink / raw)
  To: igt-dev

Verify that the strategy argument is properly handled in the allocator's
alloc() call.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 tests/i915/api_intel_allocator.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index 7ff92a174..182d9ba79 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -50,10 +50,19 @@ static void alloc_simple(int fd)
 
 	intel_allocator_get_address_range(ahnd, &start, &end);
 	offset0 = intel_allocator_alloc(ahnd, 1, end - start, 0);
-	offset1 = __intel_allocator_alloc(ahnd, 2, 4096, 0);
+	offset1 = __intel_allocator_alloc(ahnd, 2, 4096, 0, ALLOC_STRATEGY_NONE);
 	igt_assert(offset1 == ALLOC_INVALID_ADDRESS);
 	intel_allocator_free(ahnd, 1);
 
+	offset0 = intel_allocator_alloc_with_strategy(ahnd, 1, 4096, 0,
+						      ALLOC_STRATEGY_HIGH_TO_LOW);
+	offset1 = intel_allocator_alloc_with_strategy(ahnd, 2, 4096, 0,
+						      ALLOC_STRATEGY_LOW_TO_HIGH);
+	igt_assert(offset0 > offset1);
+
+	intel_allocator_free(ahnd, 1);
+	intel_allocator_free(ahnd, 2);
+
 	igt_assert_eq(intel_allocator_close(ahnd), true);
 }
 
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 43+ messages in thread
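
Worth noting about the semantics this test exercises: in the simple
allocator (patch 38/39), ALLOC_STRATEGY_NONE falls back to the per-heap
default chosen at open time, so the two calls in this sketch place objects
from the same end of the range (handles 3/4 and the 4096 size are
illustrative):

	uint64_t offset_a, offset_b;

	offset_a = __intel_allocator_alloc(ahnd, 3, 4096, 0,
					   ALLOC_STRATEGY_NONE);
	offset_b = intel_allocator_alloc(ahnd, 4, 4096, 0);
	/* Same placement policy; the only difference is that
	 * intel_allocator_alloc() asserts on failure while
	 * __intel_allocator_alloc() returns ALLOC_INVALID_ADDRESS.
	 */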

* [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev33)
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (38 preceding siblings ...)
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 39/39] tests/api_intel_allocator: Check alloc with strategy API Zbigniew Kempczyński
@ 2021-04-02 10:16 ` Patchwork
  2021-04-02 11:13 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  40 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-04-02 10:16 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 7445 bytes --]

== Series Details ==

Series: Introduce IGT allocator (rev33)
URL   : https://patchwork.freedesktop.org/series/82954/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9926 -> IGTPW_5689
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/index.html

New tests
---------

  New tests have been introduced between CI_DRM_9926 and IGTPW_5689:

### New IGT tests (2) ###

  * igt@gem_softpin@allocator-basic:
    - Statuses : 29 pass(s) 9 skip(s)
    - Exec time: [0.0, 0.94] s

  * igt@gem_softpin@allocator-basic-reserve:
    - Statuses : 29 pass(s) 9 skip(s)
    - Exec time: [0.0, 1.00] s

  

Known issues
------------

  Here are the changes found in IGTPW_5689 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_flink_basic@bad-open:
    - fi-tgl-y:           [PASS][1] -> [DMESG-WARN][2] ([i915#402])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/fi-tgl-y/igt@gem_flink_basic@bad-open.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-tgl-y/igt@gem_flink_basic@bad-open.html

  * {igt@gem_softpin@allocator-basic} (NEW):
    - fi-pnv-d510:        NOTRUN -> [SKIP][3] ([fdo#109271]) +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-pnv-d510/igt@gem_softpin@allocator-basic.html
    - fi-bwr-2160:        NOTRUN -> [SKIP][4] ([fdo#109271]) +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-bwr-2160/igt@gem_softpin@allocator-basic.html
    - {fi-hsw-gt1}:       NOTRUN -> [SKIP][5] ([fdo#109271]) +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-hsw-gt1/igt@gem_softpin@allocator-basic.html
    - fi-snb-2520m:       NOTRUN -> [SKIP][6] ([fdo#109271]) +1 similar issue
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-snb-2520m/igt@gem_softpin@allocator-basic.html
    - fi-ivb-3770:        NOTRUN -> [SKIP][7] ([fdo#109271]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-ivb-3770/igt@gem_softpin@allocator-basic.html
    - fi-snb-2600:        NOTRUN -> [SKIP][8] ([fdo#109271]) +1 similar issue
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-snb-2600/igt@gem_softpin@allocator-basic.html

  * {igt@gem_softpin@allocator-basic-reserve} (NEW):
    - fi-elk-e7500:       NOTRUN -> [SKIP][9] ([fdo#109271]) +1 similar issue
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-elk-e7500/igt@gem_softpin@allocator-basic-reserve.html
    - fi-hsw-4770:        NOTRUN -> [SKIP][10] ([fdo#109271]) +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-hsw-4770/igt@gem_softpin@allocator-basic-reserve.html
    - fi-ilk-650:         NOTRUN -> [SKIP][11] ([fdo#109271]) +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-ilk-650/igt@gem_softpin@allocator-basic-reserve.html

  * igt@i915_selftest@live@hangcheck:
    - fi-snb-2600:        [PASS][12] -> [INCOMPLETE][13] ([i915#2782])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/fi-snb-2600/igt@i915_selftest@live@hangcheck.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-snb-2600/igt@i915_selftest@live@hangcheck.html

  
#### Possible fixes ####

  * igt@gem_linear_blits@basic:
    - {fi-rkl-11500t}:    [FAIL][14] ([i915#3277]) -> [PASS][15]
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/fi-rkl-11500t/igt@gem_linear_blits@basic.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-rkl-11500t/igt@gem_linear_blits@basic.html

  * igt@gem_ringfill@basic-all:
    - fi-tgl-y:           [DMESG-WARN][16] ([i915#402]) -> [PASS][17]
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/fi-tgl-y/igt@gem_ringfill@basic-all.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-tgl-y/igt@gem_ringfill@basic-all.html

  * igt@gem_tiled_blits@basic:
    - {fi-rkl-11500t}:    [FAIL][18] ([i915#3278]) -> [PASS][19] +3 similar issues
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/fi-rkl-11500t/igt@gem_tiled_blits@basic.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-rkl-11500t/igt@gem_tiled_blits@basic.html

  * igt@i915_pm_rpm@module-reload:
    - fi-kbl-soraka:      [DMESG-WARN][20] ([i915#1982]) -> [PASS][21]
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/fi-kbl-soraka/igt@i915_pm_rpm@module-reload.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/fi-kbl-soraka/igt@i915_pm_rpm@module-reload.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1222]: https://gitlab.freedesktop.org/drm/intel/issues/1222
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2782]: https://gitlab.freedesktop.org/drm/intel/issues/2782
  [i915#3277]: https://gitlab.freedesktop.org/drm/intel/issues/3277
  [i915#3278]: https://gitlab.freedesktop.org/drm/intel/issues/3278
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402


Participating hosts (47 -> 40)
------------------------------

  Missing    (7): fi-ilk-m540 fi-hsw-4200u fi-tgl-u2 fi-bsw-cyan fi-ctg-p8600 fi-dg1-1 fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_6056 -> IGTPW_5689

  CI-20190529: 20190529
  CI_DRM_9926: c73cd6993305be64f07fc110883a12025ce3d3d3 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_5689: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/index.html
  IGT_6056: 84e6a7e19ccc7fafc46f372e756cad9d4aa093f7 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools



== Testlist changes ==

+igt@api_intel_allocator@alloc-simple
+igt@api_intel_allocator@execbuf-with-allocator
+igt@api_intel_allocator@fork-simple-once
+igt@api_intel_allocator@fork-simple-stress
+igt@api_intel_allocator@fork-simple-stress-signal
+igt@api_intel_allocator@open-vm
+igt@api_intel_allocator@random-allocator
+igt@api_intel_allocator@reloc-allocator
+igt@api_intel_allocator@reopen
+igt@api_intel_allocator@reopen-fork
+igt@api_intel_allocator@reserve
+igt@api_intel_allocator@reserve-simple
+igt@api_intel_allocator@reuse
+igt@api_intel_allocator@simple-allocator
+igt@api_intel_allocator@standalone
+igt@api_intel_allocator@two-level-inception
+igt@api_intel_allocator@two-level-inception-interruptible
+igt@api_intel_bb@add-remove-objects
+igt@api_intel_bb@bb-with-allocator
+igt@api_intel_bb@bb-with-vm
+igt@api_intel_bb@blit-noreloc-keep-cache-random
+igt@api_intel_bb@blit-noreloc-purge-cache-random
+igt@api_intel_bb@destroy-bb
+igt@api_intel_bb@object-noreloc-keep-cache-random
+igt@api_intel_bb@object-noreloc-keep-cache-simple
+igt@api_intel_bb@object-noreloc-purge-cache-random
+igt@api_intel_bb@object-noreloc-purge-cache-simple
+igt@api_intel_bb@object-reloc-keep-cache
+igt@api_intel_bb@object-reloc-purge-cache
+igt@api_intel_bb@purge-bb
+igt@api_intel_bb@reset-bb
+igt@gem_softpin@allocator-basic
+igt@gem_softpin@allocator-basic-reserve
+igt@gem_softpin@allocator-fork
+igt@gem_softpin@allocator-nopin
+igt@gem_softpin@allocator-nopin-reserve
-igt@api_intel_bb@check-canonical

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/index.html

[-- Attachment #1.2: Type: text/html, Size: 9170 bytes --]

[-- Attachment #2: Type: text/plain, Size: 154 bytes --]

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [igt-dev] ✓ Fi.CI.IGT: success for Introduce IGT allocator (rev33)
  2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
                   ` (39 preceding siblings ...)
  2021-04-02 10:16 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev33) Patchwork
@ 2021-04-02 11:13 ` Patchwork
  40 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-04-02 11:13 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 30249 bytes --]

== Series Details ==

Series: Introduce IGT allocator (rev33)
URL   : https://patchwork.freedesktop.org/series/82954/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9926_full -> IGTPW_5689_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/index.html

New tests
---------

  New tests have been introduced between CI_DRM_9926_full and IGTPW_5689_full:

### New IGT tests (47) ###

  * igt@api_intel_allocator@alloc-simple:
    - Statuses : 6 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@execbuf-with-allocator:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_allocator@fork-simple-once:
    - Statuses : 5 pass(s)
    - Exec time: [0.01, 0.04] s

  * igt@api_intel_allocator@fork-simple-stress:
    - Statuses : 5 pass(s)
    - Exec time: [5.41, 5.53] s

  * igt@api_intel_allocator@fork-simple-stress-signal:
    - Statuses : 6 pass(s)
    - Exec time: [5.41, 5.54] s

  * igt@api_intel_allocator@open-vm:
    - Statuses : 4 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@random-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@random-allocator@basic:
    - Statuses : 5 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@random-allocator@parallel-one:
    - Statuses : 5 pass(s)
    - Exec time: [0.00] s

  * igt@api_intel_allocator@random-allocator@print:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reloc-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@reloc-allocator@basic:
    - Statuses : 4 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@reloc-allocator@parallel-one:
    - Statuses : 4 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@reloc-allocator@print:
    - Statuses : 4 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reopen:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@api_intel_allocator@reopen-fork:
    - Statuses : 6 pass(s)
    - Exec time: [3.24, 3.30] s

  * igt@api_intel_allocator@reserve:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reserve-simple:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reuse:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@simple-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@simple-allocator@basic:
    - Statuses : 4 pass(s)
    - Exec time: [0.00] s

  * igt@api_intel_allocator@simple-allocator@parallel-one:
    - Statuses : 4 pass(s)
    - Exec time: [0.07, 0.13] s

  * igt@api_intel_allocator@simple-allocator@print:
    - Statuses : 4 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator@reserve:
    - Statuses : 4 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator@reuse:
    - Statuses : 4 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@standalone:
    - Statuses : 4 pass(s)
    - Exec time: [0.02, 0.06] s

  * igt@api_intel_allocator@two-level-inception:
    - Statuses : 5 pass(s)
    - Exec time: [5.41, 5.58] s

  * igt@api_intel_allocator@two-level-inception-interruptible:
    - Statuses : 5 pass(s)
    - Exec time: [5.41, 5.58] s

  * igt@api_intel_bb@add-remove-objects:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@api_intel_bb@bb-with-allocator:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@bb-with-vm:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.03] s

  * igt@api_intel_bb@blit-noreloc-keep-cache-random:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@blit-noreloc-purge-cache-random:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.03] s

  * igt@api_intel_bb@destroy-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.03] s

  * igt@api_intel_bb@object-noreloc-keep-cache-random:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@object-noreloc-keep-cache-simple:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@object-noreloc-purge-cache-random:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@object-noreloc-purge-cache-simple:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@object-reloc-keep-cache:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@api_intel_bb@object-reloc-purge-cache:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@purge-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@reset-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.00] s

  * igt@gem_softpin@allocator-basic:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.27] s

  * igt@gem_softpin@allocator-basic-reserve:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.27] s

  * igt@gem_softpin@allocator-fork:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 2.43] s

  * igt@gem_softpin@allocator-nopin:
    - Statuses : 4 pass(s)
    - Exec time: [0.07, 0.29] s

  * igt@gem_softpin@allocator-nopin-reserve:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.28] s

  

Known issues
------------

  Here are the changes found in IGTPW_5689_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@blit-noreloc-keep-cache:
    - shard-snb:          [PASS][1] -> [SKIP][2] ([fdo#109271])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-snb6/igt@api_intel_bb@blit-noreloc-keep-cache.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb7/igt@api_intel_bb@blit-noreloc-keep-cache.html

  * igt@drm_import_export@prime:
    - shard-glk:          [PASS][3] -> [INCOMPLETE][4] ([i915#2055] / [i915#2944])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk9/igt@drm_import_export@prime.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk6/igt@drm_import_export@prime.html

  * igt@gem_create@create-clear:
    - shard-iclb:         [PASS][5] -> [FAIL][6] ([i915#3160])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb8/igt@gem_create@create-clear.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb5/igt@gem_create@create-clear.html

  * igt@gem_create@create-massive:
    - shard-snb:          NOTRUN -> [DMESG-WARN][7] ([i915#3002]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb6/igt@gem_create@create-massive.html

  * igt@gem_ctx_param@set-priority-not-supported:
    - shard-iclb:         NOTRUN -> [SKIP][8] ([fdo#109314])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb4/igt@gem_ctx_param@set-priority-not-supported.html

  * igt@gem_ctx_persistence@legacy-engines-mixed:
    - shard-snb:          NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#1099]) +4 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb7/igt@gem_ctx_persistence@legacy-engines-mixed.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [PASS][10] -> [TIMEOUT][11] ([i915#2369] / [i915#3063])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-tglb5/igt@gem_eio@unwedge-stress.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb6/igt@gem_eio@unwedge-stress.html
    - shard-iclb:         NOTRUN -> [TIMEOUT][12] ([i915#2369] / [i915#2481] / [i915#3070])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb8/igt@gem_eio@unwedge-stress.html
    - shard-snb:          NOTRUN -> [FAIL][13] ([i915#3354])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb7/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-kbl:          [PASS][14] -> [FAIL][15] ([i915#2851])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-kbl2/igt@gem_exec_fair@basic-pace@rcs0.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@gem_exec_fair@basic-pace@rcs0.html
    - shard-tglb:         [PASS][16] -> [FAIL][17] ([i915#2842])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-tglb5/igt@gem_exec_fair@basic-pace@rcs0.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb8/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs0:
    - shard-kbl:          [PASS][18] -> [SKIP][19] ([fdo#109271])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-kbl2/igt@gem_exec_fair@basic-pace@vcs0.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@gem_exec_fair@basic-pace@vcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][20] ([i915#2842])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb4/igt@gem_exec_fair@basic-pace@vcs1.html
    - shard-kbl:          [PASS][21] -> [FAIL][22] ([i915#2842])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-kbl2/igt@gem_exec_fair@basic-pace@vcs1.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@gem_exec_fair@basic-pace@vcs1.html

  * igt@gem_exec_reloc@basic-many-active@rcs0:
    - shard-apl:          [PASS][23] -> [FAIL][24] ([i915#2389])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-apl6/igt@gem_exec_reloc@basic-many-active@rcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl1/igt@gem_exec_reloc@basic-many-active@rcs0.html

  * igt@gem_exec_reloc@basic-many-active@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][25] ([i915#2389])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@gem_exec_reloc@basic-many-active@vcs1.html

  * igt@gem_mmap_gtt@cpuset-big-copy-xy:
    - shard-iclb:         [PASS][26] -> [FAIL][27] ([i915#307])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb3/igt@gem_mmap_gtt@cpuset-big-copy-xy.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@gem_mmap_gtt@cpuset-big-copy-xy.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-apl:          NOTRUN -> [WARN][28] ([i915#2658])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl6/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-apl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3323])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl1/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@input-checking:
    - shard-apl:          NOTRUN -> [DMESG-WARN][30] ([i915#3002])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl8/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@process-exit-mmap@gtt:
    - shard-kbl:          NOTRUN -> [SKIP][31] ([fdo#109271] / [i915#1699]) +3 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@gem_userptr_blits@process-exit-mmap@gtt.html

  * igt@gem_userptr_blits@set-cache-level:
    - shard-snb:          NOTRUN -> [FAIL][32] ([i915#3324])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb2/igt@gem_userptr_blits@set-cache-level.html

  * igt@i915_suspend@forcewake:
    - shard-apl:          NOTRUN -> [DMESG-WARN][33] ([i915#180])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl6/igt@i915_suspend@forcewake.html

  * igt@kms_big_fb@linear-8bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][34] ([fdo#111614]) +1 similar issue
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb2/igt@kms_big_fb@linear-8bpp-rotate-90.html
    - shard-iclb:         NOTRUN -> [SKIP][35] ([fdo#110725] / [fdo#111614]) +1 similar issue
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb5/igt@kms_big_fb@linear-8bpp-rotate-90.html

  * igt@kms_big_joiner@basic:
    - shard-kbl:          NOTRUN -> [SKIP][36] ([fdo#109271] / [i915#2705])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl7/igt@kms_big_joiner@basic.html

  * igt@kms_chamelium@hdmi-edid-change-during-suspend:
    - shard-apl:          NOTRUN -> [SKIP][37] ([fdo#109271] / [fdo#111827]) +22 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl1/igt@kms_chamelium@hdmi-edid-change-during-suspend.html

  * igt@kms_chamelium@vga-hpd-without-ddc:
    - shard-snb:          NOTRUN -> [SKIP][38] ([fdo#109271] / [fdo#111827]) +23 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb6/igt@kms_chamelium@vga-hpd-without-ddc.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-75:
    - shard-kbl:          NOTRUN -> [SKIP][39] ([fdo#109271] / [fdo#111827]) +11 similar issues
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@kms_color_chamelium@pipe-a-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-a-ctm-limited-range:
    - shard-iclb:         NOTRUN -> [SKIP][40] ([fdo#109284] / [fdo#111827]) +4 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb7/igt@kms_color_chamelium@pipe-a-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-b-ctm-0-5:
    - shard-glk:          NOTRUN -> [SKIP][41] ([fdo#109271] / [fdo#111827]) +3 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk1/igt@kms_color_chamelium@pipe-b-ctm-0-5.html
    - shard-tglb:         NOTRUN -> [SKIP][42] ([fdo#109284] / [fdo#111827]) +3 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb2/igt@kms_color_chamelium@pipe-b-ctm-0-5.html

  * igt@kms_content_protection@lic:
    - shard-apl:          NOTRUN -> [TIMEOUT][43] ([i915#1319]) +1 similar issue
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl7/igt@kms_content_protection@lic.html

  * igt@kms_cursor_crc@pipe-c-cursor-128x128-random:
    - shard-glk:          [PASS][44] -> [FAIL][45] ([i915#54])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk1/igt@kms_cursor_crc@pipe-c-cursor-128x128-random.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk7/igt@kms_cursor_crc@pipe-c-cursor-128x128-random.html
    - shard-kbl:          NOTRUN -> [FAIL][46] ([i915#54])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl6/igt@kms_cursor_crc@pipe-c-cursor-128x128-random.html

  * igt@kms_cursor_crc@pipe-c-cursor-512x170-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([fdo#109279] / [i915#3359])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb6/igt@kms_cursor_crc@pipe-c-cursor-512x170-onscreen.html
    - shard-iclb:         NOTRUN -> [SKIP][48] ([fdo#109278] / [fdo#109279])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb8/igt@kms_cursor_crc@pipe-c-cursor-512x170-onscreen.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][49] ([i915#180])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl7/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-d-cursor-suspend:
    - shard-kbl:          NOTRUN -> [SKIP][50] ([fdo#109271]) +122 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl1/igt@kms_cursor_crc@pipe-d-cursor-suspend.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy:
    - shard-iclb:         NOTRUN -> [SKIP][51] ([fdo#109274] / [fdo#109278])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb5/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html

  * igt@kms_draw_crc@draw-method-rgb565-pwrite-ytiled:
    - shard-glk:          [PASS][52] -> [FAIL][53] ([i915#52] / [i915#54]) +3 similar issues
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk9/igt@kms_draw_crc@draw-method-rgb565-pwrite-ytiled.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk3/igt@kms_draw_crc@draw-method-rgb565-pwrite-ytiled.html

  * igt@kms_flip@2x-flip-vs-suspend:
    - shard-iclb:         NOTRUN -> [SKIP][54] ([fdo#109274]) +2 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@kms_flip@2x-flip-vs-suspend.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-hdmi-a1:
    - shard-glk:          [PASS][55] -> [FAIL][56] ([i915#79])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk8/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-hdmi-a1.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk1/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs:
    - shard-apl:          NOTRUN -> [SKIP][57] ([fdo#109271] / [i915#2672])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-msflip-blt:
    - shard-snb:          NOTRUN -> [SKIP][58] ([fdo#109271]) +394 similar issues
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-move:
    - shard-iclb:         NOTRUN -> [SKIP][59] ([fdo#109280]) +9 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb4/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render:
    - shard-tglb:         NOTRUN -> [SKIP][60] ([fdo#111825]) +7 similar issues
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb7/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-render:
    - shard-glk:          NOTRUN -> [SKIP][61] ([fdo#109271]) +9 similar issues
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk8/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-render.html

  * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence:
    - shard-apl:          NOTRUN -> [SKIP][62] ([fdo#109271] / [i915#533]) +1 similar issue
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl1/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence:
    - shard-iclb:         NOTRUN -> [SKIP][63] ([fdo#109278]) +1 similar issue
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes:
    - shard-kbl:          [PASS][64] -> [DMESG-WARN][65] ([i915#180] / [i915#533])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-kbl7/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl1/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
    - shard-apl:          NOTRUN -> [FAIL][66] ([fdo#108145] / [i915#265]) +3 similar issues
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl1/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][67] ([i915#265])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl6/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
    - shard-kbl:          NOTRUN -> [FAIL][68] ([i915#265])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl7/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
    - shard-kbl:          NOTRUN -> [SKIP][69] ([fdo#109271] / [i915#2733])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl3/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-3:
    - shard-kbl:          NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#658]) +5 similar issues
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl7/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-3.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4:
    - shard-apl:          NOTRUN -> [SKIP][71] ([fdo#109271] / [i915#658]) +5 similar issues
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl8/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5:
    - shard-iclb:         NOTRUN -> [SKIP][72] ([i915#658])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb5/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5.html

  * igt@kms_psr@psr2_basic:
    - shard-iclb:         [PASS][73] -> [SKIP][74] ([fdo#109441])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb2/igt@kms_psr@psr2_basic.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb6/igt@kms_psr@psr2_basic.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         NOTRUN -> [SKIP][75] ([fdo#109441])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb4/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@kms_vblank@pipe-d-ts-continuation-idle:
    - shard-apl:          NOTRUN -> [SKIP][76] ([fdo#109271]) +187 similar issues
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl1/igt@kms_vblank@pipe-d-ts-continuation-idle.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-kbl:          NOTRUN -> [SKIP][77] ([fdo#109271] / [i915#2437]) +1 similar issue
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl7/igt@kms_writeback@writeback-invalid-parameters.html

  * igt@nouveau_crc@pipe-b-source-outp-inactive:
    - shard-iclb:         NOTRUN -> [SKIP][78] ([i915#2530])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb5/igt@nouveau_crc@pipe-b-source-outp-inactive.html

  * igt@nouveau_crc@pipe-d-source-outp-inactive:
    - shard-iclb:         NOTRUN -> [SKIP][79] ([fdo#109278] / [i915#2530])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@nouveau_crc@pipe-d-source-outp-inactive.html

  * igt@prime_nv_api@nv_i915_import_twice_check_flink_name:
    - shard-iclb:         NOTRUN -> [SKIP][80] ([fdo#109291])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb3/igt@prime_nv_api@nv_i915_import_twice_check_flink_name.html

  * igt@sysfs_clients@busy:
    - shard-tglb:         NOTRUN -> [SKIP][81] ([i915#2994])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb3/igt@sysfs_clients@busy.html
    - shard-iclb:         NOTRUN -> [SKIP][82] ([i915#2994])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb6/igt@sysfs_clients@busy.html
    - shard-glk:          NOTRUN -> [SKIP][83] ([fdo#109271] / [i915#2994])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk3/igt@sysfs_clients@busy.html

  * igt@sysfs_clients@split-25:
    - shard-apl:          NOTRUN -> [SKIP][84] ([fdo#109271] / [i915#2994]) +2 similar issues
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl8/igt@sysfs_clients@split-25.html

  * igt@sysfs_clients@split-50:
    - shard-kbl:          NOTRUN -> [SKIP][85] ([fdo#109271] / [i915#2994]) +3 similar issues
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl6/igt@sysfs_clients@split-50.html

  
#### Possible fixes ####

  * igt@gem_ctx_persistence@smoketest:
    - shard-iclb:         [FAIL][86] ([i915#2896]) -> [PASS][87]
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb5/igt@gem_ctx_persistence@smoketest.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb7/igt@gem_ctx_persistence@smoketest.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-iclb:         [FAIL][88] ([i915#2842]) -> [PASS][89] +1 similar issue
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb1/igt@gem_exec_fair@basic-none-share@rcs0.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@gem_exec_fair@basic-none-share@rcs0.html
    - shard-glk:          [FAIL][90] ([i915#2842]) -> [PASS][91]
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk6/igt@gem_exec_fair@basic-none-share@rcs0.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk6/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglb:         [FAIL][92] ([i915#2842]) -> [PASS][93] +1 similar issue
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-tglb7/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-kbl:          [FAIL][94] ([i915#2842]) -> [PASS][95]
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-kbl2/igt@gem_exec_fair@basic-pace@vecs0.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_whisper@basic-contexts-all:
    - shard-glk:          [DMESG-WARN][96] ([i915#118] / [i915#95]) -> [PASS][97]
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk6/igt@gem_exec_whisper@basic-contexts-all.html
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk2/igt@gem_exec_whisper@basic-contexts-all.html

  * igt@gem_exec_whisper@basic-contexts-forked:
    - shard-iclb:         [INCOMPLETE][98] ([i915#1895]) -> [PASS][99]
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb7/igt@gem_exec_whisper@basic-contexts-forked.html
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb1/igt@gem_exec_whisper@basic-contexts-forked.html

  * igt@gem_workarounds@suspend-resume-context:
    - shard-apl:          [DMESG-WARN][100] ([i915#180]) -> [PASS][101]
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-apl1/igt@gem_workarounds@suspend-resume-context.html
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-apl7/igt@gem_workarounds@suspend-resume-context.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-snb:          [DMESG-WARN][102] -> [PASS][103]
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-snb2/igt@i915_module_load@reload-with-fault-injection.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-snb2/igt@i915_module_load@reload-with-fault-injection.html

  * igt@kms_draw_crc@draw-method-rgb565-pwrite-untiled:
    - shard-glk:          [FAIL][104] ([i915#52] / [i915#54]) -> [PASS][105] +3 similar issues
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-glk9/igt@kms_draw_crc@draw-method-rgb565-pwrite-untiled.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-glk2/igt@kms_draw_crc@draw-method-rgb565-pwrite-untiled.html

  * igt@kms_flip@flip-vs-suspend@c-dp1:
    - shard-kbl:          [DMESG-WARN][106] ([i915#180]) -> [PASS][107] +8 similar issues
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-kbl7/igt@kms_flip@flip-vs-suspend@c-dp1.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-kbl4/igt@kms_flip@flip-vs-suspend@c-dp1.html

  * igt@kms_psr2_su@frontbuffer:
    - shard-iclb:         [SKIP][108] ([fdo#109642] / [fdo#111068] / [i915#658]) -> [PASS][109]
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb7/igt@kms_psr2_su@frontbuffer.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@kms_psr2_su@frontbuffer.html

  * igt@kms_psr@psr2_cursor_plane_onoff:
    - shard-iclb:         [SKIP][110] ([fdo#109441]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb3/igt@kms_psr@psr2_cursor_plane_onoff.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@kms_psr@psr2_cursor_plane_onoff.html

  
#### Warnings ####

  * igt@i915_pm_rc6_residency@rc6-fence:
    - shard-iclb:         [WARN][112] ([i915#2681] / [i915#2684]) -> [WARN][113] ([i915#2684])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb8/igt@i915_pm_rc6_residency@rc6-fence.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb5/igt@i915_pm_rc6_residency@rc6-fence.html

  * igt@i915_pm_rc6_residency@rc6-idle:
    - shard-iclb:         [WARN][114] ([i915#2681] / [i915#2684]) -> [WARN][115] ([i915#1804] / [i915#2684])
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb8/igt@i915_pm_rc6_residency@rc6-idle.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb4/igt@i915_pm_rc6_residency@rc6-idle.html

  * igt@kms_psr2_sf@cursor-plane-update-sf:
    - shard-iclb:         [SKIP][116] ([i915#658]) -> [SKIP][117] ([i915#2920])
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb4/igt@kms_psr2_sf@cursor-plane-update-sf.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/shard-iclb2/igt@kms_psr2_sf@cursor-plane-update-sf.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-1:
    - shard-iclb:         [SKIP][118] ([i915#2920]) -> [SKIP][119] ([i915#658]) +1 similar issue
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9926/shard-iclb2/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
   [119]: https://intel-gfx-

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5689/index.html

[-- Attachment #1.2: Type: text/html, Size: 34609 bytes --]

[-- Attachment #2: Type: text/plain, Size: 154 bytes --]

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v30 02/39] lib/igt_list: Add igt_list_for_each_entry_safe_reverse
  2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 02/39] lib/igt_list: Add igt_list_for_each_entry_safe_reverse Zbigniew Kempczyński
@ 2021-04-08  7:44   ` Zbigniew Kempczyński
  0 siblings, 0 replies; 43+ messages in thread
From: Zbigniew Kempczyński @ 2021-04-08  7:44 UTC (permalink / raw)
  To: igt-dev

On Fri, Apr 02, 2021 at 11:38:07AM +0200, Zbigniew Kempczyński wrote:
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> 
> Add a safe version of reverse iteration over igt_list.
> 
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>

This looks ok.

Reviewed-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>

> ---
>  lib/igt_list.h | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/lib/igt_list.h b/lib/igt_list.h
> index cc93d7a0d..be63fd806 100644
> --- a/lib/igt_list.h
> +++ b/lib/igt_list.h
> @@ -108,6 +108,12 @@ bool igt_list_empty(const struct igt_list_head *head);
>  	     &pos->member != (head);					\
>  	     pos = igt_container_of((pos)->member.prev, pos, member))
>  
> +#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)	\
> +	for (pos = igt_container_of((head)->prev, pos, member),		\
> +	     tmp = igt_container_of((pos)->member.prev, tmp, member);	\
> +	     &pos->member != (head);					\
> +	     pos = tmp,							\
> +	     tmp = igt_container_of((pos)->member.prev, tmp, member))
>  
>  /* IGT custom helpers */
>  
> -- 
> 2.26.0
> 
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 43+ messages in thread
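
As a quick illustration of where the safe reverse iterator earns its keep,
unlinking entries while walking from the tail (struct entry, the list head
and the filter condition are invented for this sketch):

	struct entry {
		int value;
		struct igt_list_head link;
	};

	struct entry *e, *tmp;

	/* tmp caches the previous node, so the current entry can be
	 * unlinked and freed without breaking the walk.
	 */
	igt_list_for_each_entry_safe_reverse(e, tmp, &head, link) {
		if (e->value < 0) {
			igt_list_del(&e->link);
			free(e);
		}
	}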

end of thread, other threads:[~2021-04-08  7:44 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-02  9:38 [igt-dev] [PATCH i-g-t v30 00/39] Introduce IGT allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 01/39] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 02/39] lib/igt_list: Add igt_list_for_each_entry_safe_reverse Zbigniew Kempczyński
2021-04-08  7:44   ` Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 03/39] lib/igt_map: Adopt Mesa hash table Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 04/39] lib/igt_core: Track child process pid and tid Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 05/39] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 06/39] lib/intel_allocator_reloc: Add reloc allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 07/39] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 08/39] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 09/39] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 10/39] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 11/39] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 12/39] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 13/39] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 14/39] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 15/39] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 16/39] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 17/39] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 18/39] lib/intel_batchbuffer: Don't collect relocations for newer gens Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 19/39] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 20/39] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 21/39] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 22/39] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 23/39] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 24/39] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 25/39] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 26/39] tests/api_intel_bb: Check switching vm in intel-bb Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 27/39] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 28/39] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 29/39] tests/api_intel_allocator: Verify child can use its standalone allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 30/39] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 31/39] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 32/39] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 33/39] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 34/39] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 35/39] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 36/39] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 37/39] lib/intel_allocator: drop kill_children() Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 38/39] lib/intel_allocator: Add alloc function which allows passing strategy argument Zbigniew Kempczyński
2021-04-02  9:38 ` [igt-dev] [PATCH i-g-t v30 39/39] tests/api_intel_allocator: Check alloc with strategy API Zbigniew Kempczyński
2021-04-02 10:16 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev33) Patchwork
2021-04-02 11:13 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
