* [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator
@ 2021-03-17 14:45 Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
                   ` (36 more replies)
  0 siblings, 37 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

This series introduces the intel-allocator inside IGT.

...

v24: address review comments:
     - change gem_linear_blits to show how the allocator can be used when
       rewriting other tests to use/not use relocations
v25: changes:
     - fix a bug introduced in gem_linear_blits during the last review refactor
     - remove the api_intel_bb@last-page test so as not to hang the GPU; we
       can decide to bring this test back later
v26: resend for review (Jason)

Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>

Dominik Grzegorzek (5):
  lib/igt_list: igt_hlist implementation.
  lib/igt_map: Introduce igt_map
  lib/intel_allocator_simple: Add simple allocator
  tests/api_intel_allocator: Simple allocator test suite
  tests/gem_linear_blits: Use intel allocator

Zbigniew Kempczyński (30):
  lib/igt_list: Add igt_list_del_init()
  lib/igt_core: Track child process pid and tid
  lib/intel_allocator_reloc: Add reloc allocator
  lib/intel_allocator_random: Add random allocator
  lib/intel_allocator: Add intel_allocator core
  lib/intel_allocator: Try to stop smoothly instead of deinit
  lib/intel_allocator_msgchannel: Scale to 4k of parallel clients
  lib/intel_allocator: Separate allocator multiprocess start
  lib/intel_bufops: Change size from 32->64 bit
  lib/intel_bufops: Add init with handle and size function
  lib/intel_batchbuffer: Integrate intel_bb with allocator
  lib/intel_batchbuffer: Use relocations in intel-bb up to gen12
  lib/intel_batchbuffer: Create bb with strategy / vm ranges
  lib/intel_batchbuffer: Add tracking intel_buf to intel_bb
  lib/igt_fb: Initialize intel_buf with same size as fb
  tests/api_intel_bb: Remove check-canonical test
  tests/api_intel_bb: Modify test to verify intel_bb with allocator
  tests/api_intel_bb: Add compressed->compressed copy
  tests/api_intel_bb: Add purge-bb test
  tests/api_intel_bb: Add simple intel-bb which uses allocator
  tests/api_intel_bb: Use allocator in delta-check test
  tests/api_intel_bb: Check switching vm in intel-bb
  tests/api_intel_allocator: Add execbuf with allocator example
  tests/api_intel_allocator: Verify child can use its standalone
    allocator
  tests/gem_softpin: Verify allocator and execbuf pair work together
  tests/gem|kms: Remove intel_bb from fixture
  tests/gem_mmap_offset: Use intel_buf wrapper code instead direct
  tests/gem_ppgtt: Adopt test to use intel_bb with allocator
  tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator
  tests/perf.c: Remove buffer from batch

 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |    2 +
 lib/Makefile.sources                          |    8 +
 lib/igt_core.c                                |   20 +
 lib/igt_fb.c                                  |   10 +-
 lib/igt_list.c                                |   78 +
 lib/igt_list.h                                |   51 +-
 lib/igt_map.c                                 |  131 ++
 lib/igt_map.h                                 |  104 ++
 lib/intel_allocator.c                         | 1360 +++++++++++++++++
 lib/intel_allocator.h                         |  222 +++
 lib/intel_allocator_msgchannel.c              |  195 +++
 lib/intel_allocator_msgchannel.h              |  156 ++
 lib/intel_allocator_random.c                  |  188 +++
 lib/intel_allocator_reloc.c                   |  190 +++
 lib/intel_allocator_simple.c                  |  744 +++++++++
 lib/intel_aux_pgtable.c                       |   26 +-
 lib/intel_batchbuffer.c                       |  729 ++++++---
 lib/intel_batchbuffer.h                       |   54 +-
 lib/intel_bufops.c                            |   63 +-
 lib/intel_bufops.h                            |   20 +-
 lib/media_spin.c                              |    2 -
 lib/meson.build                               |    6 +
 tests/i915/api_intel_allocator.c              |  700 +++++++++
 tests/i915/api_intel_bb.c                     |  729 +++++++--
 tests/i915/gem_caching.c                      |   14 +-
 tests/i915/gem_linear_blits.c                 |   90 +-
 tests/i915/gem_mmap_offset.c                  |    4 +-
 tests/i915/gem_partial_pwrite_pread.c         |   40 +-
 tests/i915/gem_ppgtt.c                        |    7 +-
 tests/i915/gem_render_copy.c                  |   31 +-
 tests/i915/gem_render_copy_redux.c            |   24 +-
 tests/i915/gem_softpin.c                      |  194 +++
 tests/i915/perf.c                             |    9 +
 tests/intel-ci/fast-feedback.testlist         |    2 +
 tests/kms_big_fb.c                            |   12 +-
 tests/meson.build                             |    1 +
 36 files changed, 5734 insertions(+), 482 deletions(-)
 create mode 100644 lib/igt_map.c
 create mode 100644 lib/igt_map.h
 create mode 100644 lib/intel_allocator.c
 create mode 100644 lib/intel_allocator.h
 create mode 100644 lib/intel_allocator_msgchannel.c
 create mode 100644 lib/intel_allocator_msgchannel.h
 create mode 100644 lib/intel_allocator_random.c
 create mode 100644 lib/intel_allocator_reloc.c
 create mode 100644 lib/intel_allocator_simple.c
 create mode 100644 tests/i915/api_intel_allocator.c

-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev


* [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init()
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 16:40   ` Jason Ekstrand
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation Zbigniew Kempczyński
                   ` (35 subsequent siblings)
  36 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Add a helper function to delete and reinitialize a list element.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_list.c | 6 ++++++
 lib/igt_list.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/lib/igt_list.c b/lib/igt_list.c
index 5e30b19b6..37ae139c4 100644
--- a/lib/igt_list.c
+++ b/lib/igt_list.c
@@ -46,6 +46,12 @@ void igt_list_del(struct igt_list_head *elem)
 	elem->prev = NULL;
 }
 
+void igt_list_del_init(struct igt_list_head *elem)
+{
+	igt_list_del(elem);
+	IGT_INIT_LIST_HEAD(elem);
+}
+
 void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list)
 {
 	igt_list_del(elem);
diff --git a/lib/igt_list.h b/lib/igt_list.h
index dbf5f802c..cc93d7a0d 100644
--- a/lib/igt_list.h
+++ b/lib/igt_list.h
@@ -75,6 +75,7 @@ struct igt_list_head {
 void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
 void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
 void igt_list_del(struct igt_list_head *elem);
+void igt_list_del_init(struct igt_list_head *elem);
 void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list);
 void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
 int igt_list_length(const struct igt_list_head *head);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 16:43   ` Jason Ekstrand
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 03/35] lib/igt_map: Introduce igt_map Zbigniew Kempczyński
                   ` (34 subsequent siblings)
  36 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

A doubly-linked list implementation with a single-pointer list head,
based on the similar one in the kernel.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
 2 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/lib/igt_list.c b/lib/igt_list.c
index 37ae139c4..43200f9b3 100644
--- a/lib/igt_list.c
+++ b/lib/igt_list.c
@@ -22,6 +22,7 @@
  *
  */
 
+#include "assert.h"
 #include "igt_list.h"
 
 void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
@@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
 {
 	return head->next == head;
 }
+
+void igt_hlist_init(struct igt_hlist_node *h)
+{
+	h->next = NULL;
+	h->pprev = NULL;
+}
+
+int igt_hlist_unhashed(const struct igt_hlist_node *h)
+{
+	return !h->pprev;
+}
+
+int igt_hlist_empty(const struct igt_hlist_head *h)
+{
+	return !h->first;
+}
+
+static void __igt_hlist_del(struct igt_hlist_node *n)
+{
+	struct igt_hlist_node *next = n->next;
+	struct igt_hlist_node **pprev = n->pprev;
+
+	*pprev = next;
+	if (next)
+		next->pprev = pprev;
+}
+
+void igt_hlist_del(struct igt_hlist_node *n)
+{
+	__igt_hlist_del(n);
+	n->next = NULL;
+	n->pprev = NULL;
+}
+
+void igt_hlist_del_init(struct igt_hlist_node *n)
+{
+	if (!igt_hlist_unhashed(n)) {
+		__igt_hlist_del(n);
+		igt_hlist_init(n);
+	}
+}
+
+void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
+{
+	struct igt_hlist_node *first = h->first;
+
+	n->next = first;
+	if (first)
+		first->pprev = &n->next;
+	h->first = n;
+	n->pprev = &h->first;
+}
+
+void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
+{
+	assert(next);
+	n->pprev = next->pprev;
+	n->next = next;
+	next->pprev = &n->next;
+	*(n->pprev) = n;
+}
+
+void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
+{
+	n->next = prev->next;
+	prev->next = n;
+	n->pprev = &prev->next;
+
+	if (n->next)
+		n->next->pprev  = &n->next;
+}
diff --git a/lib/igt_list.h b/lib/igt_list.h
index cc93d7a0d..78e761e05 100644
--- a/lib/igt_list.h
+++ b/lib/igt_list.h
@@ -40,6 +40,10 @@
  * igt_list is a doubly-linked list where an instance of igt_list_head is a
  * head sentinel and has to be initialized.
  *
+ * igt_hlist is also a doubly-linked list, but with a single-pointer list
+ * head. Mostly useful for hash tables, where a two-pointer list head is
+ * too wasteful. You lose the ability to access the tail in O(1).
+ *
  * Example usage:
  *
  * |[<!-- language="C" -->
@@ -71,6 +75,13 @@ struct igt_list_head {
 	struct igt_list_head *next;
 };
 
+struct igt_hlist_head {
+	struct igt_hlist_node *first;
+};
+
+struct igt_hlist_node {
+	struct igt_hlist_node *next, **pprev;
+};
 
 void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
 void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
@@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
 int igt_list_length(const struct igt_list_head *head);
 bool igt_list_empty(const struct igt_list_head *head);
 
+void igt_hlist_init(struct igt_hlist_node *h);
+int igt_hlist_unhashed(const struct igt_hlist_node *h);
+int igt_hlist_empty(const struct igt_hlist_head *h);
+void igt_hlist_del(struct igt_hlist_node *n);
+void igt_hlist_del_init(struct igt_hlist_node *n);
+void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
+void igt_hlist_add_before(struct igt_hlist_node *n,
+			  struct igt_hlist_node *next);
+void igt_hlist_add_behind(struct igt_hlist_node *n,
+			  struct igt_hlist_node *prev);
+
 #define igt_container_of(ptr, sample, member)				\
 	(__typeof__(sample))((char *)(ptr) -				\
 				offsetof(__typeof__(*sample), member))
@@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
  * Safe against removal of the *current* list element. To achieve this it
  * requires an extra helper variable `tmp` with the same type as `pos`.
  */
-#define igt_list_for_each_entry_safe(pos, tmp, head, member)			\
+
+#define igt_list_for_each_entry_safe(pos, tmp, head, member)		\
 	for (pos = igt_container_of((head)->next, pos, member),		\
-	     tmp = igt_container_of((pos)->member.next, tmp, member); 	\
+	     tmp = igt_container_of((pos)->member.next, tmp, member);	\
 	     &pos->member != (head);					\
 	     pos = tmp,							\
 	     tmp = igt_container_of((pos)->member.next, tmp, member))
@@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
 	     &pos->member != (head);					\
 	     pos = igt_container_of((pos)->member.prev, pos, member))
 
+#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)		\
+	for (pos = igt_container_of((head)->prev, pos, member),		\
+	     tmp = igt_container_of((pos)->member.prev, tmp, member);	\
+	     &pos->member != (head);					\
+	     pos = tmp,							\
+	     tmp = igt_container_of((pos)->member.prev, tmp, member))
+
+#define igt_hlist_entry_safe(ptr, sample, member) \
+	({ typeof(ptr) ____ptr = (ptr); \
+	   ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
+	})
+
+#define igt_hlist_for_each_entry(pos, head, member)			\
+	for (pos = igt_hlist_entry_safe((head)->first, pos, member);	\
+	     pos;							\
+	     pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
+
+#define igt_hlist_for_each_entry_safe(pos, n, head, member)		\
+	for (pos = igt_hlist_entry_safe((head)->first, pos, member);	\
+	     pos && ({ n = pos->member.next; 1; });			\
+	     pos = igt_hlist_entry_safe(n, pos, member))
 
 /* IGT custom helpers */
 
@@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
 #define igt_list_last_entry(head, type, member) \
 	igt_container_of((head)->prev, (type), member)
 
+#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
+
 #endif /* IGT_LIST_H */
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 03/35] lib/igt_map: Introduce igt_map
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid Zbigniew Kempczyński
                   ` (33 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Implementation of a generic, non-thread-safe, dynamically sized hash map.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |   1 +
 lib/Makefile.sources                          |   2 +
 lib/igt_map.c                                 | 131 ++++++++++++++++++
 lib/igt_map.h                                 | 104 ++++++++++++++
 lib/meson.build                               |   1 +
 5 files changed, 239 insertions(+)
 create mode 100644 lib/igt_map.c
 create mode 100644 lib/igt_map.h

diff --git a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
index 9c9aa8f1d..bf5ac5428 100644
--- a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
+++ b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
@@ -33,6 +33,7 @@
     <xi:include href="xml/igt_kmod.xml"/>
     <xi:include href="xml/igt_kms.xml"/>
     <xi:include href="xml/igt_list.xml"/>
+    <xi:include href="xml/igt_map.xml"/>
     <xi:include href="xml/igt_pm.xml"/>
     <xi:include href="xml/igt_primes.xml"/>
     <xi:include href="xml/igt_rand.xml"/>
diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index 4f6389f8a..84fd7b49c 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -48,6 +48,8 @@ lib_source_list =	 	\
 	igt_infoframe.h		\
 	igt_list.c		\
 	igt_list.h		\
+	igt_map.c		\
+	igt_map.h		\
 	igt_matrix.c		\
 	igt_matrix.h		\
 	igt_params.c		\
diff --git a/lib/igt_map.c b/lib/igt_map.c
new file mode 100644
index 000000000..12909ed5a
--- /dev/null
+++ b/lib/igt_map.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt_map.h"
+#include "igt.h"
+#include "igt_x86.h"
+
+#define MIN_CAPACITY_BITS 2
+
+static bool igt_map_filled(struct igt_map *map)
+{
+	return IGT_MAP_CAPACITY(map) * 4 / 5 <= map->size;
+}
+
+static void igt_map_extend(struct igt_map *map)
+{
+	struct igt_hlist_head *new_heads;
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	uint32_t new_bits = map->capacity_bits + 1;
+	int i;
+
+	new_heads = calloc(1ULL << new_bits,
+			   sizeof(struct igt_hlist_head));
+	igt_assert(new_heads);
+
+	igt_map_for_each_safe(map, i, tmp, pos)
+		igt_hlist_add_head(&pos->link, &new_heads[map->hash_fn(pos->key, new_bits)]);
+
+	free(map->heads);
+	map->capacity_bits++;
+
+	map->heads = new_heads;
+}
+
+static bool equal_4bytes(const void *key1, const void *key2)
+{
+	const uint32_t *k1 = key1, *k2 = key2;
+	return *k1 == *k2;
+}
+
+/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
+#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL
+
+static inline uint64_t hash_64_4bytes(const void *val, unsigned int bits)
+{
+	uint64_t hash = *(uint32_t *)val;
+
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+	/* High bits are more random, so use them. */
+	return hash >> (64 - bits);
+}
+
+void __igt_map_init(struct igt_map *map, igt_map_equal_fn eq_fn,
+		    igt_map_hash_fn hash_fn, uint32_t initial_bits)
+{
+	map->equal_fn = eq_fn == NULL ? equal_4bytes : eq_fn;
+	map->hash_fn = hash_fn == NULL ? hash_64_4bytes : hash_fn;
+	map->capacity_bits = initial_bits > 0 ? initial_bits
+					      : MIN_CAPACITY_BITS;
+	map->heads = calloc(IGT_MAP_CAPACITY(map),
+			    sizeof(struct igt_hlist_head));
+
+	igt_assert(map->heads);
+	map->size = 0;
+}
+
+void igt_map_add(struct igt_map *map, const void *key, void *value)
+{
+	struct igt_map_entry *entry;
+
+	if (igt_map_filled(map))
+		igt_map_extend(map);
+
+	entry = malloc(sizeof(struct igt_map_entry));
+	entry->value = value;
+	entry->key = key;
+	igt_hlist_add_head(&entry->link,
+			   &map->heads[map->hash_fn(key, map->capacity_bits)]);
+	map->size++;
+}
+
+void *igt_map_del(struct igt_map *map, const void *key)
+{
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	void *val = NULL;
+
+	igt_map_for_each_possible_safe(map, pos, tmp, key) {
+		if (map->equal_fn(pos->key, key)) {
+			igt_hlist_del(&pos->link);
+			val = pos->value;
+			free(pos);
+		}
+	}
+	return val;
+}
+
+void *igt_map_find(struct igt_map *map, const void *key)
+{
+	struct igt_map_entry *pos = NULL;
+
+	igt_map_for_each_possible(map, pos, key)
+		if (map->equal_fn(pos->key, key))
+			break;
+
+	return pos ? pos->value : NULL;
+}
+
+void igt_map_free(struct igt_map *map)
+{
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	int i;
+
+	igt_map_for_each_safe(map, i, tmp, pos) {
+		igt_hlist_del(&pos->link);
+		free(pos);
+	}
+
+	free(map->heads);
+}
+
+bool igt_map_empty(struct igt_map *map)
+{
+	return map->size == 0;
+}
diff --git a/lib/igt_map.h b/lib/igt_map.h
new file mode 100644
index 000000000..d47afdbbe
--- /dev/null
+++ b/lib/igt_map.h
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef IGT_MAP_H
+#define IGT_MAP_H
+
+#include <stdint.h>
+#include "igt_list.h"
+
+/**
+ * SECTION:igt_map
+ * @short_description: a dynamic sized hashmap implementation
+ * @title: IGT Map
+ * @include: igt_map.h
+ *
+ * igt_map is a dynamic sized, non-thread-safe implementation of a hashmap.
+ * This map grows exponentially when it's over 80% filled. The structure allows
+ * indexing records by any key through the hash and equal functions.
+ * By default, hashmap compares and hashes the first four bytes of a key.
+ *
+ * Example usage:
+ *
+ * |[<!-- language="C" -->
+ * struct igt_map *map;
+ *
+ * struct record {
+ *      int foo;
+ *      uint32_t unique_identifier;
+ * };
+ *
+ * struct record r1, r2, *r_ptr;
+ *
+ * map = malloc(sizeof(*map));
+ * igt_map_init(map); // initialize the map with default parameters
+ * igt_map_add(map, &r1.unique_identifier, &r1);
+ * igt_map_add(map, &r2.unique_identifier, &r2);
+ *
+ * struct igt_map_entry *pos; int i;
+ * igt_map_for_each(map, i, pos) {
+ *      r_ptr = pos->value;
+ *      printf("key: %u, foo: %d\n", *(uint32_t*) pos->key, r_ptr->foo);
+ * }
+ *
+ * uint32_t key = r1.unique_identifier;
+ * r_ptr = igt_map_find(map, &key); // get r1
+ *
+ * r_ptr = igt_map_del(map, &r2.unique_identifier);
+ * if (r_ptr)
+ *      printf("record with key %u deleted\n", r_ptr->unique_identifier);
+ *
+ * igt_map_free(map);
+ * free(map);
+ * ]|
+ */
+
+typedef bool (*igt_map_equal_fn)(const void *key1, const void *key2);
+typedef uint64_t (*igt_map_hash_fn)(const void *val, unsigned int bits);
+struct igt_map {
+	uint32_t size;
+	uint32_t capacity_bits;
+	igt_map_equal_fn equal_fn;
+	igt_map_hash_fn hash_fn;
+	struct igt_hlist_head *heads;
+};
+
+struct igt_map_entry {
+	const void *key;
+	void *value;
+	struct igt_hlist_node link;
+};
+
+void __igt_map_init(struct igt_map *map, igt_map_equal_fn eq_fn,
+		  igt_map_hash_fn hash_fn, uint32_t initial_bits);
+void igt_map_add(struct igt_map *map, const void *key, void *value);
+void *igt_map_del(struct igt_map *map, const void *key);
+void *igt_map_find(struct igt_map *map, const void *key);
+void igt_map_free(struct igt_map *map);
+bool igt_map_empty(struct igt_map *map);
+
+#define igt_map_init(map) __igt_map_init(map, NULL, NULL, 8)
+
+#define IGT_MAP_CAPACITY(map) (1ULL << map->capacity_bits)
+
+#define igt_map_for_each_safe(map, bkt, tmp, obj) \
+	for ((bkt) = 0, obj = NULL; obj == NULL && (bkt) < IGT_MAP_CAPACITY(map); \
+	(bkt)++)\
+	igt_hlist_for_each_entry_safe(obj, tmp, &map->heads[bkt], link)
+
+#define igt_map_for_each(map, bkt, obj) \
+	for ((bkt) = 0, obj = NULL; obj == NULL && (bkt) < IGT_MAP_CAPACITY(map); \
+	(bkt)++)\
+	igt_hlist_for_each_entry(obj, &map->heads[bkt], link)
+
+#define igt_map_for_each_possible(map, obj, key) \
+	igt_hlist_for_each_entry(obj, \
+		&map->heads[map->hash_fn(key, map->capacity_bits)], link)
+
+#define igt_map_for_each_possible_safe(map, obj, tmp, key) \
+	igt_hlist_for_each_entry_safe(obj, tmp, \
+		&map->heads[map->hash_fn(key, map->capacity_bits)], link)
+
+#endif /* IGT_MAP_H */
diff --git a/lib/meson.build b/lib/meson.build
index 672b42062..7254faeac 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -61,6 +61,7 @@ lib_sources = [
 	'igt_core.c',
 	'igt_draw.c',
 	'igt_list.c',
+	'igt_map.c',
 	'igt_pm.c',
 	'igt_dummyload.c',
 	'uwildmat/uwildmat.c',
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (2 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 03/35] lib/igt_map: Introduce igt_map Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-18  9:07   ` Petri Latvala
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
                   ` (32 subsequent siblings)
  36 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Introduce variables which decrease the number of getpid()/gettid()
calls, especially for the allocator, which must know which process or
thread is acquiring addresses.

When a child is spawned using igt_fork() we control its initialization,
so child_pid can be prepared implicitly. Tracking child_tid requires our
intervention in the code, doing something like this:

if (child_tid == -1)
	child_tid = gettid();

child_tid is declared in TLS, so each thread starts with the variable
set to -1. This gives each thread its own "copy" and there's no risk of
using another thread's tid. For each forked child we reassign -1 to
child_tid to avoid using an already-set value inherited from the parent.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
---
 lib/igt_core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/igt_core.c b/lib/igt_core.c
index f9dfaa0dd..2b4182f16 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -306,6 +306,10 @@ int num_test_children;
 int test_children_sz;
 bool test_child;
 
+/* For allocator purposes */
+pid_t child_pid  = -1;
+__thread pid_t child_tid  = -1;
+
 enum {
 	/*
 	 * Let the first values be used by individual tests so options don't
@@ -2302,6 +2306,8 @@ bool __igt_fork(void)
 	case 0:
 		test_child = true;
 		pthread_mutex_init(&print_mutex, NULL);
+		child_pid = getpid();
+		child_tid = -1;
 		exit_handler_count = 0;
 		reset_helper_process_list();
 		oom_adjust_for_doom();
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (3 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 19:38   ` Jason Ekstrand
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 06/35] lib/intel_allocator_reloc: Add reloc allocator Zbigniew Kempczyński
                   ` (31 subsequent siblings)
  36 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Simple allocator borrowed from Mesa, adapted for IGT use.

By default we prefer allocating from the top of the vm address space
(so we can catch addressing issues proactively). When
intel_allocator_simple_create() is used we exclude the last page, as HW
tends to hang on the render engine when the full 3D pipeline is executed
from the last page. For more control over the vm range, the user can
specify it with intel_allocator_simple_create_full() (respecting the
gtt size).

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_simple.c | 744 +++++++++++++++++++++++++++++++++++
 1 file changed, 744 insertions(+)
 create mode 100644 lib/intel_allocator_simple.c

diff --git a/lib/intel_allocator_simple.c b/lib/intel_allocator_simple.c
new file mode 100644
index 000000000..cc207c8e9
--- /dev/null
+++ b/lib/intel_allocator_simple.c
@@ -0,0 +1,744 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "intel_allocator.h"
+#include "intel_bufops.h"
+#include "igt_map.h"
+
+/*
+ * We limit allocator space to avoid hang when batch would be
+ * pinned in the last page.
+ */
+#define RESERVED 4096
+
+/* Avoid compilation warning */
+struct intel_allocator *intel_allocator_simple_create(int fd);
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy);
+
+struct simple_vma_heap {
+	struct igt_list_head holes;
+
+	/* If true, simple_vma_heap_alloc will prefer high addresses
+	 *
+	 * Default is true.
+	 */
+	bool alloc_high;
+};
+
+struct simple_vma_hole {
+	struct igt_list_head link;
+	uint64_t offset;
+	uint64_t size;
+};
+
+struct intel_allocator_simple {
+	struct igt_map objects;
+	struct igt_map reserved;
+	struct simple_vma_heap heap;
+
+	uint64_t start;
+	uint64_t end;
+
+	/* statistics */
+	uint64_t total_size;
+	uint64_t allocated_size;
+	uint64_t allocated_objects;
+	uint64_t reserved_size;
+	uint64_t reserved_areas;
+};
+
+struct intel_allocator_record {
+	uint32_t handle;
+	uint64_t offset;
+	uint64_t size;
+};
+
+#define simple_vma_foreach_hole(_hole, _heap) \
+	igt_list_for_each_entry(_hole, &(_heap)->holes, link)
+
+#define simple_vma_foreach_hole_safe(_hole, _heap, _tmp) \
+	igt_list_for_each_entry_safe(_hole, _tmp,  &(_heap)->holes, link)
+
+#define simple_vma_foreach_hole_safe_rev(_hole, _heap, _tmp) \
+	igt_list_for_each_entry_safe_reverse(_hole, _tmp,  &(_heap)->holes, link)
+
+#define GEN8_GTT_ADDRESS_WIDTH 48
+#define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
+
+static void simple_vma_heap_validate(struct simple_vma_heap *heap)
+{
+	uint64_t prev_offset = 0;
+	struct simple_vma_hole *hole;
+
+	simple_vma_foreach_hole(hole, heap) {
+		igt_assert(hole->size > 0);
+
+		if (&hole->link == heap->holes.next) {
+		/* This must be the top-most hole.  Assert that,
+		 * if it overflows, it overflows to 0, i.e. 2^64.
+		 */
+			igt_assert(hole->size + hole->offset == 0 ||
+				   hole->size + hole->offset > hole->offset);
+		} else {
+		/* This is not the top-most hole so it must not overflow and,
+		 * in fact, must be strictly lower than the top-most hole.  If
+		 * hole->size + hole->offset == prev_offset, then we failed to
+		 * join holes during a simple_vma_heap_free.
+		 */
+			igt_assert(hole->size + hole->offset > hole->offset &&
+				   hole->size + hole->offset < prev_offset);
+		}
+		prev_offset = hole->offset;
+	}
+}
+
+
+static void simple_vma_heap_free(struct simple_vma_heap *heap,
+				 uint64_t offset, uint64_t size)
+{
+	struct simple_vma_hole *high_hole = NULL, *low_hole = NULL, *hole;
+	bool high_adjacent, low_adjacent;
+
+	/* Freeing something with a size of 0 is not valid. */
+	igt_assert(size > 0);
+
+	/* It's possible for offset + size to wrap around if we touch the top of
+	 * the 64-bit address space, but we cannot go any higher than 2^64.
+	 */
+	igt_assert(offset + size == 0 || offset + size > offset);
+
+	simple_vma_heap_validate(heap);
+
+	/* Find immediately higher and lower holes if they exist. */
+	simple_vma_foreach_hole(hole, heap) {
+		if (hole->offset <= offset) {
+			low_hole = hole;
+			break;
+		}
+		high_hole = hole;
+	}
+
+	if (high_hole)
+		igt_assert(offset + size <= high_hole->offset);
+	high_adjacent = high_hole && offset + size == high_hole->offset;
+
+	if (low_hole) {
+		igt_assert(low_hole->offset + low_hole->size > low_hole->offset);
+		igt_assert(low_hole->offset + low_hole->size <= offset);
+	}
+	low_adjacent = low_hole && low_hole->offset + low_hole->size == offset;
+
+	if (low_adjacent && high_adjacent) {
+		/* Merge the two holes */
+		low_hole->size += size + high_hole->size;
+		igt_list_del(&high_hole->link);
+		free(high_hole);
+	} else if (low_adjacent) {
+		/* Merge into the low hole */
+		low_hole->size += size;
+	} else if (high_adjacent) {
+		/* Merge into the high hole */
+		high_hole->offset = offset;
+		high_hole->size += size;
+	} else {
+		/* Neither hole is adjacent; make a new one */
+		hole = calloc(1, sizeof(*hole));
+		igt_assert(hole);
+
+		hole->offset = offset;
+		hole->size = size;
+		/* Add it after the high hole so we maintain high-to-low
+		 *  ordering
+		 */
+		if (high_hole)
+			igt_list_add(&hole->link, &high_hole->link);
+		else
+			igt_list_add(&hole->link, &heap->holes);
+	}
+
+	simple_vma_heap_validate(heap);
+}
+
+static void simple_vma_heap_init(struct simple_vma_heap *heap,
+				 uint64_t start, uint64_t size,
+				 enum allocator_strategy strategy)
+{
+	IGT_INIT_LIST_HEAD(&heap->holes);
+	simple_vma_heap_free(heap, start, size);
+
+	switch (strategy) {
+	case ALLOC_STRATEGY_LOW_TO_HIGH:
+		heap->alloc_high = false;
+		break;
+	case ALLOC_STRATEGY_HIGH_TO_LOW:
+	default:
+		heap->alloc_high = true;
+	}
+}
+
+static void simple_vma_heap_finish(struct simple_vma_heap *heap)
+{
+	struct simple_vma_hole *hole, *tmp;
+
+	simple_vma_foreach_hole_safe(hole, heap, tmp)
+		free(hole);
+}
+
+static void simple_vma_hole_alloc(struct simple_vma_hole *hole,
+				  uint64_t offset, uint64_t size)
+{
+	struct simple_vma_hole *high_hole;
+	uint64_t waste;
+
+	igt_assert(hole->offset <= offset);
+	igt_assert(hole->size >= offset - hole->offset + size);
+
+	if (offset == hole->offset && size == hole->size) {
+		/* Just get rid of the hole. */
+		igt_list_del(&hole->link);
+		free(hole);
+		return;
+	}
+
+	igt_assert(offset - hole->offset <= hole->size - size);
+	waste = (hole->size - size) - (offset - hole->offset);
+	if (waste == 0) {
+		/* We allocated at the top.  Shrink the hole down. */
+		hole->size -= size;
+		return;
+	}
+
+	if (offset == hole->offset) {
+		/* We allocated at the bottom. Shrink the hole up. */
+		hole->offset += size;
+		hole->size -= size;
+		return;
+	}
+
+	/* We allocated in the middle.  We need to split the old hole into two
+	 * holes, one high and one low.
+	 */
+	high_hole = calloc(1, sizeof(*hole));
+	igt_assert(high_hole);
+
+	high_hole->offset = offset + size;
+	high_hole->size = waste;
+
+	/* Adjust the hole to be the amount of space left at the bottom of the
+	 * original hole.
+	 */
+	hole->size = offset - hole->offset;
+
+	/* Place the new hole before the old hole so that the list is in order
+	 * from high to low.
+	 */
+	igt_list_add_tail(&high_hole->link, &hole->link);
+}
+
+static bool simple_vma_heap_alloc(struct simple_vma_heap *heap,
+				  uint64_t *offset, uint64_t size,
+				  uint64_t alignment)
+{
+	struct simple_vma_hole *hole, *tmp;
+	uint64_t misalign;
+
+	/* The caller is expected to reject zero-size allocations */
+	igt_assert(size > 0);
+	igt_assert(alignment > 0);
+
+	simple_vma_heap_validate(heap);
+
+	if (heap->alloc_high) {
+		simple_vma_foreach_hole_safe(hole, heap, tmp) {
+			if (size > hole->size)
+				continue;
+
+			/* Compute the offset as the highest address where a
+			 * chunk of the given size can be without going over
+			 * the top of the hole.
+			 *
+			 * This calculation is known to not overflow because
+			 * we know that hole->size + hole->offset can only
+			 * overflow to 0 and size > 0.
+			 */
+			*offset = (hole->size - size) + hole->offset;
+
+			/* Align the offset.  We align down and not up
+			 * because we are allocating from the top of the
+			 * hole and not the bottom.
+			 */
+			*offset = (*offset / alignment) * alignment;
+
+			if (*offset < hole->offset)
+				continue;
+
+			simple_vma_hole_alloc(hole, *offset, size);
+			simple_vma_heap_validate(heap);
+			return true;
+		}
+	} else {
+		simple_vma_foreach_hole_safe_rev(hole, heap, tmp) {
+			if (size > hole->size)
+				continue;
+
+			*offset = hole->offset;
+
+			/* Align the offset */
+			misalign = *offset % alignment;
+			if (misalign) {
+				uint64_t pad = alignment - misalign;
+
+				if (pad > hole->size - size)
+					continue;
+
+				*offset += pad;
+			}
+
+			simple_vma_hole_alloc(hole, *offset, size);
+			simple_vma_heap_validate(heap);
+			return true;
+		}
+	}
+
+	/* Failed to allocate */
+	return false;
+}
+
+static void intel_allocator_simple_get_address_range(struct intel_allocator *ial,
+						     uint64_t *startp,
+						     uint64_t *endp)
+{
+	struct intel_allocator_simple *ials = ial->priv;
+
+	if (startp)
+		*startp = ials->start;
+
+	if (endp)
+		*endp = ials->end;
+}
+
+static bool simple_vma_heap_alloc_addr(struct intel_allocator_simple *ials,
+				       uint64_t offset, uint64_t size)
+{
+	struct simple_vma_heap *heap = &ials->heap;
+	struct simple_vma_hole *hole, *tmp;
+
+	/* Allocating something with a size of 0 is not valid. */
+	igt_assert(size > 0);
+
+	/* It's possible for offset + size to wrap around if we touch the top of
+	 * the 64-bit address space, but we cannot go any higher than 2^64.
+	 */
+	igt_assert(offset + size == 0 || offset + size > offset);
+
+	/* Find the hole if one exists. */
+	simple_vma_foreach_hole_safe(hole, heap, tmp) {
+		if (hole->offset > offset)
+			continue;
+
+		/* Holes are ordered high-to-low so the first hole we find
+		 * with hole->offset <= offset is our hole.  If it's not big
+		 * enough to contain the requested range, the allocation
+		 * fails.
+		 */
+		igt_assert(hole->offset <= offset);
+		if (hole->size < offset - hole->offset + size)
+			return false;
+
+		simple_vma_hole_alloc(hole, offset, size);
+		return true;
+	}
+
+	/* We didn't find a suitable hole */
+	return false;
+}
+
+static uint64_t intel_allocator_simple_alloc(struct intel_allocator *ial,
+					     uint32_t handle, uint64_t size,
+					     uint64_t alignment)
+{
+	struct intel_allocator_record *rec;
+	struct intel_allocator_simple *ials;
+	uint64_t offset;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	igt_assert(handle);
+	alignment = alignment > 0 ? alignment : 1;
+	rec = igt_map_find(&ials->objects, &handle);
+	if (rec) {
+		offset = rec->offset;
+		igt_assert(rec->size == size);
+	} else {
+		igt_assert(simple_vma_heap_alloc(&ials->heap, &offset,
+						 size, alignment));
+		rec = malloc(sizeof(*rec));
+		igt_assert(rec);
+		rec->handle = handle;
+		rec->offset = offset;
+		rec->size = size;
+
+		igt_map_add(&ials->objects, &rec->handle, rec);
+		ials->allocated_objects++;
+		ials->allocated_size += size;
+	}
+
+	return offset;
+}
+
+static bool intel_allocator_simple_free(struct intel_allocator *ial, uint32_t handle)
+{
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	rec = igt_map_del(&ials->objects, &handle);
+	if (rec) {
+		simple_vma_heap_free(&ials->heap, rec->offset, rec->size);
+		ials->allocated_objects--;
+		ials->allocated_size -= rec->size;
+		free(rec);
+
+		return true;
+	}
+
+	return false;
+}
+
+static inline bool __same(const struct intel_allocator_record *rec,
+			  uint32_t handle, uint64_t size, uint64_t offset)
+{
+	return rec->handle == handle && rec->size == size &&
+			DECANONICAL(rec->offset) == DECANONICAL(offset);
+}
+
+static bool intel_allocator_simple_is_allocated(struct intel_allocator *ial,
+						uint32_t handle, uint64_t size,
+						uint64_t offset)
+{
+	struct intel_allocator_record *rec;
+	struct intel_allocator_simple *ials;
+	bool same = false;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	igt_assert(handle);
+
+	rec = igt_map_find(&ials->objects, &handle);
+	if (rec && __same(rec, handle, size, offset))
+		same = true;
+
+	return same;
+}
+
+static bool intel_allocator_simple_reserve(struct intel_allocator *ial,
+					   uint32_t handle,
+					   uint64_t start, uint64_t end)
+{
+	uint64_t size = end - start;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+	igt_assert(end > start || end == 0);
+
+	if (simple_vma_heap_alloc_addr(ials, start, size)) {
+		rec = malloc(sizeof(*rec));
+		igt_assert(rec);
+		rec->handle = handle;
+		rec->offset = start;
+		rec->size = size;
+
+		igt_map_add(&ials->reserved, &rec->offset, rec);
+
+		ials->reserved_areas++;
+		ials->reserved_size += rec->size;
+		return true;
+	}
+
+	igt_debug("Failed to reserve 0x%llx + 0x%llx\n", (long long)start, (long long)size);
+	return false;
+}
+
+static bool intel_allocator_simple_unreserve(struct intel_allocator *ial,
+					     uint32_t handle,
+					     uint64_t start, uint64_t end)
+{
+	uint64_t size = end - start;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+
+	igt_assert(end > start || end == 0);
+
+	rec = igt_map_find(&ials->reserved, &start);
+
+	if (!rec) {
+		igt_debug("Only reserved blocks can be unreserved\n");
+		return false;
+	}
+
+	if (rec->size != size) {
+		igt_debug("Only the whole block unreservation allowed\n");
+		return false;
+	}
+
+	if (rec->handle != handle) {
+		igt_debug("Handle %u doesn't match reservation handle: %u\n",
+			 rec->handle, handle);
+		return false;
+	}
+
+	igt_map_del(&ials->reserved, &start);
+
+	ials->reserved_areas--;
+	ials->reserved_size -= rec->size;
+	free(rec);
+	simple_vma_heap_free(&ials->heap, start, size);
+
+	return true;
+}
+
+static bool intel_allocator_simple_is_reserved(struct intel_allocator *ial,
+					       uint64_t start, uint64_t end)
+{
+	uint64_t size = end - start;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+
+	igt_assert(end > start || end == 0);
+
+	rec = igt_map_find(&ials->reserved, &start);
+
+	if (!rec)
+		return false;
+
+	if (rec->offset == start && rec->size == size)
+		return true;
+
+	return false;
+}
+
+static bool equal_8bytes(const void *key1, const void *key2)
+{
+	const uint64_t *k1 = key1, *k2 = key2;
+	return *k1 == *k2;
+}
+
+static void intel_allocator_simple_destroy(struct intel_allocator *ial)
+{
+	struct intel_allocator_simple *ials;
+	struct igt_map_entry *pos;
+	struct igt_map *map;
+	int i;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	simple_vma_heap_finish(&ials->heap);
+
+	map = &ials->objects;
+	igt_map_for_each(map, i, pos)
+		free(pos->value);
+	igt_map_free(&ials->objects);
+
+	map = &ials->reserved;
+	igt_map_for_each(map, i, pos)
+		free(pos->value);
+	igt_map_free(&ials->reserved);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_simple_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_simple *ials = ial->priv;
+
+	igt_debug("<ial: %p, fd: %d> objects: %" PRIu64
+		  ", reserved_areas: %" PRIu64 "\n",
+		  ial, ial->fd,
+		  ials->allocated_objects, ials->reserved_areas);
+
+	return !ials->allocated_objects && !ials->reserved_areas;
+}
+
+static void intel_allocator_simple_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_simple *ials;
+	struct simple_vma_hole *hole;
+	struct simple_vma_heap *heap;
+	struct igt_map_entry *pos;
+	struct igt_map *map;
+	uint64_t total_free = 0, allocated_size = 0, allocated_objects = 0;
+	uint64_t reserved_size = 0, reserved_areas = 0;
+	int i;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	heap = &ials->heap;
+
+	igt_info("intel_allocator_simple <ial: %p, fd: %d> on "
+		 "[0x%"PRIx64" : 0x%"PRIx64"]:\n", ial, ial->fd,
+		 ials->start, ials->end);
+
+	if (full) {
+		igt_info("holes:\n");
+		simple_vma_foreach_hole(hole, heap) {
+			igt_info("offset = %"PRIu64" (0x%"PRIx64"), "
+				 "size = %"PRIu64" (0x%"PRIx64")\n",
+				 hole->offset, hole->offset, hole->size,
+				 hole->size);
+			total_free += hole->size;
+		}
+		igt_assert(total_free <= ials->total_size);
+		igt_info("total_free: %" PRIx64
+			 ", total_size: %" PRIx64
+			 ", allocated_size: %" PRIx64
+			 ", reserved_size: %" PRIx64 "\n",
+			 total_free, ials->total_size, ials->allocated_size,
+			 ials->reserved_size);
+		igt_assert(total_free ==
+			   ials->total_size - ials->allocated_size - ials->reserved_size);
+
+		igt_info("objects:\n");
+		map = &ials->objects;
+		igt_map_for_each(map, i, pos) {
+			struct intel_allocator_record *rec = pos->value;
+
+			igt_info("handle = %d, offset = %"PRIu64" "
+				 "(0x%"PRIx64"), size = %"PRIu64" (0x%"PRIx64")\n",
+				 rec->handle, rec->offset, rec->offset,
+				 rec->size, rec->size);
+			allocated_objects++;
+			allocated_size += rec->size;
+		}
+		igt_assert(ials->allocated_size == allocated_size);
+		igt_assert(ials->allocated_objects == allocated_objects);
+
+		igt_info("reserved areas:\n");
+		map = &ials->reserved;
+		igt_map_for_each(map, i, pos) {
+			struct intel_allocator_record *rec = pos->value;
+
+			igt_info("offset = %"PRIu64" (0x%"PRIx64"), "
+				 "size = %"PRIu64" (0x%"PRIx64")\n",
+				 rec->offset, rec->offset,
+				 rec->size, rec->size);
+			reserved_areas++;
+			reserved_size += rec->size;
+		}
+		igt_assert(ials->reserved_areas == reserved_areas);
+		igt_assert(ials->reserved_size == reserved_size);
+	} else {
+		simple_vma_foreach_hole(hole, heap)
+			total_free += hole->size;
+	}
+
+	igt_info("free space: %"PRIu64"B (0x%"PRIx64") (%.2f%% full)\n"
+		 "allocated objects: %"PRIu64", reserved areas: %"PRIu64"\n",
+		 total_free, total_free,
+		 ((double) (ials->total_size - total_free) /
+		  (double) ials->total_size) * 100,
+		 ials->allocated_objects, ials->reserved_areas);
+}
+
+static struct intel_allocator *
+__intel_allocator_simple_create(int fd, uint64_t start, uint64_t end,
+				enum allocator_strategy strategy)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_simple *ials;
+
+	igt_debug("Using simple allocator\n");
+
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->get_address_range = intel_allocator_simple_get_address_range;
+	ial->alloc = intel_allocator_simple_alloc;
+	ial->free = intel_allocator_simple_free;
+	ial->is_allocated = intel_allocator_simple_is_allocated;
+	ial->reserve = intel_allocator_simple_reserve;
+	ial->unreserve = intel_allocator_simple_unreserve;
+	ial->is_reserved = intel_allocator_simple_is_reserved;
+	ial->destroy = intel_allocator_simple_destroy;
+	ial->is_empty = intel_allocator_simple_is_empty;
+	ial->print = intel_allocator_simple_print;
+	ials = ial->priv = malloc(sizeof(struct intel_allocator_simple));
+	igt_assert(ials);
+
+	igt_map_init(&ials->objects);
+	/* Reserved addresses hashtable is indexed by an offset */
+	__igt_map_init(&ials->reserved, equal_8bytes, NULL, 3);
+
+	ials->start = start;
+	ials->end = end;
+	ials->total_size = end - start;
+	simple_vma_heap_init(&ials->heap, ials->start, ials->total_size,
+			     strategy);
+
+	ials->allocated_size = 0;
+	ials->allocated_objects = 0;
+	ials->reserved_size = 0;
+	ials->reserved_areas = 0;
+
+	return ial;
+}
+
+struct intel_allocator *
+intel_allocator_simple_create(int fd)
+{
+	uint64_t gtt_size = gem_aperture_size(fd);
+
+	if (!gem_uses_full_ppgtt(fd))
+		gtt_size /= 2;
+	else
+		gtt_size -= RESERVED;
+
+	return __intel_allocator_simple_create(fd, 0, gtt_size,
+					       ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy)
+{
+	uint64_t gtt_size = gem_aperture_size(fd);
+
+	igt_assert(end <= gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		gtt_size /= 2;
+	igt_assert(end - start <= gtt_size);
+
+	return __intel_allocator_simple_create(fd, start, end, strategy);
+}
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 06/35] lib/intel_allocator_reloc: Add reloc allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (4 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 07/35] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
                   ` (30 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

As relocations won't be available on discrete GPUs we need to support
IGT with an allocator. Tests which have to cover all generations
would otherwise have to diverge their code with conditional
constructs, which in the long term is cumbersome and confusing. To
avoid that we use a pseudo-reloc allocator whose main task is to
return incremented offsets. This way we can skip the mentioned
conditionals and just acquire offsets from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_reloc.c | 190 ++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)
 create mode 100644 lib/intel_allocator_reloc.c

diff --git a/lib/intel_allocator_reloc.c b/lib/intel_allocator_reloc.c
new file mode 100644
index 000000000..abf9c30cd
--- /dev/null
+++ b/lib/intel_allocator_reloc.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "igt_rand.h"
+#include "intel_allocator.h"
+
+struct intel_allocator *intel_allocator_reloc_create(int fd);
+
+struct intel_allocator_reloc {
+	uint64_t bias;
+	uint32_t prng;
+	uint64_t gtt_size;
+	uint64_t start;
+	uint64_t end;
+	uint64_t offset;
+
+	/* statistics */
+	uint64_t allocated_objects;
+};
+
+static uint64_t get_bias(int fd)
+{
+	(void) fd;
+
+	return 256 << 10;
+}
+
+static void intel_allocator_reloc_get_address_range(struct intel_allocator *ial,
+						    uint64_t *startp,
+						    uint64_t *endp)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	if (startp)
+		*startp = ialr->start;
+
+	if (endp)
+		*endp = ialr->end;
+}
+
+static uint64_t intel_allocator_reloc_alloc(struct intel_allocator *ial,
+					    uint32_t handle, uint64_t size,
+					    uint64_t alignment)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+	uint64_t offset, aligned_offset;
+
+	(void) handle;
+
+	alignment = max(alignment, 4096);
+	aligned_offset = ALIGN(ialr->offset, alignment);
+
+	/* Check we won't exceed end */
+	if (aligned_offset + size > ialr->end)
+		aligned_offset = ALIGN(ialr->start, alignment);
+
+	offset = aligned_offset;
+	ialr->offset = offset + size;
+	ialr->allocated_objects++;
+
+	return offset;
+}
+
+static bool intel_allocator_reloc_free(struct intel_allocator *ial,
+				       uint32_t handle)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	(void) handle;
+
+	ialr->allocated_objects--;
+
+	return false;
+}
+
+static bool intel_allocator_reloc_is_allocated(struct intel_allocator *ial,
+					       uint32_t handle, uint64_t size,
+					       uint64_t offset)
+{
+	(void) ial;
+	(void) handle;
+	(void) size;
+	(void) offset;
+
+	return false;
+}
+
+static void intel_allocator_reloc_destroy(struct intel_allocator *ial)
+{
+	igt_assert(ial);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_reloc_reserve(struct intel_allocator *ial,
+					  uint32_t handle,
+					  uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_reloc_unreserve(struct intel_allocator *ial,
+					    uint32_t handle,
+					    uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_reloc_is_reserved(struct intel_allocator *ial,
+					      uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static void intel_allocator_reloc_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	(void) full;
+
+	igt_info("<ial: %p, fd: %d> allocated objects: %" PRIx64 "\n",
+		 ial, ial->fd, ialr->allocated_objects);
+}
+
+static bool intel_allocator_reloc_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_reloc *ialr = ial->priv;
+
+	return !ialr->allocated_objects;
+}
+
+#define RESERVED 4096
+struct intel_allocator *intel_allocator_reloc_create(int fd)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_reloc *ialr;
+
+	igt_debug("Using reloc allocator\n");
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->get_address_range = intel_allocator_reloc_get_address_range;
+	ial->alloc = intel_allocator_reloc_alloc;
+	ial->free = intel_allocator_reloc_free;
+	ial->is_allocated = intel_allocator_reloc_is_allocated;
+	ial->reserve = intel_allocator_reloc_reserve;
+	ial->unreserve = intel_allocator_reloc_unreserve;
+	ial->is_reserved = intel_allocator_reloc_is_reserved;
+	ial->destroy = intel_allocator_reloc_destroy;
+	ial->print = intel_allocator_reloc_print;
+	ial->is_empty = intel_allocator_reloc_is_empty;
+
+	ialr = ial->priv = calloc(1, sizeof(*ialr));
+	igt_assert(ial->priv);
+	ialr->prng = (uint32_t) to_user_pointer(ial);
+	ialr->gtt_size = gem_aperture_size(fd);
+	igt_debug("Gtt size: %" PRIu64 "\n", ialr->gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		ialr->gtt_size /= 2;
+
+	ialr->bias = ialr->offset = get_bias(fd);
+	ialr->start = ialr->bias;
+	ialr->end = ialr->gtt_size - RESERVED;
+
+	ialr->allocated_objects = 0;
+
+	return ial;
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 07/35] lib/intel_allocator_random: Add random allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (5 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 06/35] lib/intel_allocator_reloc: Add reloc allocator Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 08/35] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
                   ` (29 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Sometimes we want to experiment with addresses, so randomizing them
can help us a little.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_random.c | 188 +++++++++++++++++++++++++++++++++++
 1 file changed, 188 insertions(+)
 create mode 100644 lib/intel_allocator_random.c

diff --git a/lib/intel_allocator_random.c b/lib/intel_allocator_random.c
new file mode 100644
index 000000000..d804e3318
--- /dev/null
+++ b/lib/intel_allocator_random.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "igt_rand.h"
+#include "intel_allocator.h"
+
+struct intel_allocator *intel_allocator_random_create(int fd);
+
+struct intel_allocator_random {
+	uint64_t bias;
+	uint32_t prng;
+	uint64_t gtt_size;
+	uint64_t start;
+	uint64_t end;
+
+	/* statistics */
+	uint64_t allocated_objects;
+};
+
+static uint64_t get_bias(int fd)
+{
+	(void) fd;
+
+	return 256 << 10;
+}
+
+static void intel_allocator_random_get_address_range(struct intel_allocator *ial,
+						     uint64_t *startp,
+						     uint64_t *endp)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	if (startp)
+		*startp = ialr->start;
+
+	if (endp)
+		*endp = ialr->end;
+}
+
+static uint64_t intel_allocator_random_alloc(struct intel_allocator *ial,
+					     uint32_t handle, uint64_t size,
+					     uint64_t alignment)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+	uint64_t offset;
+
+	(void) handle;
+
+	/* randomize the address, we try to avoid relocations */
+	do {
+		offset = hars_petruska_f54_1_random64(&ialr->prng);
+		offset += ialr->bias; /* Keep the low 256k clear, for negative deltas */
+		offset &= ialr->gtt_size - 1;
+		offset &= ~(alignment - 1);
+	} while (offset + size > ialr->end);
+
+	ialr->allocated_objects++;
+
+	return offset;
+}
+
+static bool intel_allocator_random_free(struct intel_allocator *ial,
+					uint32_t handle)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	(void) handle;
+
+	ialr->allocated_objects--;
+
+	return false;
+}
+
+static bool intel_allocator_random_is_allocated(struct intel_allocator *ial,
+						uint32_t handle, uint64_t size,
+						uint64_t offset)
+{
+	(void) ial;
+	(void) handle;
+	(void) size;
+	(void) offset;
+
+	return false;
+}
+
+static void intel_allocator_random_destroy(struct intel_allocator *ial)
+{
+	igt_assert(ial);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_random_reserve(struct intel_allocator *ial,
+					   uint32_t handle,
+					   uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_random_unreserve(struct intel_allocator *ial,
+					     uint32_t handle,
+					     uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_random_is_reserved(struct intel_allocator *ial,
+					       uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static void intel_allocator_random_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	(void) full;
+
+	igt_info("<ial: %p, fd: %d> allocated objects: %" PRIx64 "\n",
+		 ial, ial->fd, ialr->allocated_objects);
+}
+
+static bool intel_allocator_random_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	return !ialr->allocated_objects;
+}
+
+#define RESERVED 4096
+struct intel_allocator *intel_allocator_random_create(int fd)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_random *ialr;
+
+	igt_debug("Using random allocator\n");
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->get_address_range = intel_allocator_random_get_address_range;
+	ial->alloc = intel_allocator_random_alloc;
+	ial->free = intel_allocator_random_free;
+	ial->is_allocated = intel_allocator_random_is_allocated;
+	ial->reserve = intel_allocator_random_reserve;
+	ial->unreserve = intel_allocator_random_unreserve;
+	ial->is_reserved = intel_allocator_random_is_reserved;
+	ial->destroy = intel_allocator_random_destroy;
+	ial->print = intel_allocator_random_print;
+	ial->is_empty = intel_allocator_random_is_empty;
+
+	ialr = ial->priv = calloc(1, sizeof(*ialr));
+	igt_assert(ial->priv);
+	ialr->prng = (uint32_t) to_user_pointer(ial);
+	ialr->gtt_size = gem_aperture_size(fd);
+	igt_debug("Gtt size: %" PRIu64 "\n", ialr->gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		ialr->gtt_size /= 2;
+
+	ialr->bias = get_bias(fd);
+	ialr->start = ialr->bias;
+	ialr->end = ialr->gtt_size - RESERVED;
+
+	ialr->allocated_objects = 0;
+
+	return ial;
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 08/35] lib/intel_allocator: Add intel_allocator core
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (6 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 07/35] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
                   ` (28 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

For discrete gens we have to stop using relocations when batch
buffers are submitted to the GPU. On cards which have ppgtt we can
use softpin, establishing addresses on our own.

We added a simple allocator (taken from Mesa; works on lists) and a
random allocator to exercise batches with different addresses. All of
that works for a single VM (context), so we have to add an additional
layer (intel_allocator) to support multiprocessing / multithreading.

For the main IGT process (also for threads created in it),
intel_allocator resolves addresses "locally", just by mutexing access
to the global allocator data (ctx/vm map). When fork() is in use,
children cannot establish addresses on their own and have to contact
the thread spawned within the main IGT process. Currently a SysV IPC
message queue was chosen as the communication channel between children
and the allocator thread. A child calls the same functions as the main
IGT process; only the communication path differs, with requests going
over the queue instead of acquiring addresses locally.

v2:

Add intel_allocator_open_full() to allow user pass vm range.
Add strategy: NONE, LOW_TO_HIGH, HIGH_TO_LOW passed to allocator backend.

v3:

Child is now able to use allocator directly as standalone. It only need
to call intel_allocator_init() to reinitialize appropriate structures.

v4:

Add pseudo allocator - INTEL_ALLOCATOR_RELOC which just increments
offsets to avoid unnecessary conditional code.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
---
 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |    1 +
 lib/Makefile.sources                          |    6 +
 lib/igt_core.c                                |   14 +
 lib/intel_allocator.c                         | 1319 +++++++++++++++++
 lib/intel_allocator.h                         |  220 +++
 lib/intel_allocator_msgchannel.c              |  187 +++
 lib/intel_allocator_msgchannel.h              |  156 ++
 lib/meson.build                               |    5 +
 8 files changed, 1908 insertions(+)
 create mode 100644 lib/intel_allocator.c
 create mode 100644 lib/intel_allocator.h
 create mode 100644 lib/intel_allocator_msgchannel.c
 create mode 100644 lib/intel_allocator_msgchannel.h

diff --git a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
index bf5ac5428..192d1df7a 100644
--- a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
+++ b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
@@ -43,6 +43,7 @@
     <xi:include href="xml/igt_vc4.xml"/>
     <xi:include href="xml/igt_vgem.xml"/>
     <xi:include href="xml/igt_x86.xml"/>
+    <xi:include href="xml/intel_allocator.xml"/>
     <xi:include href="xml/intel_batchbuffer.xml"/>
     <xi:include href="xml/intel_bufops.xml"/>
     <xi:include href="xml/intel_chipset.xml"/>
diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index 84fd7b49c..d11876cce 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -121,6 +121,12 @@ lib_source_list =	 	\
 	surfaceformat.h		\
 	sw_sync.c		\
 	sw_sync.h		\
+	intel_allocator.c	\
+	intel_allocator.h	\
+	intel_allocator_random.c	\
+	intel_allocator_simple.c	\
+	intel_allocator_msgchannel.c	\
+	intel_allocator_msgchannel.h	\
 	intel_aux_pgtable.c	\
 	intel_reg_map.c		\
 	intel_iosf.c		\
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 2b4182f16..6597acfaa 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -58,6 +58,7 @@
 #include <glib.h>
 
 #include "drmtest.h"
+#include "intel_allocator.h"
 #include "intel_chipset.h"
 #include "intel_io.h"
 #include "igt_debugfs.h"
@@ -1412,6 +1413,19 @@ __noreturn static void exit_subtest(const char *result)
 	}
 	num_test_children = 0;
 
+	/*
+	 * When a test completes - especially in a failure state - it can
+	 * leave allocated objects behind. The allocator is no exception;
+	 * it is a global IGT entity, and when a test allocates some ranges
+	 * and then fails, free/close will likely never be called (catching
+	 * potential failures and cleaning up before assertions is not
+	 * common practice in IGT).
+	 *
+	 * We therefore call intel_allocator_init() to prepare the allocator
+	 * infrastructure from scratch for each test. Init also removes
+	 * remnants from a previous allocator run (if any).
+	 */
+	intel_allocator_init();
+
 	if (!in_dynamic_subtest)
 		_igt_dynamic_tests_executed = -1;
 
diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
new file mode 100644
index 000000000..de86c57e9
--- /dev/null
+++ b/lib/intel_allocator.c
@@ -0,0 +1,1319 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <signal.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include "igt.h"
+#include "igt_map.h"
+#include "intel_allocator.h"
+#include "intel_allocator_msgchannel.h"
+
+//#define ALLOCDBG
+#ifdef ALLOCDBG
+#define alloc_info igt_info
+#define alloc_debug igt_debug
+static const char *reqtype_str[] = {
+	[REQ_STOP]		= "stop",
+	[REQ_OPEN]		= "open",
+	[REQ_OPEN_AS]		= "open as",
+	[REQ_CLOSE]		= "close",
+	[REQ_ADDRESS_RANGE]	= "address range",
+	[REQ_ALLOC]		= "alloc",
+	[REQ_FREE]		= "free",
+	[REQ_IS_ALLOCATED]	= "is allocated",
+	[REQ_RESERVE]		= "reserve",
+	[REQ_UNRESERVE]		= "unreserve",
+	[REQ_RESERVE_IF_NOT_ALLOCATED] = "reserve-ina",
+	[REQ_IS_RESERVED]	= "is reserved",
+};
+static inline const char *reqstr(enum reqtype request_type)
+{
+	igt_assert(request_type >= REQ_STOP && request_type <= REQ_IS_RESERVED);
+	return reqtype_str[request_type];
+}
+#else
+#define alloc_info(...) {}
+#define alloc_debug(...) {}
+#endif
+
+struct allocator {
+	int fd;
+	uint32_t ctx;
+	uint32_t vm;
+	struct intel_allocator *ial;
+	uint64_t handle;
+};
+
+struct handle_entry {
+	uint64_t handle;
+	struct allocator *al;
+};
+
+struct intel_allocator *intel_allocator_reloc_create(int fd);
+struct intel_allocator *intel_allocator_random_create(int fd);
+struct intel_allocator *intel_allocator_simple_create(int fd);
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy);
+
+/*
+ * Instead of trying to find the first empty handle, just get a new one.
+ * Assuming our counter is incremented 2^32 times per second (a 4GHz clock
+ * where handle assignment takes a single cycle), a 64-bit counter would
+ * wrap around after ~68 years.
+ *
+ *                   allocator
+ * handles           <fd, ctx>           intel allocator
+ * +-----+           +--------+          +-------------+
+ * |  1  +---------->+  fd: 3 +----+---->+ data: ...   |
+ * +-----+           | ctx: 1 |    |     | refcount: 2 |
+ * |  2  +------     +--------+    |     +-------------+
+ * +-----+     +---->+  fd: 3 +----+
+ * |  3  +--+        | ctx: 1 |          intel allocator
+ * +-----+  |        +--------+          +-------------+
+ * | ... |  +------->+  fd: 3 +--------->+ data: ...   |
+ * +-----+           | ctx: 2 |          | refcount: 1 |
+ * |  n  +--------+  +--------+          +-------------+
+ * +-----+        |
+ * | ... +-----+  |  allocator
+ * +-----+     |  |  <fd, vm>            intel allocator
+ * | ... +--+  |  |  +--------+          +-------------+
+ * +     +  |  |  +->+  fd: 3 +--+--+--->+ data: ...   |
+ *          |  |     |  vm: 1 |  |  |    | refcount: 3 |
+ *          |  |     +--------+  |  |    +-------------+
+ *          |  +---->+  fd: 3 +--+  |
+ *          |        |  vm: 1 |     |
+ *          |        +--------+     |
+ *          +------->+  fd: 3 +-----+
+ *                   |  vm: 2 |
+ *                   +--------+
+ */
+static _Atomic(uint64_t) next_handle;
+static struct igt_map *handles;
+static struct igt_map *ctx_map;
+static struct igt_map *vm_map;
+static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
+#define GET_MAP(vm) ((vm) ? vm_map : ctx_map)
+
+static bool multiprocess;
+static pthread_t allocator_thread;
+
+static bool warn_if_not_empty;
+
+/* For allocator purposes we need to track pid/tid */
+static pid_t allocator_pid = -1;
+extern pid_t child_pid;
+extern __thread pid_t child_tid;
+
+/*
+ * - for parent process we have child_pid == -1
+ * - for child which calls intel_allocator_init() allocator_pid == child_pid
+ */
+static inline bool is_same_process(void)
+{
+	return child_pid == -1 || allocator_pid == child_pid;
+}
+
+static struct msg_channel *channel;
+
+static int send_alloc_stop(struct msg_channel *msgchan)
+{
+	struct alloc_req req = {0};
+
+	req.request_type = REQ_STOP;
+
+	return msgchan->send_req(msgchan, &req);
+}
+
+static int send_req(struct msg_channel *msgchan, pid_t tid,
+		    struct alloc_req *request)
+{
+	request->tid = tid;
+	return msgchan->send_req(msgchan, request);
+}
+
+static int recv_req(struct msg_channel *msgchan, struct alloc_req *request)
+{
+	return msgchan->recv_req(msgchan, request);
+}
+
+static int send_resp(struct msg_channel *msgchan,
+		     pid_t tid, struct alloc_resp *response)
+{
+	response->tid = tid;
+	return msgchan->send_resp(msgchan, response);
+}
+
+static int recv_resp(struct msg_channel *msgchan,
+		     pid_t tid, struct alloc_resp *response)
+{
+	response->tid = tid;
+	return msgchan->recv_resp(msgchan, response);
+}
+
+static uint64_t __handle_create(struct allocator *al)
+{
+	struct handle_entry *h = malloc(sizeof(*h));
+
+	igt_assert(h);
+	h->handle = atomic_fetch_add(&next_handle, 1);
+	h->al = al;
+	igt_map_add(handles, h, h);
+
+	return h->handle;
+}
+
+static void __handle_destroy(uint64_t handle)
+{
+	struct handle_entry *h, he = { .handle = handle };
+
+	h = igt_map_find(handles, &he);
+	igt_assert(h);
+	igt_map_del(handles, &he);
+	free(h);
+}
+
+static struct allocator *__allocator_find(int fd, uint32_t ctx, uint32_t vm)
+{
+	struct allocator al = { .fd = fd, .ctx = ctx, .vm = vm };
+	struct igt_map *map = GET_MAP(vm);
+
+	return igt_map_find(map, &al);
+}
+
+static struct allocator *__allocator_find_by_handle(uint64_t handle)
+{
+	struct handle_entry *h, he = { .handle = handle };
+
+	h = igt_map_find(handles, &he);
+	if (!h)
+		return NULL;
+
+	return h->al;
+}
+
+static struct allocator *__allocator_create(int fd, uint32_t ctx, uint32_t vm,
+					    struct intel_allocator *ial)
+{
+	struct igt_map *map = GET_MAP(vm);
+	struct allocator *al = malloc(sizeof(*al));
+
+	igt_assert(al);
+	igt_assert(fd == ial->fd);
+	al->fd = fd;
+	al->ctx = ctx;
+	al->vm = vm;
+	al->ial = ial;
+
+	igt_map_add(map, al, al);
+
+	return al;
+}
+
+static void __allocator_destroy(struct allocator *al)
+{
+	struct igt_map *map = GET_MAP(al->vm);
+
+	igt_map_del(map, al);
+	free(al);
+}
+
+static int __allocator_get(struct allocator *al)
+{
+	struct intel_allocator *ial = al->ial;
+	int refcount;
+
+	refcount = atomic_fetch_add(&ial->refcount, 1);
+	igt_assert(refcount >= 0);
+
+	return refcount;
+}
+
+static bool __allocator_put(struct allocator *al)
+{
+	struct intel_allocator *ial = al->ial;
+	bool released = false;
+	int refcount;
+
+	refcount = atomic_fetch_sub(&ial->refcount, 1);
+	igt_assert(refcount >= 1);
+	if (refcount == 1) {
+		if (!ial->is_empty(ial) && warn_if_not_empty)
+			igt_warn("Allocator not clear before destroy!\n");
+
+		released = true;
+	}
+
+	return released;
+}
+
+static struct intel_allocator *intel_allocator_create(int fd,
+						      uint64_t start, uint64_t end,
+						      uint8_t allocator_type,
+						      uint8_t allocator_strategy)
+{
+	struct intel_allocator *ial = NULL;
+
+	switch (allocator_type) {
+	/*
+	 * A few words of explanation are required here.
+	 *
+	 * INTEL_ALLOCATOR_NONE allows keeping the information in the code
+	 * (intel-bb is an example) that we're not using the IGT allocator
+	 * itself and likely rely on relocations.
+	 * So trying to create a NONE allocator doesn't make sense and the
+	 * assertion below catches such invalid usage.
+	 */
+	case INTEL_ALLOCATOR_NONE:
+		igt_assert_f(allocator_type != INTEL_ALLOCATOR_NONE,
+			     "We cannot use NONE allocator\n");
+		break;
+	case INTEL_ALLOCATOR_RELOC:
+		ial = intel_allocator_reloc_create(fd);
+		break;
+	case INTEL_ALLOCATOR_RANDOM:
+		ial = intel_allocator_random_create(fd);
+		break;
+	case INTEL_ALLOCATOR_SIMPLE:
+		if (!start && !end)
+			ial = intel_allocator_simple_create(fd);
+		else
+			ial = intel_allocator_simple_create_full(fd, start, end,
+								 allocator_strategy);
+		break;
+	default:
+		igt_assert_f(ial, "Allocator type %d not implemented\n",
+			     allocator_type);
+		break;
+	}
+
+	igt_assert(ial);
+
+	ial->type = allocator_type;
+	ial->strategy = allocator_strategy;
+	pthread_mutex_init(&ial->mutex, NULL);
+
+	return ial;
+}
+
+static void intel_allocator_destroy(struct intel_allocator *ial)
+{
+	alloc_info("Destroying allocator (empty: %d)\n", ial->is_empty(ial));
+
+	ial->destroy(ial);
+}
+
+static struct allocator *allocator_open(int fd, uint32_t ctx, uint32_t vm,
+					uint64_t start, uint64_t end,
+					uint8_t allocator_type,
+					uint8_t allocator_strategy)
+{
+	struct intel_allocator *ial;
+	struct allocator *al;
+	const char *idstr = vm ? "vm" : "ctx";
+
+	al = __allocator_find(fd, ctx, vm);
+	if (!al) {
+		alloc_info("Allocator fd: %d, ctx: %u, vm: %u, <0x%llx : 0x%llx> "
+			    "not found, creating one\n",
+			    fd, ctx, vm, (long long) start, (long long) end);
+		ial = intel_allocator_create(fd, start, end, allocator_type,
+					     allocator_strategy);
+		al = __allocator_create(fd, ctx, vm, ial);
+	} else {
+		al = __allocator_create(al->fd, al->ctx, al->vm, al->ial);
+		ial = al->ial;
+	}
+	__allocator_get(al);
+	al->handle = __handle_create(al);
+
+	if (ial->type != allocator_type) {
+		igt_warn("Allocator type must be the same for fd/%s\n", idstr);
+		ial = NULL;
+	} else if (ial->strategy != allocator_strategy) {
+		igt_warn("Allocator strategy must be the same for fd/%s\n", idstr);
+		ial = NULL;
+	}
+
+	return al;
+}
+
+static struct allocator *allocator_open_as(struct allocator *base,
+					   uint32_t new_vm)
+{
+	struct allocator *al;
+
+	al = __allocator_create(base->fd, base->ctx, new_vm, base->ial);
+	__allocator_get(al);
+	al->handle = __handle_create(al);
+
+	return al;
+}
+
+static bool allocator_close(uint64_t ahnd)
+{
+	struct allocator *al;
+	bool released, is_empty = false;
+
+	al = __allocator_find_by_handle(ahnd);
+	if (!al) {
+		igt_warn("Cannot find handle: %llx\n", (long long) ahnd);
+		return false;
+	}
+
+	released = __allocator_put(al);
+	if (released) {
+		is_empty = al->ial->is_empty(al->ial);
+		intel_allocator_destroy(al->ial);
+	}
+	__allocator_destroy(al);
+	__handle_destroy(ahnd);
+
+	return is_empty;
+}
+
+static int send_req_recv_resp(struct msg_channel *msgchan,
+			      struct alloc_req *request,
+			      struct alloc_resp *response)
+{
+	int ret;
+
+	ret = send_req(msgchan, child_tid, request);
+	if (ret < 0) {
+		igt_warn("Error sending request [type: %d]: err = %d [%s]\n",
+			 request->request_type, errno, strerror(errno));
+
+		return ret;
+	}
+
+	ret = recv_resp(msgchan, child_tid, response);
+	if (ret < 0)
+		igt_warn("Error receiving response [type: %d]: err = %d [%s]\n",
+			 request->request_type, errno, strerror(errno));
+
+	/*
+	 * The main assumption: we receive a message whose size must be > 0.
+	 * If that is fulfilled we return 0 as success.
+	 */
+	if (ret > 0)
+		ret = 0;
+
+	return ret;
+}
+
+static int handle_request(struct alloc_req *req, struct alloc_resp *resp)
+{
+	int ret;
+
+	memset(resp, 0, sizeof(*resp));
+
+	if (is_same_process()) {
+		struct intel_allocator *ial;
+		struct allocator *al;
+		uint64_t start, end, size;
+		uint32_t ctx, vm;
+		bool allocated, reserved, unreserved;
+		/* Used when debug is on, so avoid compilation warnings */
+		(void) ctx;
+		(void) vm;
+
+		/*
+		 * Mutexing is only done on the allocator instance, not on
+		 * stop/open/close.
+		 */
+		if (req->request_type > REQ_CLOSE) {
+			/*
+			 * We have to lock map mutex because concurrent open
+			 * can lead to resizing the map.
+			 */
+			pthread_mutex_lock(&map_mutex);
+			al = __allocator_find_by_handle(req->allocator_handle);
+			pthread_mutex_unlock(&map_mutex);
+			igt_assert(al);
+
+			ial = al->ial;
+			igt_assert(ial);
+
+			pthread_mutex_lock(&ial->mutex);
+		}
+
+		switch (req->request_type) {
+		case REQ_STOP:
+			alloc_info("<stop>\n");
+			break;
+
+		case REQ_OPEN:
+			pthread_mutex_lock(&map_mutex);
+			al = allocator_open(req->open.fd,
+					    req->open.ctx, req->open.vm,
+					    req->open.start, req->open.end,
+					    req->open.allocator_type,
+					    req->open.allocator_strategy);
+			ret = atomic_load(&al->ial->refcount);
+			pthread_mutex_unlock(&map_mutex);
+
+			resp->response_type = RESP_OPEN;
+			resp->open.allocator_handle = al->handle;
+			alloc_info("<open> [tid: %ld] fd: %d, ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", alloc_type: %u, refcnt: %d->%d\n",
+				   (long) req->tid, req->open.fd, al->handle,
+				   req->open.ctx,
+				   req->open.vm, req->open.allocator_type,
+				   ret - 1, ret);
+			break;
+
+		case REQ_OPEN_AS:
+			/* lock first to avoid concurrent close */
+			pthread_mutex_lock(&map_mutex);
+
+			al = __allocator_find_by_handle(req->allocator_handle);
+			resp->response_type = RESP_OPEN_AS;
+
+			if (!al) {
+				alloc_info("<open as> [tid: %ld] ahnd: %" PRIx64
+					   " -> no handle\n",
+					   (long) req->tid, req->allocator_handle);
+				pthread_mutex_unlock(&map_mutex);
+				break;
+			}
+
+			if (!al->vm) {
+				alloc_info("<open as> [tid: %ld] ahnd: %" PRIx64
+					   " -> only open as for <fd, vm> is possible\n",
+					   (long) req->tid, req->allocator_handle);
+				pthread_mutex_unlock(&map_mutex);
+				break;
+			}
+
+			al = allocator_open_as(al, req->open_as.new_vm);
+			ret = atomic_load(&al->ial->refcount);
+			pthread_mutex_unlock(&map_mutex);
+
+			resp->response_type = RESP_OPEN_AS;
+			resp->open.allocator_handle = al->handle;
+			alloc_info("<open as> [tid: %ld] fd: %d, ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", alloc_type: %u, refcnt: %d->%d\n",
+				   (long) req->tid, al->fd, al->handle,
+				   al->ctx, al->vm, al->ial->type,
+				   ret - 1, ret);
+			break;
+
+		case REQ_CLOSE:
+			pthread_mutex_lock(&map_mutex);
+			al = __allocator_find_by_handle(req->allocator_handle);
+			resp->response_type = RESP_CLOSE;
+
+			if (!al) {
+				alloc_info("<close> [tid: %ld] ahnd: %" PRIx64
+					   " -> no handle\n",
+					   (long) req->tid, req->allocator_handle);
+				pthread_mutex_unlock(&map_mutex);
+				break;
+			}
+
+			resp->response_type = RESP_CLOSE;
+			ctx = al->ctx;
+			vm = al->vm;
+
+			ret = atomic_load(&al->ial->refcount);
+			resp->close.is_empty = allocator_close(req->allocator_handle);
+			pthread_mutex_unlock(&map_mutex);
+
+			alloc_info("<close> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", is_empty: %d, refcnt: %d->%d\n",
+				   (long) req->tid, req->allocator_handle,
+				   ctx, vm, resp->close.is_empty,
+				   ret, al ? ret - 1 : 0);
+
+			break;
+
+		case REQ_ADDRESS_RANGE:
+			resp->response_type = RESP_ADDRESS_RANGE;
+			ial->get_address_range(ial, &start, &end);
+			resp->address_range.start = start;
+			resp->address_range.end = end;
+			alloc_info("<address range> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64 "\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, start, end);
+			break;
+
+		case REQ_ALLOC:
+			resp->response_type = RESP_ALLOC;
+			resp->alloc.offset = ial->alloc(ial,
+							req->alloc.handle,
+							req->alloc.size,
+							req->alloc.alignment);
+			alloc_info("<alloc> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u, handle: %u"
+				   ", size: 0x%" PRIx64 ", offset: 0x%" PRIx64
+				   ", alignment: 0x%" PRIx64 "\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm,
+				   req->alloc.handle, req->alloc.size,
+				   resp->alloc.offset, req->alloc.alignment);
+			break;
+
+		case REQ_FREE:
+			resp->response_type = RESP_FREE;
+			resp->free.freed = ial->free(ial, req->free.handle);
+			alloc_info("<free> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", handle: %u, freed: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm,
+				   req->free.handle, resp->free.freed);
+			break;
+
+		case REQ_IS_ALLOCATED:
+			resp->response_type = RESP_IS_ALLOCATED;
+			allocated = ial->is_allocated(ial,
+						      req->is_allocated.handle,
+						      req->is_allocated.size,
+						      req->is_allocated.offset);
+			resp->is_allocated.allocated = allocated;
+			alloc_info("<is allocated> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", offset: 0x%" PRIx64
+				   ", allocated: %d\n", (long) req->tid,
+				   req->allocator_handle, al->ctx, al->vm,
+				   req->is_allocated.offset, allocated);
+			break;
+
+		case REQ_RESERVE:
+			resp->response_type = RESP_RESERVE;
+			reserved = ial->reserve(ial,
+						req->reserve.handle,
+						req->reserve.start,
+						req->reserve.end);
+			resp->reserve.reserved = reserved;
+			alloc_info("<reserve> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u, handle: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", reserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, req->reserve.handle,
+				   req->reserve.start, req->reserve.end, reserved);
+			break;
+
+		case REQ_UNRESERVE:
+			resp->response_type = RESP_UNRESERVE;
+			unreserved = ial->unreserve(ial,
+						    req->unreserve.handle,
+						    req->unreserve.start,
+						    req->unreserve.end);
+			resp->unreserve.unreserved = unreserved;
+			alloc_info("<unreserve> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u, handle: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", unreserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, req->unreserve.handle,
+				   req->unreserve.start, req->unreserve.end,
+				   unreserved);
+			break;
+
+		case REQ_IS_RESERVED:
+			resp->response_type = RESP_IS_RESERVED;
+			reserved = ial->is_reserved(ial,
+						    req->is_reserved.start,
+						    req->is_reserved.end);
+			resp->is_reserved.reserved = reserved;
+			alloc_info("<is reserved> [tid: %ld] ahnd: %" PRIx64
+				   ", ctx: %u, vm: %u"
+				   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", reserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm, req->is_reserved.start,
+				   req->is_reserved.end, reserved);
+			break;
+
+		case REQ_RESERVE_IF_NOT_ALLOCATED:
+			resp->response_type = RESP_RESERVE_IF_NOT_ALLOCATED;
+			size = req->reserve.end - req->reserve.start;
+
+			allocated = ial->is_allocated(ial, req->reserve.handle,
+						      size, req->reserve.start);
+			if (allocated) {
+				resp->reserve_if_not_allocated.allocated = allocated;
+				alloc_info("<reserve if not allocated> [tid: %ld] "
+					   "ahnd: %" PRIx64 ", ctx: %u, vm: %u"
+					   ", handle: %u, size: 0x%lx"
+					   ", start: 0x%" PRIx64 ", end: 0x%" PRIx64
+					   ", allocated: %d, reserved: %d\n",
+					   (long) req->tid, req->allocator_handle,
+					   al->ctx, al->vm, req->reserve.handle,
+					   (long) size, req->reserve.start,
+					   req->reserve.end, allocated, false);
+				break;
+			}
+
+			reserved = ial->reserve(ial,
+						req->reserve.handle,
+						req->reserve.start,
+						req->reserve.end);
+			resp->reserve_if_not_allocated.reserved = reserved;
+			alloc_info("<reserve if not allocated> [tid: %ld] "
+				   "ahnd: %" PRIx64 ", ctx: %u, vm: %u"
+				   ", handle: %u, start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", allocated: %d, reserved: %d\n",
+				   (long) req->tid, req->allocator_handle,
+				   al->ctx, al->vm,
+				   req->reserve.handle,
+				   req->reserve.start, req->reserve.end,
+				   false, reserved);
+			break;
+		}
+
+		if (req->request_type > REQ_CLOSE)
+			pthread_mutex_unlock(&ial->mutex);
+
+		return 0;
+	}
+
+	ret = send_req_recv_resp(channel, req, resp);
+
+	if (ret < 0)
+		exit(0);
+
+	return ret;
+}
+
+static void kill_children(int sig)
+{
+	sighandler_t old;
+
+	old = signal(sig, SIG_IGN);
+	igt_assert(old != SIG_ERR);
+	kill(-getpgrp(), sig);
+	igt_assert(signal(sig, old) != SIG_ERR);
+}
+
+static void *allocator_thread_loop(void *data)
+{
+	struct alloc_req req;
+	struct alloc_resp resp;
+	int ret;
+	(void) data;
+
+	alloc_info("Allocator pid: %ld, tid: %ld\n",
+		   (long) allocator_pid, (long) gettid());
+	alloc_info("Entering allocator loop\n");
+
+	while (1) {
+		ret = recv_req(channel, &req);
+
+		if (ret == -1) {
+			igt_warn("Error receiving request in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+			kill_children(SIGINT);
+			return (void *) -1;
+		}
+
+		/* Fake message to stop the thread */
+		if (req.request_type == REQ_STOP) {
+			alloc_info("<stop request>\n");
+			break;
+		}
+
+		ret = handle_request(&req, &resp);
+		if (ret) {
+			igt_warn("Error handling request in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+			break;
+		}
+
+		ret = send_resp(channel, req.tid, &resp);
+		if (ret) {
+			igt_warn("Error sending response in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+
+			kill_children(SIGINT);
+			return (void *) -1;
+		}
+	}
+
+	return NULL;
+}
+
+/**
+ * intel_allocator_multiprocess_start:
+ *
+ * Turns on intel_allocator multiprocess mode, which means all allocations
+ * from child processes are performed in a separate thread within the main
+ * igt process. Children are aware of the situation and use an interprocess
+ * communication channel to send/receive messages (open, close, alloc,
+ * free, ...) to/from the allocator thread.
+ *
+ * Must be used when you want to use the allocator in non-single-process
+ * code. All allocations in threads spawned in the main igt process are
+ * handled by mutexing, not by sending/receiving messages to/from the
+ * allocator thread.
+ *
+ * Note: this destroys all previously created allocators and their content.
+ */
+void intel_allocator_multiprocess_start(void)
+{
+	alloc_info("allocator multiprocess start\n");
+
+	igt_assert_f(child_pid == -1,
+		     "Allocator thread can be spawned only in main IGT process\n");
+	intel_allocator_init();
+
+	multiprocess = true;
+	channel->init(channel);
+
+	pthread_create(&allocator_thread, NULL,
+		       allocator_thread_loop, NULL);
+}
+
+/**
+ * intel_allocator_multiprocess_stop:
+ *
+ * Turns off intel_allocator multiprocess mode, i.e. stops the allocator
+ * thread and deinitializes its data.
+ */
+void intel_allocator_multiprocess_stop(void)
+{
+	alloc_info("allocator multiprocess stop\n");
+
+	if (multiprocess) {
+		send_alloc_stop(channel);
+		/* Deinit, this should stop all blocked syscalls, if any */
+		channel->deinit(channel);
+		pthread_join(allocator_thread, NULL);
+		/* But we cannot be sure a child won't get stuck */
+		kill_children(SIGINT);
+		igt_waitchildren_timeout(5, "Stopping children");
+		multiprocess = false;
+	}
+}
+
+static uint64_t __intel_allocator_open_full(int fd, uint32_t ctx,
+					    uint32_t vm,
+					    uint64_t start, uint64_t end,
+					    uint8_t allocator_type,
+					    enum allocator_strategy strategy)
+{
+	struct alloc_req req = { .request_type = REQ_OPEN,
+				 .open.fd = fd,
+				 .open.ctx = ctx,
+				 .open.vm = vm,
+				 .open.start = start,
+				 .open.end = end,
+				 .open.allocator_type = allocator_type,
+				 .open.allocator_strategy = strategy };
+	struct alloc_resp resp;
+
+	/* Get child_tid only once at open() */
+	if (child_tid == -1)
+		child_tid = gettid();
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.open.allocator_handle);
+	igt_assert(resp.response_type == RESP_OPEN);
+
+	return resp.open.allocator_handle;
+}
+
+/**
+ * intel_allocator_open_full:
+ * @fd: i915 descriptor
+ * @ctx: context
+ * @start: start of the vm range
+ * @end: end of the vm range
+ * @allocator_type: one of INTEL_ALLOCATOR_* define
+ * @strategy: passed to the allocator to define the strategy (like order
+ * of allocation, see notes below).
+ *
+ * Opens an allocator instance within the <@start, @end) vm range for the
+ * given @fd and @ctx and returns its handle. If an allocator for such a
+ * pair doesn't exist, it is created with refcount = 1; parallel opens of
+ * the same pair bump the refcount.
+ *
+ * Returns: unique handle to the currently opened allocator.
+ *
+ * Notes:
+ * Strategy is generally used internally by the underlying allocator:
+ *
+ * For SIMPLE allocator:
+ * - ALLOC_STRATEGY_HIGH_TO_LOW means topmost addresses are allocated first,
+ * - ALLOC_STRATEGY_LOW_TO_HIGH opposite, allocation starts from lowest
+ *   addresses.
+ *
+ * For RANDOM allocator:
+ * - no strategy is currently implemented.
+ */
+uint64_t intel_allocator_open_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   uint8_t allocator_type,
+				   enum allocator_strategy strategy)
+{
+	return __intel_allocator_open_full(fd, ctx, 0, start, end,
+					   allocator_type, strategy);
+}
+
+uint64_t intel_allocator_open_vm_full(int fd, uint32_t vm,
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy)
+{
+	igt_assert(vm != 0);
+	return __intel_allocator_open_full(fd, 0, vm, start, end,
+					   allocator_type, strategy);
+}
+
+/**
+ * intel_allocator_open:
+ * @fd: i915 descriptor
+ * @ctx: context
+ * @allocator_type: one of INTEL_ALLOCATOR_* define
+ *
+ * Opens an allocator instance for the given @fd and @ctx and returns
+ * its handle. If an allocator for such a pair doesn't exist, it is created
+ * with refcount = 1; parallel opens of the same pair bump the refcount.
+ *
+ * Returns: unique handle to the currently opened allocator.
+ *
+ * Notes: we pass ALLOC_STRATEGY_HIGH_TO_LOW as the default; playing with
+ * higher addresses makes it easier to find addressing issues (like passing
+ * non-canonical offsets, which won't be caught unless bit 47 is set).
+ */
+uint64_t intel_allocator_open(int fd, uint32_t ctx, uint8_t allocator_type)
+{
+	return intel_allocator_open_full(fd, ctx, 0, 0, allocator_type,
+					 ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+uint64_t intel_allocator_open_vm(int fd, uint32_t vm, uint8_t allocator_type)
+{
+	return intel_allocator_open_vm_full(fd, vm, 0, 0, allocator_type,
+					    ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+uint64_t intel_allocator_open_vm_as(uint64_t allocator_handle, uint32_t new_vm)
+{
+	struct alloc_req req = { .request_type = REQ_OPEN_AS,
+				 .allocator_handle = allocator_handle,
+				 .open_as.new_vm = new_vm };
+	struct alloc_resp resp;
+
+	/* Get child_tid only once at open() */
+	if (child_tid == -1)
+		child_tid = gettid();
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.open_as.allocator_handle);
+	igt_assert(resp.response_type == RESP_OPEN_AS);
+
+	return resp.open_as.allocator_handle;
+}
+
+/**
+ * intel_allocator_close:
+ * @allocator_handle: handle to the allocator that will be closed
+ *
+ * Decreases the allocator refcount for the given @allocator_handle.
+ * When the refcount reaches zero the allocator is closed (destroyed) and
+ * all allocated / reserved areas are freed.
+ *
+ * Returns: true if closed allocator was empty, false otherwise.
+ */
+bool intel_allocator_close(uint64_t allocator_handle)
+{
+	struct alloc_req req = { .request_type = REQ_CLOSE,
+				 .allocator_handle = allocator_handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_CLOSE);
+
+	return resp.close.is_empty;
+}
+
+/**
+ * intel_allocator_get_address_range:
+ * @allocator_handle: handle to an allocator
+ * @startp: pointer to the variable where function writes starting offset
+ * @endp: pointer to the variable where function writes ending offset
+ *
+ * Fills @startp and @endp with, respectively, the starting and ending
+ * offset of the allocator's working virtual address space range.
+ *
+ * Note: allocator working ranges can differ depending on the device or
+ * the allocator type, so before reserving a specific offset it is good
+ * practice to ensure the address is within the accepted range.
+ */
+void intel_allocator_get_address_range(uint64_t allocator_handle,
+				       uint64_t *startp, uint64_t *endp)
+{
+	struct alloc_req req = { .request_type = REQ_ADDRESS_RANGE,
+				 .allocator_handle = allocator_handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_ADDRESS_RANGE);
+
+	if (startp)
+		*startp = resp.address_range.start;
+
+	if (endp)
+		*endp = resp.address_range.end;
+}
+
+/**
+ * intel_allocator_alloc:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @alignment: determines object alignment
+ *
+ * Finds and returns the most suitable offset with the given @alignment
+ * for an object of @size identified by @handle.
+ *
+ * Returns: the currently assigned address for the given object. If the
+ * object was already allocated, returns the same address.
+ */
+uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t alignment)
+{
+	struct alloc_req req = { .request_type = REQ_ALLOC,
+				 .allocator_handle = allocator_handle,
+				 .alloc.handle = handle,
+				 .alloc.size = size,
+				 .alloc.alignment = alignment };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_ALLOC);
+
+	return resp.alloc.offset;
+}
+
+/**
+ * intel_allocator_free:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object to be freed
+ *
+ * Frees the object identified by @handle in the allocator, which makes
+ * its offset allocable again.
+ *
+ * Note: reserved objects can only be freed by the
+ * #intel_allocator_unreserve function.
+ *
+ * Returns: true if the object was successfully freed, otherwise false.
+ */
+bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle)
+{
+	struct alloc_req req = { .request_type = REQ_FREE,
+				 .allocator_handle = allocator_handle,
+				 .free.handle = handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_FREE);
+
+	return resp.free.freed;
+}
+
+/**
+ * intel_allocator_is_allocated:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function checks whether the object identified by the @handle and @size
+ * is allocated at the @offset.
+ *
+ * Returns: true if the object is currently allocated at the @offset,
+ * otherwise false.
+ */
+bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
+				  uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_IS_ALLOCATED,
+				 .allocator_handle = allocator_handle,
+				 .is_allocated.handle = handle,
+				 .is_allocated.size = size,
+				 .is_allocated.offset = offset };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_IS_ALLOCATED);
+
+	return resp.is_allocated.allocated;
+}
+
+/**
+ * intel_allocator_reserve:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function reserves space of @size starting at @offset.
+ * Optionally @handle can be passed to mark that the space belongs to a
+ * specific object; otherwise pass -1.
+ *
+ * Note: reserved space is identified by offset and size, not by handle,
+ * so a single object can have multiple reserved spaces under its handle.
+ *
+ * Returns: true if space is successfully reserved, otherwise false.
+ */
+bool intel_allocator_reserve(uint64_t allocator_handle, uint32_t handle,
+			     uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_RESERVE,
+				 .allocator_handle = allocator_handle,
+				 .reserve.handle = handle,
+				 .reserve.start = offset,
+				 .reserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_RESERVE);
+
+	return resp.reserve.reserved;
+}
+
+/**
+ * intel_allocator_unreserve:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function unreserves the space identified by @offset, @size and @handle.
+ *
+ * Note: @handle, @size and @offset have to match those used in the
+ * reservation, i.e. a call with the same offset but a smaller size will fail.
+ *
+ * Returns: true if the space is successfully unreserved, otherwise false.
+ */
+bool intel_allocator_unreserve(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_UNRESERVE,
+				 .allocator_handle = allocator_handle,
+				 .unreserve.handle = handle,
+				 .unreserve.start = offset,
+				 .unreserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_UNRESERVE);
+
+	return resp.unreserve.unreserved;
+}
+
+/**
+ * intel_allocator_is_reserved:
+ * @allocator_handle: handle to an allocator
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function checks whether the space of @size starting at @offset is
+ * currently under reservation.
+ *
+ * Note: @size and @offset have to match those used in the reservation,
+ * i.e. a check with the same offset but a smaller size will fail.
+ *
+ * Returns: true if the space is reserved, otherwise false.
+ */
+bool intel_allocator_is_reserved(uint64_t allocator_handle,
+				 uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_IS_RESERVED,
+				 .allocator_handle = allocator_handle,
+				 .is_reserved.start = offset,
+				 .is_reserved.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_IS_RESERVED);
+
+	return resp.is_reserved.reserved;
+}
+
+/**
+ * intel_allocator_reserve_if_not_allocated:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ * @is_allocatedp: if not NULL the object's allocation status (true/false)
+ * is written there
+ *
+ * Function checks whether the object identified by @handle and @size
+ * is allocated at @offset and writes the result to @is_allocatedp.
+ * If the object is not allocated, the space is reserved at the given @offset.
+ *
+ * Returns: true if the space for an object was reserved, otherwise false.
+ */
+bool intel_allocator_reserve_if_not_allocated(uint64_t allocator_handle,
+					      uint32_t handle,
+					      uint64_t size, uint64_t offset,
+					      bool *is_allocatedp)
+{
+	struct alloc_req req = { .request_type = REQ_RESERVE_IF_NOT_ALLOCATED,
+				 .allocator_handle = allocator_handle,
+				 .reserve.handle = handle,
+				 .reserve.start = offset,
+				 .reserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_RESERVE_IF_NOT_ALLOCATED);
+
+	if (is_allocatedp)
+		*is_allocatedp = resp.reserve_if_not_allocated.allocated;
+
+	return resp.reserve_if_not_allocated.reserved;
+}
+
+/**
+ * intel_allocator_print:
+ * @allocator_handle: handle to an allocator
+ *
+ * Function prints the statistics and contents of the allocator.
+ * Mainly for debugging purposes.
+ *
+ * Note: printing is possible only in the main process.
+ **/
+void intel_allocator_print(uint64_t allocator_handle)
+{
+	igt_assert(allocator_handle);
+
+	if (!multiprocess || is_same_process()) {
+		struct intel_allocator *ial = from_user_pointer(allocator_handle);
+
+		pthread_mutex_lock(&map_mutex);
+		ial->print(ial, true);
+		pthread_mutex_unlock(&map_mutex);
+	} else {
+		igt_warn("Print stats is in main process only\n");
+	}
+}
+
+static bool equal_handles(const void *key1, const void *key2)
+{
+	const struct handle_entry *h1 = key1, *h2 = key2;
+
+	alloc_debug("h1: %llx, h2: %llx\n",
+		   (long long) h1->handle, (long long) h2->handle);
+
+	return h1->handle == h2->handle;
+}
+
+static bool equal_ctx(const void *key1, const void *key2)
+{
+	const struct allocator *a1 = key1, *a2 = key2;
+
+	alloc_debug("a1: <fd: %d, ctx: %u>, a2 <fd: %d, ctx: %u>\n",
+		   a1->fd, a1->ctx, a2->fd, a2->ctx);
+
+	return a1->fd == a2->fd && a1->ctx == a2->ctx;
+}
+
+static bool equal_vm(const void *key1, const void *key2)
+{
+	const struct allocator *a1 = key1, *a2 = key2;
+
+	alloc_debug("a1: <fd: %d, vm: %u>, a2 <fd: %d, vm: %u>\n",
+		   a1->fd, a1->vm, a2->fd, a2->vm);
+
+	return a1->fd == a2->fd && a1->vm == a2->vm;
+}
+
+/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
+#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL
+
+static inline uint64_t hash_handles(const void *val, unsigned int bits)
+{
+	uint64_t hash = ((struct handle_entry *) val)->handle;
+
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+	return hash >> (64 - bits);
+}
+
+static inline uint64_t hash_instance(const void *val, unsigned int bits)
+{
+	uint64_t hash = ((struct allocator *) val)->fd;
+
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+	return hash >> (64 - bits);
+}
+
+static void __free_maps(struct igt_map *map, bool close_allocators)
+{
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	const struct handle_entry *h;
+	int i;
+
+	if (!map)
+		return;
+
+	if (close_allocators)
+		igt_map_for_each_safe(map, i, tmp, pos) {
+			h = pos->key;
+			allocator_close(h->handle);
+		}
+
+	igt_map_free(map);
+	free(map);
+}
+
+/**
+ * intel_allocator_init:
+ *
+ * Function initializes the allocator infrastructure. A subsequent call
+ * overrides the current infrastructure and destroys the allocators existing
+ * there. It is called in igt_constructor.
+ **/
+void intel_allocator_init(void)
+{
+	alloc_info("Prepare an allocator infrastructure\n");
+
+	allocator_pid = getpid();
+	alloc_info("Allocator pid: %ld\n", (long) allocator_pid);
+
+	__free_maps(handles, true);
+	__free_maps(ctx_map, false);
+	__free_maps(vm_map, false);
+
+	handles = calloc(sizeof(*handles), 1);
+	igt_assert(handles);
+
+	ctx_map = calloc(sizeof(*ctx_map), 1);
+	igt_assert(ctx_map);
+
+	vm_map = calloc(sizeof(*vm_map), 1);
+	igt_assert(vm_map);
+
+	atomic_init(&next_handle, 1);
+	__igt_map_init(handles, equal_handles, hash_handles, 8);
+	__igt_map_init(ctx_map, equal_ctx, hash_instance, 8);
+	__igt_map_init(vm_map, equal_vm, hash_instance, 8);
+
+	channel = intel_allocator_get_msgchannel(CHANNEL_SYSVIPC_MSGQUEUE);
+}
+
+igt_constructor {
+	intel_allocator_init();
+}
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
new file mode 100644
index 000000000..f07663334
--- /dev/null
+++ b/lib/intel_allocator.h
@@ -0,0 +1,220 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __INTEL_ALLOCATOR_H__
+#define __INTEL_ALLOCATOR_H__
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <pthread.h>
+#include <stdatomic.h>
+
+/**
+ * SECTION:intel_allocator
+ * @short_description: igt implementation of allocator
+ * @title: Intel allocator
+ * @include: intel_allocator.h
+ *
+ * # Introduction
+ *
+ * With the era of discrete cards we need to adapt IGT to handle
+ * addresses in userspace only (softpin, without support for relocations).
+ * Writing an allocator for a single purpose would be relatively easy,
+ * but supporting different tests with different requirements is a
+ * complicated task where a couple of scenarios may not be covered yet.
+ *
+ * # Assumptions
+ *
+ * - The allocator has to work in a multiprocess / multithread environment.
+ * - The allocator backend (algorithm) should be pluggable. Currently we
+ *   support SIMPLE (borrowed from the Mesa allocator), RELOC (a pseudo
+ *   allocator which returns incremented addresses without checking for
+ *   overlaps) and RANDOM (a pseudo allocator which randomizes addresses
+ *   without checking for overlaps).
+ * - It has to integrate with intel-bb (our simpler libdrm replacement used
+ *   in a couple of tests).
+ *
+ * # Implementation
+ *
+ * ## Single process (allows multiple threads)
+ *
+ * For a single process we don't need to create a dedicated
+ * entity (a kind of arbiter) to arbitrate allocations. Simple locking over
+ * the allocator data structure is enough. A basic usage example would be:
+ *
+ * |[<!-- language="c" -->
+ * struct object {
+ *      uint32_t handle;
+ *      uint64_t offset;
+ *      uint64_t size;
+ * };
+ *
+ * struct object obj1, obj2;
+ * uint64_t ahnd, startp, endp, size = 4096, align = 1 << 13;
+ * int fd = -1;
+ *
+ * fd = drm_open_driver(DRIVER_INTEL);
+ * ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+ *
+ * obj1.handle = gem_create(4096);
+ * obj2.handle = gem_create(4096);
+ *
+ * // Reserve a hole for an object at a given address.
+ * // In this example the first possible address.
+ * intel_allocator_get_address_range(ahnd, &startp, &endp);
+ * obj1.offset = startp;
+ * igt_assert(intel_allocator_reserve(ahnd, obj1.handle, size, startp));
+ *
+ * // Get the most suitable offset for the object. Preferred way.
+ * obj2.offset = intel_allocator_alloc(ahnd, obj2.handle, size, align);
+ *
+ *  ...
+ *
+ * // Reserved addresses can only be freed by unreserve.
+ * intel_allocator_unreserve(ahnd, obj1.handle, size, obj1.offset);
+ * intel_allocator_free(ahnd, obj2.handle);
+ *
+ * gem_close(obj1.handle);
+ * gem_close(obj2.handle);
+ * ]|
+ *
+ * Description:
+ * - ahnd is the allocator handle (the vm space handled by it)
+ * - we call get_address_range() to get the start/end range provided by the
+ *   allocator (we haven't specified a range in open so the allocator code
+ *   will assume some safe address range - we don't want to exercise
+ *   potential HW bugs on the last page)
+ * - the alloc() / free() pair just gets an address for a gem object as
+ *   proposed by the allocator
+ * - the reserve() / unreserve() pair gives us full control to acquire and
+ *   return the range we're interested in
+ *
+ * ## Multiple processes
+ *
+ * When a process forks and its child uses the same fd vm, the address space
+ * is also the same. Some coordination - in this case interprocess
+ * communication - is required to assign proper addresses to gem objects and
+ * avoid collisions. An additional thread is spawned in such a case to cover
+ * the child processes' needs. It uses a communication channel to receive a
+ * request, perform the action (alloc, free...) and send the response back to
+ * the requesting process. Currently a SYSVIPC message queue was chosen for
+ * this, but it can be replaced by another mechanism. Allocation techniques
+ * are the same as for a single process, we just need to wrap the code with:
+ *
+ * |[<!-- language="c" -->
+ * intel_allocator_multiprocess_start();
+ *
+ * ... allocation code (open, close, alloc, free, ...)
+ *
+ * intel_allocator_multiprocess_stop();
+ * ]|
+ *
+ * Calling start() spawns an additional allocator thread ready to handle
+ * incoming allocation requests (open / close are also requests in that case).
+ *
+ * Calling stop() requests the allocator thread to stop, unblocking all
+ * pending children (if any).
+ */
+
+enum allocator_strategy {
+	ALLOC_STRATEGY_NONE,
+	ALLOC_STRATEGY_LOW_TO_HIGH,
+	ALLOC_STRATEGY_HIGH_TO_LOW
+};
+
+struct intel_allocator {
+	int fd;
+	uint8_t type;
+	enum allocator_strategy strategy;
+	_Atomic(int32_t) refcount;
+	pthread_mutex_t mutex;
+
+	/* allocator's private structure */
+	void *priv;
+
+	void (*get_address_range)(struct intel_allocator *ial,
+				  uint64_t *startp, uint64_t *endp);
+	uint64_t (*alloc)(struct intel_allocator *ial, uint32_t handle,
+			  uint64_t size, uint64_t alignment);
+	bool (*is_allocated)(struct intel_allocator *ial, uint32_t handle,
+			     uint64_t size, uint64_t alignment);
+	bool (*reserve)(struct intel_allocator *ial,
+			uint32_t handle, uint64_t start, uint64_t size);
+	bool (*unreserve)(struct intel_allocator *ial,
+			  uint32_t handle, uint64_t start, uint64_t size);
+	bool (*is_reserved)(struct intel_allocator *ial,
+			    uint64_t start, uint64_t size);
+	bool (*free)(struct intel_allocator *ial, uint32_t handle);
+
+	void (*destroy)(struct intel_allocator *ial);
+
+	bool (*is_empty)(struct intel_allocator *ial);
+
+	void (*print)(struct intel_allocator *ial, bool full);
+};
+
+void intel_allocator_init(void);
+void intel_allocator_multiprocess_start(void);
+void intel_allocator_multiprocess_stop(void);
+
+uint64_t intel_allocator_open(int fd, uint32_t ctx, uint8_t allocator_type);
+uint64_t intel_allocator_open_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   uint8_t allocator_type,
+				   enum allocator_strategy strategy);
+uint64_t intel_allocator_open_vm(int fd, uint32_t vm, uint8_t allocator_type);
+uint64_t intel_allocator_open_vm_full(int fd, uint32_t vm,
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy);
+
+uint64_t intel_allocator_open_vm_as(uint64_t allocator_handle, uint32_t new_vm);
+bool intel_allocator_close(uint64_t allocator_handle);
+void intel_allocator_get_address_range(uint64_t allocator_handle,
+				       uint64_t *startp, uint64_t *endp);
+uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t alignment);
+bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle);
+bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
+				  uint64_t size, uint64_t offset);
+bool intel_allocator_reserve(uint64_t allocator_handle, uint32_t handle,
+			     uint64_t size, uint64_t offset);
+bool intel_allocator_unreserve(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t offset);
+bool intel_allocator_is_reserved(uint64_t allocator_handle,
+				 uint64_t size, uint64_t offset);
+bool intel_allocator_reserve_if_not_allocated(uint64_t allocator_handle,
+					      uint32_t handle,
+					      uint64_t size, uint64_t offset,
+					      bool *is_allocatedp);
+
+void intel_allocator_print(uint64_t allocator_handle);
+
+#define INTEL_ALLOCATOR_NONE   0
+#define INTEL_ALLOCATOR_RELOC  1
+#define INTEL_ALLOCATOR_RANDOM 2
+#define INTEL_ALLOCATOR_SIMPLE 3
+
+#define GEN8_GTT_ADDRESS_WIDTH 48
+
+static inline uint64_t sign_extend64(uint64_t x, int high)
+{
+	int shift = 63 - high;
+
+	return (int64_t)(x << shift) >> shift;
+}
+
+static inline uint64_t CANONICAL(uint64_t offset)
+{
+	return sign_extend64(offset, GEN8_GTT_ADDRESS_WIDTH - 1);
+}
+
+#define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
+
+#endif
diff --git a/lib/intel_allocator_msgchannel.c b/lib/intel_allocator_msgchannel.c
new file mode 100644
index 000000000..8280bc4ec
--- /dev/null
+++ b/lib/intel_allocator_msgchannel.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include "igt.h"
+#include "intel_allocator_msgchannel.h"
+
+extern __thread pid_t child_tid;
+
+/* ----- SYSVIPC MSGQUEUE ----- */
+
+#define FTOK_IGT_ALLOCATOR_KEY "/tmp/igt.allocator.key"
+#define FTOK_IGT_ALLOCATOR_PROJID 2020
+
+#define ALLOCATOR_REQUEST 1
+
+struct msgqueue_data {
+	key_t key;
+	int queue;
+};
+
+struct msgqueue_buf {
+	long mtype;
+	union {
+		struct alloc_req request;
+		struct alloc_resp response;
+	} data;
+};
+
+static void msgqueue_init(struct msg_channel *channel)
+{
+	struct msgqueue_data *msgdata;
+	struct msqid_ds qstat;
+	key_t key;
+	int fd, queue;
+
+	igt_debug("Init msgqueue\n");
+
+	/* Create ftok key only if not exists */
+	fd = open(FTOK_IGT_ALLOCATOR_KEY, O_CREAT | O_EXCL | O_WRONLY, 0600);
+	igt_assert(fd >= 0 || errno == EEXIST);
+	if (fd >= 0)
+		close(fd);
+
+	key = ftok(FTOK_IGT_ALLOCATOR_KEY, FTOK_IGT_ALLOCATOR_PROJID);
+	igt_assert(key != -1);
+	igt_debug("Queue key: %x\n", (int) key);
+
+	queue = msgget(key, 0);
+	if (queue != -1) {
+		igt_assert(msgctl(queue, IPC_STAT, &qstat) == 0);
+		igt_debug("old messages: %lu\n", qstat.msg_qnum);
+		igt_assert(msgctl(queue, IPC_RMID, NULL) == 0);
+	}
+
+	queue = msgget(key, IPC_CREAT);
+	igt_debug("msg queue: %d\n", queue);
+
+	msgdata = calloc(1, sizeof(*msgdata));
+	igt_assert(msgdata);
+	msgdata->key = key;
+	msgdata->queue = queue;
+	channel->priv = msgdata;
+}
+
+static void msgqueue_deinit(struct msg_channel *channel)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+
+	igt_debug("Deinit msgqueue\n");
+	msgctl(msgdata->queue, IPC_RMID, NULL);
+	free(channel->priv);
+}
+
+static int msgqueue_send_req(struct msg_channel *channel,
+			     struct alloc_req *request)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret;
+
+	buf.mtype = ALLOCATOR_REQUEST;
+	memcpy(&buf.data.request, request, sizeof(*request));
+
+retry:
+	ret = msgsnd(msgdata->queue, &buf, sizeof(buf) - sizeof(long), 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_recv_req(struct msg_channel *channel,
+			     struct alloc_req *request)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret, size = sizeof(buf) - sizeof(long);
+
+retry:
+	ret = msgrcv(msgdata->queue, &buf, size, ALLOCATOR_REQUEST, 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == size)
+		memcpy(request, &buf.data.request, sizeof(*request));
+	else if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_send_resp(struct msg_channel *channel,
+			      struct alloc_resp *response)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret;
+
+	buf.mtype = response->tid;
+	memcpy(&buf.data.response, response, sizeof(*response));
+
+retry:
+	ret = msgsnd(msgdata->queue, &buf, sizeof(buf) - sizeof(long), 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_recv_resp(struct msg_channel *channel,
+			      struct alloc_resp *response)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret, size = sizeof(buf) - sizeof(long);
+
+retry:
+	ret = msgrcv(msgdata->queue, &buf, size, response->tid, 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == size)
+		memcpy(response, &buf.data.response, sizeof(*response));
+	else if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static struct msg_channel msgqueue_channel = {
+	.priv = NULL,
+	.init = msgqueue_init,
+	.deinit = msgqueue_deinit,
+	.send_req = msgqueue_send_req,
+	.recv_req = msgqueue_recv_req,
+	.send_resp = msgqueue_send_resp,
+	.recv_resp = msgqueue_recv_resp,
+};
+
+struct msg_channel *intel_allocator_get_msgchannel(enum msg_channel_type type)
+{
+	struct msg_channel *channel = NULL;
+
+	switch (type) {
+	case CHANNEL_SYSVIPC_MSGQUEUE:
+		channel = &msgqueue_channel;
+	}
+
+	igt_assert(channel);
+
+	return channel;
+}
diff --git a/lib/intel_allocator_msgchannel.h b/lib/intel_allocator_msgchannel.h
new file mode 100644
index 000000000..ac6edfb9e
--- /dev/null
+++ b/lib/intel_allocator_msgchannel.h
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __INTEL_ALLOCATOR_MSGCHANNEL_H__
+#define __INTEL_ALLOCATOR_MSGCHANNEL_H__
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdint.h>
+
+enum reqtype {
+	REQ_STOP,
+	REQ_OPEN,
+	REQ_OPEN_AS,
+	REQ_CLOSE,
+	REQ_ADDRESS_RANGE,
+	REQ_ALLOC,
+	REQ_FREE,
+	REQ_IS_ALLOCATED,
+	REQ_RESERVE,
+	REQ_UNRESERVE,
+	REQ_RESERVE_IF_NOT_ALLOCATED,
+	REQ_IS_RESERVED,
+};
+
+enum resptype {
+	RESP_OPEN,
+	RESP_OPEN_AS,
+	RESP_CLOSE,
+	RESP_ADDRESS_RANGE,
+	RESP_ALLOC,
+	RESP_FREE,
+	RESP_IS_ALLOCATED,
+	RESP_RESERVE,
+	RESP_UNRESERVE,
+	RESP_IS_RESERVED,
+	RESP_RESERVE_IF_NOT_ALLOCATED,
+};
+
+struct alloc_req {
+	enum reqtype request_type;
+
+	/* Common */
+	pid_t tid;
+	uint64_t allocator_handle;
+
+	union {
+		struct {
+			int fd;
+			uint32_t ctx;
+			uint32_t vm;
+			uint64_t start;
+			uint64_t end;
+			uint8_t allocator_type;
+			uint8_t allocator_strategy;
+		} open;
+
+		struct {
+			uint32_t new_vm;
+		} open_as;
+
+		struct {
+			uint32_t handle;
+			uint64_t size;
+			uint64_t alignment;
+		} alloc;
+
+		struct {
+			uint32_t handle;
+		} free;
+
+		struct {
+			uint32_t handle;
+			uint64_t size;
+			uint64_t offset;
+		} is_allocated;
+
+		struct {
+			uint32_t handle;
+			uint64_t start;
+			uint64_t end;
+		} reserve, unreserve;
+
+		struct {
+			uint64_t start;
+			uint64_t end;
+		} is_reserved;
+
+	};
+};
+
+struct alloc_resp {
+	enum resptype response_type;
+	pid_t tid;
+
+	union {
+		struct {
+			uint64_t allocator_handle;
+		} open, open_as;
+
+		struct {
+			bool is_empty;
+		} close;
+
+		struct {
+			uint64_t start;
+			uint64_t end;
+			uint8_t direction;
+		} address_range;
+
+		struct {
+			uint64_t offset;
+		} alloc;
+
+		struct {
+			bool freed;
+		} free;
+
+		struct {
+			bool allocated;
+		} is_allocated;
+
+		struct {
+			bool reserved;
+		} reserve, is_reserved;
+
+		struct {
+			bool unreserved;
+		} unreserve;
+
+		struct {
+			bool allocated;
+			bool reserved;
+		} reserve_if_not_allocated;
+	};
+};
+
+struct msg_channel {
+	void *priv;
+	void (*init)(struct msg_channel *channel);
+	void (*deinit)(struct msg_channel *channel);
+	int (*send_req)(struct msg_channel *channel, struct alloc_req *request);
+	int (*recv_req)(struct msg_channel *channel, struct alloc_req *request);
+	int (*send_resp)(struct msg_channel *channel, struct alloc_resp *response);
+	int (*recv_resp)(struct msg_channel *channel, struct alloc_resp *response);
+};
+
+enum msg_channel_type {
+	CHANNEL_SYSVIPC_MSGQUEUE
+};
+
+struct msg_channel *intel_allocator_get_msgchannel(enum msg_channel_type type);
+
+#endif
diff --git a/lib/meson.build b/lib/meson.build
index 7254faeac..216231761 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -34,6 +34,11 @@ lib_sources = [
 	'igt_vgem.c',
 	'igt_x86.c',
 	'instdone.c',
+	'intel_allocator.c',
+	'intel_allocator_msgchannel.c',
+	'intel_allocator_random.c',
+	'intel_allocator_reloc.c',
+	'intel_allocator_simple.c',
 	'intel_batchbuffer.c',
 	'intel_bufops.c',
 	'intel_chipset.c',
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev


* [igt-dev] [PATCH i-g-t v26 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (7 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 08/35] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
                   ` (27 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Avoid a race when stop is sent to the allocator thread. We wait around
100 ms to give the thread a chance to stop smoothly instead of removing
the queue and forcing all blocked message syscalls to exit.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index de86c57e9..9a2bcf2fe 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -106,6 +106,7 @@ static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 static bool multiprocess;
 static pthread_t allocator_thread;
+static bool allocator_thread_running;
 
 static bool warn_if_not_empty;
 
@@ -715,6 +716,8 @@ static void *allocator_thread_loop(void *data)
 		   (long) allocator_pid, (long) gettid());
 	alloc_info("Entering allocator loop\n");
 
+	WRITE_ONCE(allocator_thread_running, true);
+
 	while (1) {
 		ret = recv_req(channel, &req);
 
@@ -748,6 +751,8 @@ static void *allocator_thread_loop(void *data)
 		}
 	}
 
+	WRITE_ONCE(allocator_thread_running, false);
+
 	return NULL;
 }
 
@@ -787,15 +792,24 @@ void intel_allocator_multiprocess_start(void)
  * Function turns off intel_allocator multiprocess mode, which means
  * stopping the allocator thread and deinitializing its data.
  */
+#define STOP_TIMEOUT_MS 100
 void intel_allocator_multiprocess_stop(void)
 {
+	int time_left = STOP_TIMEOUT_MS;
+
 	alloc_info("allocator multiprocess stop\n");
 
 	if (multiprocess) {
 		send_alloc_stop(channel);
+
+		/* Give allocator thread time to complete */
+		while (time_left-- > 0 && READ_ONCE(allocator_thread_running))
+			usleep(1000); /* coarse calculation */
+
 		/* Deinit, this should stop all blocked syscalls, if any */
 		channel->deinit(channel);
 		pthread_join(allocator_thread, NULL);
+
 	/* But we're not sure whether a child is stuck */
 		kill_children(SIGINT);
 		igt_waitchildren_timeout(5, "Stopping children");
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (8 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 11/35] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
                   ` (26 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

When using multiprocess mode in the allocator we currently rely on
sysvipc message queues in blocking mode (request/response).
We can therefore calculate the maximum queue depth needed for the
requested number of children. The change alters the kernel queue depth
to cover 4k users (1 is the main thread and 4095 are children).

We're still prone to an unlimited wait in the allocator thread
(if more than 4095 children successfully send messages)
but we're going to address this later.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_msgchannel.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/intel_allocator_msgchannel.c b/lib/intel_allocator_msgchannel.c
index 8280bc4ec..172858d3d 100644
--- a/lib/intel_allocator_msgchannel.c
+++ b/lib/intel_allocator_msgchannel.c
@@ -17,6 +17,7 @@ extern __thread pid_t child_tid;
 
 #define FTOK_IGT_ALLOCATOR_KEY "/tmp/igt.allocator.key"
 #define FTOK_IGT_ALLOCATOR_PROJID 2020
+#define MAXQLEN 4096
 
 #define ALLOCATOR_REQUEST 1
 
@@ -62,6 +63,13 @@ static void msgqueue_init(struct msg_channel *channel)
 	queue = msgget(key, IPC_CREAT);
 	igt_debug("msg queue: %d\n", queue);
 
+	igt_assert(msgctl(queue, IPC_STAT, &qstat) == 0);
+	igt_debug("msg size in bytes: %lu\n", qstat.msg_qbytes);
+	qstat.msg_qbytes = MAXQLEN * sizeof(struct msgqueue_buf);
+	igt_debug("resizing queue to support %d requests\n", MAXQLEN);
+	igt_assert_f(msgctl(queue, IPC_SET, &qstat) == 0,
+		     "Couldn't change queue size to %lu\n", qstat.msg_qbytes);
+
 	msgdata = calloc(1, sizeof(*msgdata));
 	igt_assert(msgdata);
 	msgdata->key = key;
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 11/35] lib/intel_allocator: Separate allocator multiprocess start
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (9 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
                   ` (25 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Validating the allocator code (leaks and memory overwrites) can be done
with the address sanitizer. When the allocator is not working in
multiprocess mode this is easy; problems start when fork comes into play.
In this situation we need to separate preparing and starting the
allocator thread.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reported-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator.c | 41 ++++++++++++++++++++++++++++++++++-------
 lib/intel_allocator.h |  2 ++
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 9a2bcf2fe..e35d32148 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -756,6 +756,38 @@ static void *allocator_thread_loop(void *data)
 	return NULL;
 }
 
+
+/**
+ * __intel_allocator_multiprocess_prepare:
+ *
+ * Prepares allocator infrastructure to work in multiprocess mode.
+ *
+ * A note on why the prepare/start steps are separated: without address
+ * sanitizer a single intel_allocator_multiprocess_start() call is
+ * enough. With address sanitizer and forking we can hit a situation
+ * where one forked child calls the allocator's alloc() (so the parent,
+ * where the allocator thread resides, has some poisoned memory in its
+ * shadow map) and then a second fork occurs. The second child inherits
+ * the poisoned shadow map from the parent, so checking the shadow map
+ * in that child reports a false memory leak.
+ *
+ * For an example of how to separate the initialization steps, see
+ * fork_simple_stress() in api_intel_allocator.c.
+ */
+void __intel_allocator_multiprocess_prepare(void)
+{
+	intel_allocator_init();
+
+	multiprocess = true;
+	channel->init(channel);
+}
+
+void __intel_allocator_multiprocess_start(void)
+{
+	pthread_create(&allocator_thread, NULL,
+		       allocator_thread_loop, NULL);
+}
+
 /**
  * intel_allocator_multiprocess_start:
  *
@@ -777,13 +809,8 @@ void intel_allocator_multiprocess_start(void)
 
 	igt_assert_f(child_pid == -1,
 		     "Allocator thread can be spawned only in main IGT process\n");
-	intel_allocator_init();
-
-	multiprocess = true;
-	channel->init(channel);
-
-	pthread_create(&allocator_thread, NULL,
-		       allocator_thread_loop, NULL);
+	__intel_allocator_multiprocess_prepare();
+	__intel_allocator_multiprocess_start();
 }
 
 /**
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
index f07663334..f24f7d150 100644
--- a/lib/intel_allocator.h
+++ b/lib/intel_allocator.h
@@ -160,6 +160,8 @@ struct intel_allocator {
 };
 
 void intel_allocator_init(void);
+void __intel_allocator_multiprocess_prepare(void);
+void __intel_allocator_multiprocess_start(void);
 void intel_allocator_multiprocess_start(void);
 void intel_allocator_multiprocess_stop(void);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 12/35] lib/intel_bufops: Change size from 32->64 bit
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (10 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 11/35] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 21:33   ` Jason Ekstrand
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
                   ` (24 subsequent siblings)
  36 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

1. Change buffer size from 32 to 64 bit to be consistent with
   the drm code.
2. Remember the buffer size at creation to avoid recalculating it.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_bufops.c        | 17 ++++++++---------
 lib/intel_bufops.h        |  7 +++++--
 tests/i915/api_intel_bb.c |  6 +++---
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index a50035e40..eb5ac4dad 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -711,7 +711,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     uint32_t req_tiling, uint32_t compression)
 {
 	uint32_t tiling = req_tiling;
-	uint32_t size;
+	uint64_t size;
 	uint32_t devid;
 	int tile_width;
 	int align_h = 1;
@@ -776,6 +776,9 @@ static void __intel_buf_init(struct buf_ops *bops,
 		size = buf->surface[0].stride * ALIGN(height, align_h);
 	}
 
+	/* Store real bo size to avoid mistakes in calculating it again */
+	buf->size = size;
+
 	if (handle)
 		buf->handle = handle;
 	else
@@ -1001,8 +1004,8 @@ void intel_buf_flush_and_unmap(struct intel_buf *buf)
 void intel_buf_print(const struct intel_buf *buf)
 {
 	igt_info("[name: %s]\n", buf->name);
-	igt_info("[%u]: w: %u, h: %u, stride: %u, size: %u, bo-size: %u, "
-		 "bpp: %u, tiling: %u, compress: %u\n",
+	igt_info("[%u]: w: %u, h: %u, stride: %u, size: %" PRIu64
+		 ", bo-size: %" PRIu64 ", bpp: %u, tiling: %u, compress: %u\n",
 		 buf->handle, intel_buf_width(buf), intel_buf_height(buf),
 		 buf->surface[0].stride, buf->surface[0].size,
 		 intel_buf_bo_size(buf), buf->bpp,
@@ -1208,13 +1211,9 @@ static void idempotency_selftest(struct buf_ops *bops, uint32_t tiling)
 	buf_ops_set_software_tiling(bops, tiling, false);
 }
 
-uint32_t intel_buf_bo_size(const struct intel_buf *buf)
+uint64_t intel_buf_bo_size(const struct intel_buf *buf)
 {
-	int offset = CCS_OFFSET(buf) ?: buf->surface[0].size;
-	int ccs_size =
-		buf->compression ? CCS_SIZE(buf->bops->intel_gen, buf) : 0;
-
-	return offset + ccs_size;
+	return buf->size;
 }
 
 static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency)
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 8debe7f22..5619fc6fa 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -9,10 +9,13 @@ struct buf_ops;
 
 #define INTEL_BUF_INVALID_ADDRESS (-1ull)
 #define INTEL_BUF_NAME_MAXSIZE 32
+#define INVALID_ADDR(x) ((x) == INTEL_BUF_INVALID_ADDRESS)
+
 struct intel_buf {
 	struct buf_ops *bops;
 	bool is_owner;
 	uint32_t handle;
+	uint64_t size;
 	uint32_t tiling;
 	uint32_t bpp;
 	uint32_t compression;
@@ -23,7 +26,7 @@ struct intel_buf {
 	struct {
 		uint32_t offset;
 		uint32_t stride;
-		uint32_t size;
+		uint64_t size;
 	} surface[2];
 	struct {
 		uint32_t offset;
@@ -88,7 +91,7 @@ intel_buf_ccs_height(int gen, const struct intel_buf *buf)
 	return DIV_ROUND_UP(intel_buf_height(buf), 512) * 32;
 }
 
-uint32_t intel_buf_bo_size(const struct intel_buf *buf);
+uint64_t intel_buf_bo_size(const struct intel_buf *buf);
 
 struct buf_ops *buf_ops_create(int fd);
 struct buf_ops *buf_ops_create_with_selftest(int fd);
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index cc1d1be6e..14bfeadb3 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -123,9 +123,9 @@ static void print_buf(struct intel_buf *buf, const char *name)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
-	igt_debug("[%s] Buf handle: %d, size: %d, v: 0x%02x, presumed_addr: %p\n",
+	igt_debug("[%s] Buf handle: %d, size: %" PRIu64 ", v: 0x%02x, presumed_addr: %p\n",
 		  name, buf->handle, buf->surface[0].size, ptr[0],
-			from_user_pointer(buf->addr.offset));
+		  from_user_pointer(buf->addr.offset));
 	munmap(ptr, buf->surface[0].size);
 }
 
@@ -677,7 +677,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
 	if (ret != Z_OK) {
 		igt_warn("error compressing, ret: %d\n", ret);
 	} else {
-		igt_info("compressed %u -> %lu\n",
+		igt_info("compressed %" PRIu64 " -> %lu\n",
 			 buf->surface[0].size, outsize);
 
 		igt_info("--- %s ---\n", name);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (11 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 21:36   ` Jason Ekstrand
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
                   ` (23 subsequent siblings)
  36 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

In some cases (an fb with compression) the fb size is bigger than the
calculated intel_buf size, which leads to an execbuf failure when the
allocator is used along with the EXEC_OBJECT_PINNED flag set for all
objects.

We need to create an intel_buf with a size equal to the fb size, so a
new function taking both a handle and a size is required.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_bufops.c | 33 ++++++++++++++++++++++++++++-----
 lib/intel_bufops.h |  7 +++++++
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index eb5ac4dad..d8eb64e3a 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -708,7 +708,8 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     uint32_t handle,
 			     struct intel_buf *buf,
 			     int width, int height, int bpp, int alignment,
-			     uint32_t req_tiling, uint32_t compression)
+			     uint32_t req_tiling, uint32_t compression,
+			     uint64_t bo_size)
 {
 	uint32_t tiling = req_tiling;
 	uint64_t size;
@@ -758,7 +759,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 		buf->ccs[0].offset = buf->surface[0].stride * ALIGN(height, 32);
 		buf->ccs[0].stride = aux_width;
 
-		size = buf->ccs[0].offset + aux_width * aux_height;
+		size = max(bo_size, buf->ccs[0].offset + aux_width * aux_height);
 	} else {
 		if (tiling) {
 			devid =  intel_get_drm_devid(bops->fd);
@@ -773,7 +774,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 		buf->tiling = tiling;
 		buf->bpp = bpp;
 
-		size = buf->surface[0].stride * ALIGN(height, align_h);
+		size = max(bo_size, buf->surface[0].stride * ALIGN(height, align_h));
 	}
 
 	/* Store real bo size to avoid mistakes in calculating it again */
@@ -809,7 +810,7 @@ void intel_buf_init(struct buf_ops *bops,
 		    uint32_t tiling, uint32_t compression)
 {
 	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
-			 tiling, compression);
+			 tiling, compression, 0);
 
 	intel_buf_set_ownership(buf, true);
 }
@@ -858,7 +859,7 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
 				 uint32_t req_tiling, uint32_t compression)
 {
 	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
-			 req_tiling, compression);
+			 req_tiling, compression, 0);
 }
 
 /**
@@ -927,6 +928,28 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
 	return buf;
 }
 
+struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
+							 uint32_t handle,
+							 int width, int height,
+							 int bpp, int alignment,
+							 uint32_t req_tiling,
+							 uint32_t compression,
+							 uint64_t size)
+{
+	struct intel_buf *buf;
+
+	igt_assert(bops);
+
+	buf = calloc(1, sizeof(*buf));
+	igt_assert(buf);
+
+	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
+			 req_tiling, compression, size);
+
+	return buf;
+}
+
+
 /**
  * intel_buf_destroy
  * @buf: intel_buf
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 5619fc6fa..54480bff6 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -139,6 +139,13 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
 						uint32_t req_tiling,
 						uint32_t compression);
 
+struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
+							 uint32_t handle,
+							 int width, int height,
+							 int bpp, int alignment,
+							 uint32_t req_tiling,
+							 uint32_t compression,
+							 uint64_t size);
 void intel_buf_destroy(struct intel_buf *buf);
 
 void *intel_buf_cpu_map(struct intel_buf *buf, bool write);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (12 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
                   ` (22 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Refactor the intel-bb interface to introduce the IGT allocator for
specifying the position of objects within the ppGTT.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_aux_pgtable.c      |  26 +-
 lib/intel_batchbuffer.c      | 511 +++++++++++++++++++++++++----------
 lib/intel_batchbuffer.h      |  23 +-
 tests/i915/api_intel_bb.c    |  16 +-
 tests/i915/gem_mmap_offset.c |   4 +-
 5 files changed, 403 insertions(+), 177 deletions(-)

diff --git a/lib/intel_aux_pgtable.c b/lib/intel_aux_pgtable.c
index c05d4511e..f29a30916 100644
--- a/lib/intel_aux_pgtable.c
+++ b/lib/intel_aux_pgtable.c
@@ -94,9 +94,9 @@ pgt_table_count(int address_bits, struct intel_buf **bufs, int buf_count)
 		/* We require bufs to be sorted. */
 		igt_assert(i == 0 ||
 			   buf->addr.offset >= bufs[i - 1]->addr.offset +
-					       intel_buf_bo_size(bufs[i - 1]));
-
+				intel_buf_bo_size(bufs[i - 1]));
 		start = ALIGN_DOWN(buf->addr.offset, 1UL << address_bits);
+
 		/* Avoid double counting for overlapping aligned bufs. */
 		start = max(start, end);
 
@@ -346,10 +346,8 @@ pgt_populate_entries_for_buf(struct pgtable *pgt,
 			     uint64_t top_table,
 			     int surface_idx)
 {
-	uint64_t surface_addr = buf->addr.offset +
-				buf->surface[surface_idx].offset;
-	uint64_t surface_end = surface_addr +
-			       buf->surface[surface_idx].size;
+	uint64_t surface_addr = buf->addr.offset + buf->surface[surface_idx].offset;
+	uint64_t surface_end = surface_addr +  buf->surface[surface_idx].size;
 	uint64_t aux_addr = buf->addr.offset + buf->ccs[surface_idx].offset;
 	uint64_t l1_flags = pgt_get_l1_flags(buf, surface_idx);
 	uint64_t lx_flags = pgt_get_lx_flags();
@@ -441,7 +439,6 @@ struct intel_buf *
 intel_aux_pgtable_create(struct intel_bb *ibb,
 			 struct intel_buf **bufs, int buf_count)
 {
-	struct drm_i915_gem_exec_object2 *obj;
 	static const struct pgtable_level_desc level_desc[] = {
 		{
 			.idx_shift = 16,
@@ -465,7 +462,6 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 	struct pgtable *pgt;
 	struct buf_ops *bops;
 	struct intel_buf *buf;
-	uint64_t prev_alignment;
 
 	igt_assert(buf_count);
 	bops = bufs[0]->bops;
@@ -476,11 +472,8 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 				    I915_COMPRESSION_NONE);
 
 	/* We need to use pgt->max_align for aux table */
-	prev_alignment = intel_bb_set_default_object_alignment(ibb,
-							       pgt->max_align);
-	obj = intel_bb_add_intel_buf(ibb, pgt->buf, false);
-	intel_bb_set_default_object_alignment(ibb, prev_alignment);
-	obj->alignment = pgt->max_align;
+	intel_bb_add_intel_buf_with_alignment(ibb, pgt->buf,
+					      pgt->max_align, false);
 
 	pgt_map(ibb->i915, pgt);
 	pgt_populate_entries(pgt, bufs, buf_count);
@@ -498,9 +491,10 @@ aux_pgtable_reserve_buf_slot(struct intel_buf **bufs, int buf_count,
 {
 	int i;
 
-	for (i = 0; i < buf_count; i++)
+	for (i = 0; i < buf_count; i++) {
 		if (bufs[i]->addr.offset > new_buf->addr.offset)
 			break;
+	}
 
 	memmove(&bufs[i + 1], &bufs[i], sizeof(bufs[0]) * (buf_count - i));
 
@@ -606,8 +600,10 @@ gen12_aux_pgtable_cleanup(struct intel_bb *ibb, struct aux_pgtable_info *info)
 		igt_assert_eq_u64(addr, info->buf_pin_offsets[i]);
 	}
 
-	if (info->pgtable_buf)
+	if (info->pgtable_buf) {
+		intel_bb_remove_intel_buf(ibb, info->pgtable_buf);
 		intel_buf_destroy(info->pgtable_buf);
+	}
 }
 
 uint32_t
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 8118dc945..c9a6c8909 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -50,7 +50,6 @@
 #include "media_spin.h"
 #include "gpgpu_fill.h"
 #include "igt_aux.h"
-#include "igt_rand.h"
 #include "i830_reg.h"
 #include "huc_copy.h"
 #include <glib.h>
@@ -1211,23 +1210,9 @@ static void __reallocate_objects(struct intel_bb *ibb)
 	}
 }
 
-/*
- * gen8_canonical_addr
- * Used to convert any address into canonical form, i.e. [63:48] == [47].
- * Based on kernel's sign_extend64 implementation.
- * @address - a virtual address
- */
-#define GEN8_HIGH_ADDRESS_BIT 47
-static uint64_t gen8_canonical_addr(uint64_t address)
-{
-	int shift = 63 - GEN8_HIGH_ADDRESS_BIT;
-
-	return (int64_t)(address << shift) >> shift;
-}
-
 static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 					     uint32_t handle,
-					     uint32_t size,
+					     uint64_t size,
 					     uint32_t alignment)
 {
 	uint64_t offset;
@@ -1235,33 +1220,77 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 	if (ibb->enforce_relocs)
 		return 0;
 
-	/* randomize the address, we try to avoid relocations */
-	offset = hars_petruska_f54_1_random64(&ibb->prng);
-	offset += 256 << 10; /* Keep the low 256k clear, for negative deltas */
-	offset &= ibb->gtt_size - 1;
-	offset &= ~(ibb->alignment - 1);
-	offset = gen8_canonical_addr(offset);
+	offset = intel_allocator_alloc(ibb->allocator_handle,
+				       handle, size, alignment);
 
 	return offset;
 }
 
 /**
- * intel_bb_create:
+ * __intel_bb_create:
  * @i915: drm fd
+ * @ctx: context
  * @size: size of the batchbuffer
+ * @do_relocs: use relocations or allocator
+ * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
+ *
+ * intel-bb assumes it will work in one of two modes - with relocations or
+ * with using allocator (currently RANDOM and SIMPLE are implemented).
+ * Some description is required to describe how they maintain the addresses.
+ *
+ * Before entering into each scenarios generic rule is intel-bb keeps objects
+ * and their offsets in the internal cache and reuses in subsequent execs.
+ *
+ * 1. intel-bb with relocations
+ *
+ * Creating new intel-bb adds handle to cache implicitly and sets its address
+ * to 0. Objects added to intel-bb later also have address 0 set for first run.
+ * After calling execbuf cache is altered with new addresses. As intel-bb
+ * works in reloc mode addresses are only suggestion to the driver and we
+ * cannot be sure they won't change at next exec.
+ *
+ * 2. with allocator
+ *
+ * This mode is valid only for ppgtt. Addresses are acquired from allocator
+ * and softpinned. intel-bb cache must be then coherent with allocator
+ * (simple is coherent, random is not due to fact we don't keep its state).
+ * When we do intel-bb reset with purging cache it has to reacquire addresses
+ * from allocator (allocator should return same address - what is true for
+ * simple allocator and false for random as mentioned before).
+ *
+ * If we do reset without purging caches we use addresses from intel-bb cache
+ * during execbuf objects construction.
  *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
  */
 static struct intel_bb *
-__intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
+__intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
+		  uint8_t allocator_type)
 {
+	struct drm_i915_gem_exec_object2 *object;
 	struct intel_bb *ibb = calloc(1, sizeof(*ibb));
-	uint64_t gtt_size;
 
 	igt_assert(ibb);
 
+	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
+
+	/*
+	 * Without full ppgtt the driver can change our addresses, so the
+	 * allocator is useless in this case. Just enforce relocations
+	 * for such gens and don't use the allocator at all.
+	 */
+	if (!ibb->uses_full_ppgtt) {
+		do_relocs = true;
+		allocator_type = INTEL_ALLOCATOR_NONE;
+	}
+
+	if (!do_relocs)
+		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+	else
+		igt_assert(allocator_type == INTEL_ALLOCATOR_NONE);
+	ibb->allocator_type = allocator_type;
 	ibb->i915 = i915;
 	ibb->devid = intel_get_drm_devid(i915);
 	ibb->gen = intel_gen(ibb->devid);
@@ -1273,41 +1302,43 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
 	ibb->batch = calloc(1, size);
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
-	ibb->prng = (uint32_t) to_user_pointer(ibb);
 	ibb->fence = -1;
 
-	gtt_size = gem_aperture_size(i915);
-	if (!gem_uses_full_ppgtt(i915))
-		gtt_size /= 2;
-	if ((gtt_size - 1) >> 32) {
+	ibb->gtt_size = gem_aperture_size(i915);
+	if ((ibb->gtt_size - 1) >> 32)
 		ibb->supports_48b_address = true;
 
-		/*
-		 * Until we develop IGT address allocator we workaround
-		 * playing with canonical addresses with 47-bit set to 1
-		 * just by limiting gtt size to 46-bit when gtt is 47 or 48
-		 * bit size. Current interface doesn't pass bo size, so
-		 * limiting to 46 bit make us sure we won't enter to
-		 * addresses with 47-bit set (we use 32-bit size now so
-		 * still we fit 47-bit address space).
-		 */
-		if (gtt_size & (3ull << 47))
-			gtt_size = (1ull << 46);
-	}
-	ibb->gtt_size = gtt_size;
-
-	ibb->batch_offset = __intel_bb_get_offset(ibb,
-						  ibb->handle,
-						  ibb->size,
-						  ibb->alignment);
-	intel_bb_add_object(ibb, ibb->handle, ibb->size,
-			    ibb->batch_offset, false);
+	object = intel_bb_add_object(ibb, ibb->handle, ibb->size,
+				     INTEL_BUF_INVALID_ADDRESS, ibb->alignment,
+				     false);
+	ibb->batch_offset = object->offset;
 
 	ibb->refcount = 1;
 
 	return ibb;
 }
 
+/**
+ * intel_bb_create_full:
+ * @i915: drm fd
+ * @ctx: context
+ * @size: size of the batchbuffer
+ * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ *
+ * Creates a bb with the context passed in @ctx, size in @size and the
+ * allocator type in @allocator_type. Relocations are disabled because
+ * the IGT allocator is used instead.
+ *
+ * Returns:
+ *
+ * Pointer the intel_bb, asserts on failure.
+ */
+struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+				      uint8_t allocator_type)
+{
+	return __intel_bb_create(i915, ctx, size, false, allocator_type);
+}
+
 /**
  * intel_bb_create:
  * @i915: drm fd
@@ -1318,10 +1349,19 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
+ *
+ * Notes:
+ *
+ * intel_bb must not be created in igt_fixture. The reason is that
+ * intel_bb "opens" a connection to the allocator, and when a test
+ * completes it can leave the allocator in an unknown state (mostly
+ * for failed tests). As igt_core is armed to reset the allocator
+ * infrastructure, the connection kept inside intel_bb is no longer
+ * valid and trying to use it leads to catastrophic errors.
  */
 struct intel_bb *intel_bb_create(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, false);
+	return __intel_bb_create(i915, 0, size, false, INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1339,7 +1379,7 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, false);
+	return __intel_bb_create(i915, ctx, size, false, INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1356,7 +1396,7 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
  */
 struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, true);
+	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
 }
 
 /**
@@ -1375,7 +1415,7 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, true);
+	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
 }
 
 static void __intel_bb_destroy_relocations(struct intel_bb *ibb)
@@ -1429,6 +1469,10 @@ void intel_bb_destroy(struct intel_bb *ibb)
 	__intel_bb_destroy_objects(ibb);
 	__intel_bb_destroy_cache(ibb);
 
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE) {
+		intel_allocator_free(ibb->allocator_handle, ibb->handle);
+		intel_allocator_close(ibb->allocator_handle);
+	}
 	gem_close(ibb->i915, ibb->handle);
 
 	if (ibb->fence >= 0)
@@ -1445,6 +1489,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
  *
  * Recreate batch bo when there's no additional reference.
 */
+
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 {
 	uint32_t i;
@@ -1468,28 +1513,32 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
 
-	if (purge_objects_cache) {
+	if (purge_objects_cache)
 		__intel_bb_destroy_cache(ibb);
+
+	/*
+	 * When we use allocators we're in no-reloc mode, so we have to
+	 * free and reacquire the offset (ibb->handle can change in a
+	 * multiprocess environment). We also have to remove the handle
+	 * from, and re-add it to, the objects array and the cache tree.
+	 */
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE && !purge_objects_cache)
+		intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset,
+				       ibb->size);
+
+	gem_close(ibb->i915, ibb->handle);
+	ibb->handle = gem_create(ibb->i915, ibb->size);
+
+	/* Keep address for bb in reloc mode and RANDOM allocator */
+	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
 		ibb->batch_offset = __intel_bb_get_offset(ibb,
 							  ibb->handle,
 							  ibb->size,
 							  ibb->alignment);
-	} else {
-		struct drm_i915_gem_exec_object2 *object;
-
-		object = intel_bb_find_object(ibb, ibb->handle);
-		ibb->batch_offset = object ? object->offset :
-					     __intel_bb_get_offset(ibb,
-								   ibb->handle,
-								   ibb->size,
-								   ibb->alignment);
-	}
-
-	gem_close(ibb->i915, ibb->handle);
-	ibb->handle = gem_create(ibb->i915, ibb->size);
 
 	intel_bb_add_object(ibb, ibb->handle, ibb->size,
-			    ibb->batch_offset, false);
+			    ibb->batch_offset,
+			    ibb->alignment, false);
 	ibb->ptr = ibb->batch;
 	memset(ibb->batch, 0, ibb->size);
 }
@@ -1528,8 +1577,8 @@ void intel_bb_print(struct intel_bb *ibb)
 		 ibb->i915, ibb->gen, ibb->devid, ibb->debug);
 	igt_info("handle: %u, size: %u, batch: %p, ptr: %p\n",
 		 ibb->handle, ibb->size, ibb->batch, ibb->ptr);
-	igt_info("prng: %u, gtt_size: %" PRIu64 ", supports 48bit: %d\n",
-		 ibb->prng, ibb->gtt_size, ibb->supports_48b_address);
+	igt_info("gtt_size: %" PRIu64 ", supports 48bit: %d\n",
+		 ibb->gtt_size, ibb->supports_48b_address);
 	igt_info("ctx: %u\n", ibb->ctx);
 	igt_info("root: %p\n", ibb->root);
 	igt_info("objects: %p, num_objects: %u, allocated obj: %u\n",
@@ -1605,7 +1654,7 @@ __add_to_cache(struct intel_bb *ibb, uint32_t handle)
 	if (*found == object) {
 		memset(object, 0, sizeof(*object));
 		object->handle = handle;
-		object->alignment = ibb->alignment;
+		object->offset = INTEL_BUF_INVALID_ADDRESS;
 	} else {
 		free(object);
 		object = *found;
@@ -1614,6 +1663,25 @@ __add_to_cache(struct intel_bb *ibb, uint32_t handle)
 	return object;
 }
 
+static bool __remove_from_cache(struct intel_bb *ibb, uint32_t handle)
+{
+	struct drm_i915_gem_exec_object2 **found, *object;
+
+	object = intel_bb_find_object(ibb, handle);
+	if (!object) {
+		igt_warn("Object: handle: %u not found\n", handle);
+		return false;
+	}
+
+	found = tdelete((void *) object, &ibb->root, __compare_objects);
+	if (!found)
+		return false;
+
+	free(object);
+
+	return true;
+}
+
 static int __compare_handles(const void *p1, const void *p2)
 {
 	return (int) (*(int32_t *) p1 - *(int32_t *) p2);
@@ -1639,12 +1707,54 @@ static void __add_to_objects(struct intel_bb *ibb,
 	}
 }
 
+static void __remove_from_objects(struct intel_bb *ibb,
+				  struct drm_i915_gem_exec_object2 *object)
+{
+	uint32_t i, **handle, *to_free;
+	bool found = false;
+
+	for (i = 0; i < ibb->num_objects; i++) {
+		if (ibb->objects[i] == object) {
+			found = true;
+			break;
+		}
+	}
+
+	/*
+	 * When we reset the bb (without purging) we have:
+	 * 1. the cache, which contains all cached objects
+	 * 2. the objects array, which contains only the bb object (it is
+	 *    cleared in the reset path, with the bb object re-added at the end)
+	 * So !found is a normal situation and no warning is emitted here.
+	 */
+	if (!found)
+		return;
+
+	ibb->num_objects--;
+	if (i < ibb->num_objects)
+		memmove(&ibb->objects[i], &ibb->objects[i + 1],
+			sizeof(object) * (ibb->num_objects - i));
+
+	handle = tfind((void *) &object->handle,
+		       &ibb->current, __compare_handles);
+	if (!handle) {
+		igt_warn("Object %u doesn't exist in the tree, can't remove\n",
+			 object->handle);
+		return;
+	}
+
+	to_free = *handle;
+	tdelete((void *) &object->handle, &ibb->current, __compare_handles);
+	free(to_free);
+}
+
 /**
  * intel_bb_add_object:
  * @ibb: pointer to intel_bb
  * @handle: which handle to add to objects array
  * @size: object size
  * @offset: presumed offset of the object when no relocation is enforced
+ * @alignment: alignment of the object, if 0 it will be set to page size
  * @write: does a handle is a render target
  *
  * Function adds or updates execobj slot in bb objects array and
@@ -1652,23 +1762,71 @@ static void __add_to_objects(struct intel_bb *ibb,
  * be marked with EXEC_OBJECT_WRITE flag.
  */
 struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
-		    uint64_t offset, bool write)
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write)
 {
 	struct drm_i915_gem_exec_object2 *object;
 
+	igt_assert(INVALID_ADDR(offset) || alignment == 0
+		   || ALIGN(offset, alignment) == offset);
+
 	object = __add_to_cache(ibb, handle);
+	object->alignment = alignment ?: 4096;
 	__add_to_objects(ibb, object);
 
-	/* Limit current offset to gtt size */
+	/*
+	 * If object->offset == INVALID_ADDRESS we have freshly added the
+	 * object to the cache. In that case we have two choices:
+	 * a) acquire a new offset (the passed offset was invalid)
+	 * b) use the offset passed in the call (it was valid)
+	 */
+	if (INVALID_ADDR(object->offset)) {
+		if (INVALID_ADDR(offset)) {
+			offset = __intel_bb_get_offset(ibb, handle, size,
+						       object->alignment);
+		} else {
+			offset = offset & (ibb->gtt_size - 1);
+
+			/*
+			 * For simple allocator check entry consistency
+			 * - reserve if it is not already allocated.
+			 */
+			if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE) {
+				bool allocated, reserved;
+
+				reserved = intel_allocator_reserve_if_not_allocated(ibb->allocator_handle,
+										    handle, size, offset,
+										    &allocated);
+				igt_assert_f(allocated || reserved,
+					     "Can't get offset, allocated: %d, reserved: %d\n",
+					     allocated, reserved);
+			}
+		}
+	} else {
+		/*
+		 * This assertion makes sense only when we have to stay
+		 * consistent with the underlying allocator. With relocations,
+		 * or without full ppgtt, addresses passed by the user may be
+		 * moved by the driver.
+		 */
+		if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+			igt_assert_f(object->offset == offset,
+				     "(pid: %ld) handle: %u, offset not match: %" PRIx64 " <> %" PRIx64 "\n",
+				     (long) getpid(), handle,
+				     (uint64_t) object->offset,
+				     offset);
+	}
+
 	object->offset = offset;
-	if (offset != INTEL_BUF_INVALID_ADDRESS)
-		object->offset = gen8_canonical_addr(offset & (ibb->gtt_size - 1));
 
-	if (object->offset == INTEL_BUF_INVALID_ADDRESS)
+	/* Limit current offset to gtt size */
+	if (offset != INTEL_BUF_INVALID_ADDRESS) {
+		object->offset = CANONICAL(offset & (ibb->gtt_size - 1));
+	} else {
 		object->offset = __intel_bb_get_offset(ibb,
 						       handle, size,
 						       object->alignment);
+	}
 
 	if (write)
 		object->flags |= EXEC_OBJECT_WRITE;
@@ -1676,40 +1834,95 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
 	if (ibb->supports_48b_address)
 		object->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
+	if (ibb->uses_full_ppgtt && ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+		object->flags |= EXEC_OBJECT_PINNED;
+
 	return object;
 }
 
-struct drm_i915_gem_exec_object2 *
-intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write)
+bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
+			    uint64_t offset, uint64_t size)
 {
-	struct drm_i915_gem_exec_object2 *obj;
+	struct drm_i915_gem_exec_object2 *object;
+	bool is_reserved;
 
-	obj = intel_bb_add_object(ibb, buf->handle,
-				  intel_buf_bo_size(buf),
-				  buf->addr.offset, write);
+	object = intel_bb_find_object(ibb, handle);
+	if (!object)
+		return false;
 
-	/* For compressed surfaces ensure address is aligned to 64KB */
-	if (ibb->gen >= 12 && buf->compression) {
-		obj->offset &= ~(0x10000 - 1);
-		obj->alignment = 0x10000;
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE) {
+		intel_allocator_free(ibb->allocator_handle, handle);
+		is_reserved = intel_allocator_is_reserved(ibb->allocator_handle,
+							  size, offset);
+		if (is_reserved)
+			intel_allocator_unreserve(ibb->allocator_handle, handle,
+						  size, offset);
 	}
 
-	/* For gen3 ensure tiled buffers are aligned to power of two size */
-	if (ibb->gen == 3 && buf->tiling) {
-		uint64_t alignment = 1024 * 1024;
+	__remove_from_objects(ibb, object);
+	__remove_from_cache(ibb, handle);
 
-		while (alignment < buf->surface[0].size)
-			alignment <<= 1;
-		obj->offset &= ~(alignment - 1);
-		obj->alignment = alignment;
+	return true;
+}
+
+static struct drm_i915_gem_exec_object2 *
+__intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
+			 uint64_t alignment, bool write)
+{
+	struct drm_i915_gem_exec_object2 *obj;
+
+	igt_assert(ALIGN(alignment, 4096) == alignment);
+
+	if (!alignment) {
+		alignment = 0x1000;
+
+		if (ibb->gen >= 12 && buf->compression)
+			alignment = 0x10000;
+
+		/* For gen3 ensure tiled buffers are aligned to power of two size */
+		if (ibb->gen == 3 && buf->tiling) {
+			alignment = 1024 * 1024;
+
+			while (alignment < buf->surface[0].size)
+				alignment <<= 1;
+		}
 	}
 
-	/* Update address in intel_buf buffer */
+	obj = intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
+				  buf->addr.offset, alignment, write);
 	buf->addr.offset = obj->offset;
 
+	if (!ibb->enforce_relocs)
+		obj->alignment = alignment;
+
 	return obj;
 }
 
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write)
+{
+	return __intel_bb_add_intel_buf(ibb, buf, 0, write);
+}
+
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
+				      uint64_t alignment, bool write)
+{
+	return __intel_bb_add_intel_buf(ibb, buf, alignment, write);
+}
+
+bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
+{
+	bool removed = intel_bb_remove_object(ibb, buf->handle,
+					      buf->addr.offset,
+					      intel_buf_bo_size(buf));
+
+	if (removed)
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	return removed;
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 {
@@ -1717,6 +1930,8 @@ intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 	struct drm_i915_gem_exec_object2 **found;
 
 	found = tfind((void *) &object, &ibb->root, __compare_objects);
+	if (!found)
+		return NULL;
 
 	return *found;
 }
@@ -1727,6 +1942,8 @@ intel_bb_object_set_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
 	struct drm_i915_gem_exec_object2 object = { .handle = handle };
 	struct drm_i915_gem_exec_object2 **found;
 
+	igt_assert_f(ibb->root, "Trying to search in null tree\n");
+
 	found = tfind((void *) &object, &ibb->root, __compare_objects);
 	if (!found) {
 		igt_warn("Trying to set fence on not found handle: %u\n",
@@ -1766,14 +1983,9 @@ intel_bb_object_clear_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
  * @write_domain: gem domain bit for the relocation
  * @delta: delta value to add to @buffer's gpu address
  * @offset: offset within bb to be patched
- * @presumed_offset: address of the object in address space. If -1 is passed
- * then final offset of the object will be randomized (for no-reloc bb) or
- * 0 (for reloc bb, in that case reloc.presumed_offset will be -1). In
- * case address is known it should passed in @presumed_offset (for no-reloc).
  *
  * Function allocates additional relocation slot in reloc array for a handle.
- * It also implicitly adds handle in the objects array if object doesn't
- * exists but doesn't mark it as a render target.
+ * Object must be previously added to bb.
  */
 static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 				   uint32_t to_handle,
@@ -1788,13 +2000,8 @@ static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 	struct drm_i915_gem_exec_object2 *object, *to_object;
 	uint32_t i;
 
-	if (ibb->enforce_relocs) {
-		object = intel_bb_add_object(ibb, handle, 0,
-					     presumed_offset, false);
-	} else {
-		object = intel_bb_find_object(ibb, handle);
-		igt_assert(object);
-	}
+	object = intel_bb_find_object(ibb, handle);
+	igt_assert(object);
 
 	/* For ibb we have relocs allocated in chunks */
 	if (to_handle == ibb->handle) {
@@ -1988,45 +2195,47 @@ static void intel_bb_dump_execbuf(struct intel_bb *ibb,
 	int i, j;
 	uint64_t address;
 
-	igt_info("execbuf batch len: %u, start offset: 0x%x, "
-		 "DR1: 0x%x, DR4: 0x%x, "
-		 "num clip: %u, clipptr: 0x%llx, "
-		 "flags: 0x%llx, rsvd1: 0x%llx, rsvd2: 0x%llx\n",
-		 execbuf->batch_len, execbuf->batch_start_offset,
-		 execbuf->DR1, execbuf->DR4,
-		 execbuf->num_cliprects, execbuf->cliprects_ptr,
-		 execbuf->flags, execbuf->rsvd1, execbuf->rsvd2);
-
-	igt_info("execbuf buffer_count: %d\n", execbuf->buffer_count);
+	igt_debug("execbuf [pid: %ld, fd: %d, ctx: %u]\n",
+		  (long) getpid(), ibb->i915, ibb->ctx);
+	igt_debug("execbuf batch len: %u, start offset: 0x%x, "
+		  "DR1: 0x%x, DR4: 0x%x, "
+		  "num clip: %u, clipptr: 0x%llx, "
+		  "flags: 0x%llx, rsvd1: 0x%llx, rsvd2: 0x%llx\n",
+		  execbuf->batch_len, execbuf->batch_start_offset,
+		  execbuf->DR1, execbuf->DR4,
+		  execbuf->num_cliprects, execbuf->cliprects_ptr,
+		  execbuf->flags, execbuf->rsvd1, execbuf->rsvd2);
+
+	igt_debug("execbuf buffer_count: %d\n", execbuf->buffer_count);
 	for (i = 0; i < execbuf->buffer_count; i++) {
 		objects = &((struct drm_i915_gem_exec_object2 *)
 			    from_user_pointer(execbuf->buffers_ptr))[i];
 		relocs = from_user_pointer(objects->relocs_ptr);
 		address = objects->offset;
-		igt_info(" [%d] handle: %u, reloc_count: %d, reloc_ptr: %p, "
-			 "align: 0x%llx, offset: 0x%" PRIx64 ", flags: 0x%llx, "
-			 "rsvd1: 0x%llx, rsvd2: 0x%llx\n",
-			 i, objects->handle, objects->relocation_count,
-			 relocs,
-			 objects->alignment,
-			 address,
-			 objects->flags,
-			 objects->rsvd1, objects->rsvd2);
+		igt_debug(" [%d] handle: %u, reloc_count: %d, reloc_ptr: %p, "
+			  "align: 0x%llx, offset: 0x%" PRIx64 ", flags: 0x%llx, "
+			  "rsvd1: 0x%llx, rsvd2: 0x%llx\n",
+			  i, objects->handle, objects->relocation_count,
+			  relocs,
+			  objects->alignment,
+			  address,
+			  objects->flags,
+			  objects->rsvd1, objects->rsvd2);
 		if (objects->relocation_count) {
-			igt_info("\texecbuf relocs:\n");
+			igt_debug("\texecbuf relocs:\n");
 			for (j = 0; j < objects->relocation_count; j++) {
 				reloc = &relocs[j];
 				address = reloc->presumed_offset;
-				igt_info("\t [%d] target handle: %u, "
-					 "offset: 0x%llx, delta: 0x%x, "
-					 "presumed_offset: 0x%" PRIx64 ", "
-					 "read_domains: 0x%x, "
-					 "write_domain: 0x%x\n",
-					 j, reloc->target_handle,
-					 reloc->offset, reloc->delta,
-					 address,
-					 reloc->read_domains,
-					 reloc->write_domain);
+				igt_debug("\t [%d] target handle: %u, "
+					  "offset: 0x%llx, delta: 0x%x, "
+					  "presumed_offset: 0x%" PRIx64 ", "
+					  "read_domains: 0x%x, "
+					  "write_domain: 0x%x\n",
+					  j, reloc->target_handle,
+					  reloc->offset, reloc->delta,
+					  address,
+					  reloc->read_domains,
+					  reloc->write_domain);
 			}
 		}
 	}
@@ -2069,6 +2278,12 @@ static void print_node(const void *node, VISIT which, int depth)
 	}
 }
 
+void intel_bb_dump_cache(struct intel_bb *ibb)
+{
+	igt_info("[pid: %ld] dump cache\n", (long) getpid());
+	twalk(ibb->root, print_node);
+}
+
 static struct drm_i915_gem_exec_object2 *
 create_objects_array(struct intel_bb *ibb)
 {
@@ -2078,8 +2293,10 @@ create_objects_array(struct intel_bb *ibb)
 	objects = malloc(sizeof(*objects) * ibb->num_objects);
 	igt_assert(objects);
 
-	for (i = 0; i < ibb->num_objects; i++)
+	for (i = 0; i < ibb->num_objects; i++) {
 		objects[i] = *(ibb->objects[i]);
+		objects[i].offset = CANONICAL(objects[i].offset);
+	}
 
 	return objects;
 }
@@ -2094,7 +2311,10 @@ static void update_offsets(struct intel_bb *ibb,
 		object = intel_bb_find_object(ibb, objects[i].handle);
 		igt_assert(object);
 
-		object->offset = objects[i].offset;
+		object->offset = DECANONICAL(objects[i].offset);
+
+		if (i == 0)
+			ibb->batch_offset = object->offset;
 	}
 }
 
@@ -2122,6 +2342,7 @@ static int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	ibb->objects[0]->relocs_ptr = to_user_pointer(ibb->relocs);
 	ibb->objects[0]->relocation_count = ibb->num_relocs;
 	ibb->objects[0]->handle = ibb->handle;
+	ibb->objects[0]->offset = ibb->batch_offset;
 
 	gem_write(ibb->i915, ibb->handle, 0, ibb->batch, ibb->size);
 
@@ -2206,7 +2427,6 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 {
 	struct drm_i915_gem_exec_object2 object = { .handle = handle };
 	struct drm_i915_gem_exec_object2 **found;
-	uint64_t address;
 
 	igt_assert(ibb);
 
@@ -2214,12 +2434,7 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 	if (!found)
 		return INTEL_BUF_INVALID_ADDRESS;
 
-	address = (*found)->offset;
-
-	if (address == INTEL_BUF_INVALID_ADDRESS)
-		return address;
-
-	return address & (ibb->gtt_size - 1);
+	return (*found)->offset;
 }
 
 /**
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 76989eaae..b9a4ee7e8 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -8,6 +8,7 @@
 #include "igt_core.h"
 #include "intel_reg.h"
 #include "drmtest.h"
+#include "intel_allocator.h"
 
 #define BATCH_SZ 4096
 #define BATCH_RESERVED 16
@@ -441,6 +442,9 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
  * Batchbuffer without libdrm dependency
  */
 struct intel_bb {
+	uint64_t allocator_handle;
+	uint8_t allocator_type;
+
 	int i915;
 	unsigned int gen;
 	bool debug;
@@ -454,9 +458,9 @@ struct intel_bb {
 	uint64_t alignment;
 	int fence;
 
-	uint32_t prng;
 	uint64_t gtt_size;
 	bool supports_48b_address;
+	bool uses_full_ppgtt;
 
 	uint32_t ctx;
 
@@ -484,6 +488,8 @@ struct intel_bb {
 	int32_t refcount;
 };
 
+struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+				      uint8_t allocator_type);
 struct intel_bb *intel_bb_create(int i915, uint32_t size);
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size);
@@ -510,6 +516,7 @@ void intel_bb_dump(struct intel_bb *ibb, const char *filename);
 void intel_bb_set_debug(struct intel_bb *ibb, bool debug);
 void intel_bb_set_dump_base64(struct intel_bb *ibb, bool dump);
 
+/*
 static inline uint64_t
 intel_bb_set_default_object_alignment(struct intel_bb *ibb, uint64_t alignment)
 {
@@ -525,6 +532,7 @@ intel_bb_get_default_object_alignment(struct intel_bb *ibb)
 {
 	return ibb->alignment;
 }
+*/
 
 static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
 {
@@ -574,11 +582,16 @@ static inline void intel_bb_out(struct intel_bb *ibb, uint32_t dword)
 }
 
 struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
-		    uint64_t offset, bool write);
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write);
+bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
+			    uint64_t offset, uint64_t size);
 struct drm_i915_gem_exec_object2 *
 intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write);
-
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
+				      uint64_t alignment, bool write);
+bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle);
 
@@ -625,6 +638,8 @@ uint64_t intel_bb_offset_reloc_to_object(struct intel_bb *ibb,
 					 uint32_t offset,
 					 uint64_t presumed_offset);
 
+void intel_bb_dump_cache(struct intel_bb *ibb);
+
 void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 		   uint64_t flags, bool sync);
 
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 14bfeadb3..c6c943506 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -810,11 +810,11 @@ static void offset_control(struct buf_ops *bops)
 	dst2 = create_buf(bops, WIDTH, HEIGHT, COLOR_77);
 
 	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
-			    src->addr.offset, false);
+			    src->addr.offset, 0, false);
 	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
-			    dst1->addr.offset, true);
+			    dst1->addr.offset, 0, true);
 	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
-			    dst2->addr.offset, true);
+			    dst2->addr.offset, 0, true);
 
 	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
 	intel_bb_ptr_align(ibb, 8);
@@ -838,13 +838,13 @@ static void offset_control(struct buf_ops *bops)
 
 	dst3 = create_buf(bops, WIDTH, HEIGHT, COLOR_33);
 	intel_bb_add_object(ibb, dst3->handle, intel_buf_bo_size(dst3),
-			    dst3->addr.offset, true);
+			    dst3->addr.offset, 0, true);
 	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
-			    src->addr.offset, false);
+			    src->addr.offset, 0, false);
 	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
-			    dst1->addr.offset, true);
+			    dst1->addr.offset, 0, true);
 	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
-			    dst2->addr.offset, true);
+			    dst2->addr.offset, 0, true);
 
 	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
 	intel_bb_ptr_align(ibb, 8);
@@ -901,7 +901,7 @@ static void delta_check(struct buf_ops *bops)
 	buf = create_buf(bops, 0x1000, 0x10, COLOR_CC);
 	buf->addr.offset = 0xfffff000;
 	intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
-			    buf->addr.offset, false);
+			    buf->addr.offset, 0, false);
 
 	intel_bb_out(ibb, MI_STORE_DWORD_IMM);
 	intel_bb_emit_reloc(ibb, buf->handle,
diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index 8b29d0a0a..5e48cd697 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -614,8 +614,8 @@ static void blt_coherency(int i915)
 	dst = create_bo(bops, 1, width, height);
 	size = src->surface[0].size;
 
-	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, false);
-	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, true);
+	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, 0, false);
+	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, 0, true);
 
 	intel_bb_blt_copy(ibb,
 			  src, 0, 0, src->surface[0].stride,
-- 
2.26.0
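The offset-resolution rules that the reworked intel_bb_add_object() implements above can be sketched as a standalone model. This is an illustrative reduction only: it replaces the allocator with a stub address and omits the simple-allocator reservation check, and the helper names are invented for the sketch.

```c
#include <assert.h>
#include <stdint.h>

#define INVALID_ADDR(x)  ((x) == UINT64_MAX)

/*
 * Model of intel_bb_add_object() offset resolution:
 * - a fresh cache entry honours a valid requested offset,
 *   otherwise asks the allocator (stubbed here);
 * - an existing cache entry keeps its cached offset.
 */
static uint64_t resolve_offset(uint64_t cached, uint64_t requested,
			       uint64_t gtt_size, uint64_t alloc_stub)
{
	if (INVALID_ADDR(cached)) {
		if (!INVALID_ADDR(requested))
			return requested & (gtt_size - 1); /* limit to gtt */
		return alloc_stub; /* __intel_bb_get_offset() in the real code */
	}

	return cached;
}
```

Under the simple allocator the real code additionally asserts that a cached offset matches the caller-provided one; with relocations or without full ppgtt that check is skipped, as the comment in the hunk explains.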

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (13 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
                   ` (21 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

We want to use relocations as long as we can, so change the intel-bb
strategy to use them by default up to gen12. On gen12 we have to use
softpinning for the ccs aux tables, so there we enforce using the
allocator instead of relocations.
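The default-creation decision this patch introduces can be sketched as a pure function; the inputs stand in for gem_has_relocations() and intel_gen(), and no real IGT calls are made.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Relocations are the default only when the kernel still supports
 * them AND the platform does not require softpin (gen >= 12,
 * where ccs aux tables need pinned addresses).
 */
static bool use_relocations(bool has_relocs, unsigned int gen)
{
	bool aux_needs_softpin = gen >= 12;

	return has_relocs && !aux_needs_softpin;
}
```

intel_bb_create_with_relocs() callers get an igt_require(gem_has_relocations()) instead, so tests demanding relocations skip cleanly on kernels without them.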

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index c9a6c8909..388395a94 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1275,25 +1275,25 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	igt_assert(ibb);
 
 	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
+	ibb->devid = intel_get_drm_devid(i915);
+	ibb->gen = intel_gen(ibb->devid);
 
 	/*
 	 * If we don't have full ppgtt driver can change our addresses
 	 * so allocator is useless in this case. Just enforce relocations
 	 * for such gens and don't use allocator at all.
 	 */
-	if (!ibb->uses_full_ppgtt) {
+	if (!ibb->uses_full_ppgtt)
 		do_relocs = true;
-		allocator_type = INTEL_ALLOCATOR_NONE;
-	}
 
-	if (!do_relocs)
-		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+	/* if relocs are set we won't use an allocator */
+	if (do_relocs)
+		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		igt_assert(allocator_type == INTEL_ALLOCATOR_NONE);
+		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+
 	ibb->allocator_type = allocator_type;
 	ibb->i915 = i915;
-	ibb->devid = intel_get_drm_devid(i915);
-	ibb->gen = intel_gen(ibb->devid);
 	ibb->enforce_relocs = do_relocs;
 	ibb->handle = gem_create(i915, size);
 	ibb->size = size;
@@ -1327,7 +1327,7 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
  * in @allocator_type. Relocations are set to false because IGT allocator
- * is not used in that case.
+ * is used in that case.
  *
  * Returns:
  *
@@ -1339,6 +1339,11 @@ struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
 	return __intel_bb_create(i915, ctx, size, false, allocator_type);
 }
 
+static bool aux_needs_softpin(int i915)
+{
+	return intel_gen(intel_get_drm_devid(i915)) >= 12;
+}
+
 /**
  * intel_bb_create:
  * @i915: drm fd
@@ -1361,7 +1366,11 @@ struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
  */
 struct intel_bb *intel_bb_create(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, false, INTEL_ALLOCATOR_SIMPLE);
+	bool relocs = gem_has_relocations(i915);
+
+	return __intel_bb_create(i915, 0, size,
+				 relocs && !aux_needs_softpin(i915),
+				 INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1379,7 +1388,11 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, false, INTEL_ALLOCATOR_SIMPLE);
+	bool relocs = gem_has_relocations(i915);
+
+	return __intel_bb_create(i915, ctx, size,
+				 relocs && !aux_needs_softpin(i915),
+				 INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1396,6 +1409,8 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
  */
 struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
+	igt_require(gem_has_relocations(i915));
+
 	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
 }
 
@@ -1415,6 +1430,8 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
+	igt_require(gem_has_relocations(i915));
+
 	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
 }
 
-- 
2.26.0



* [igt-dev] [PATCH i-g-t v26 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (14 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
                   ` (20 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Previously intel-bb just used the default allocator settings (safe,
chosen to work out of the box). This limitation is not good if we want
to exercise non-standard cases (for example specific vm ranges). As the
allocator allows passing full settings, let intel-bb use them as well.
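The relationship between the wrappers added in this patch can be modelled with a small parameter struct; the struct and enum values here are illustrative stand-ins for the real allocator types, not the IGT definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for enum allocator_strategy values. */
enum strategy { STRATEGY_NONE, STRATEGY_HIGH_TO_LOW };

struct bb_params {
	uint64_t start, end;	/* 0/0 == let the allocator pick defaults */
	enum strategy strategy;
};

/* intel_bb_create_with_allocator() keeps the previous defaults... */
static struct bb_params with_allocator_params(void)
{
	return (struct bb_params){ 0, 0, STRATEGY_HIGH_TO_LOW };
}

/* ...while intel_bb_create_full() exposes the whole vm range and strategy. */
static struct bb_params full_params(uint64_t start, uint64_t end,
				    enum strategy s)
{
	return (struct bb_params){ start, end, s };
}
```

Both funnel into __intel_bb_create(), which forwards the range and strategy to intel_allocator_open_full() when relocations are not in use.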

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 63 +++++++++++++++++++++++++++++++++--------
 lib/intel_batchbuffer.h | 10 +++++--
 2 files changed, 59 insertions(+), 14 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 388395a94..045f3f157 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1267,7 +1267,8 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  */
 static struct intel_bb *
 __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
-		  uint8_t allocator_type)
+		  uint64_t start, uint64_t end,
+		  uint8_t allocator_type, enum allocator_strategy strategy)
 {
 	struct drm_i915_gem_exec_object2 *object;
 	struct intel_bb *ibb = calloc(1, sizeof(*ibb));
@@ -1290,9 +1291,12 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	if (do_relocs)
 		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
-
+		ibb->allocator_handle = intel_allocator_open_full(i915, ctx,
+								  start, end,
+								  allocator_type,
+								  strategy);
 	ibb->allocator_type = allocator_type;
+	ibb->allocator_strategy = strategy;
 	ibb->i915 = i915;
 	ibb->enforce_relocs = do_relocs;
 	ibb->handle = gem_create(i915, size);
@@ -1323,20 +1327,51 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
  * @i915: drm fd
  * @ctx: context
  * @size: size of the batchbuffer
+ * @start: allocator vm start address
+ * @end: allocator vm end address
  * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ * @strategy: allocation strategy
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
  * in @allocator_type. Relocations are set to false because IGT allocator
- * is used in that case.
+ * is used in that case. The VM range (@start and @end) and the allocation
+ * @strategy (a hint to the allocator about address allocation preferences)
+ * are passed to the allocator.
  *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
  */
 struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
-				      uint8_t allocator_type)
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy)
+{
+	return __intel_bb_create(i915, ctx, size, false, start, end,
+				 allocator_type, strategy);
+}
+
+/**
+ * intel_bb_create_with_allocator:
+ * @i915: drm fd
+ * @ctx: context
+ * @size: size of the batchbuffer
+ * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ *
+ * Creates bb with context passed in @ctx, size in @size and allocator type
+ * in @allocator_type. Relocations are set to false because IGT allocator
+ * is used in that case.
+ *
+ * Returns:
+ *
+ * Pointer to the intel_bb, asserts on failure.
+ */
+struct intel_bb *intel_bb_create_with_allocator(int i915, uint32_t ctx,
+						uint32_t size,
+						uint8_t allocator_type)
 {
-	return __intel_bb_create(i915, ctx, size, false, allocator_type);
+	return __intel_bb_create(i915, ctx, size, false, 0, 0,
+				 allocator_type, ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 static bool aux_needs_softpin(int i915)
@@ -1369,8 +1404,9 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 	bool relocs = gem_has_relocations(i915);
 
 	return __intel_bb_create(i915, 0, size,
-				 relocs && !aux_needs_softpin(i915),
-				 INTEL_ALLOCATOR_SIMPLE);
+				 relocs && !aux_needs_softpin(i915), 0, 0,
+				 INTEL_ALLOCATOR_SIMPLE,
+				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1391,8 +1427,9 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 	bool relocs = gem_has_relocations(i915);
 
 	return __intel_bb_create(i915, ctx, size,
-				 relocs && !aux_needs_softpin(i915),
-				 INTEL_ALLOCATOR_SIMPLE);
+				 relocs && !aux_needs_softpin(i915), 0, 0,
+				 INTEL_ALLOCATOR_SIMPLE,
+				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1411,7 +1448,8 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
 	igt_require(gem_has_relocations(i915));
 
-	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
+	return __intel_bb_create(i915, 0, size, true, 0, 0,
+				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 /**
@@ -1432,7 +1470,8 @@ intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
 	igt_require(gem_has_relocations(i915));
 
-	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
+	return __intel_bb_create(i915, ctx, size, true, 0, 0,
+				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 static void __intel_bb_destroy_relocations(struct intel_bb *ibb)
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index b9a4ee7e8..702052d22 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -444,6 +444,7 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
 struct intel_bb {
 	uint64_t allocator_handle;
 	uint8_t allocator_type;
+	enum allocator_strategy allocator_strategy;
 
 	int i915;
 	unsigned int gen;
@@ -488,8 +489,13 @@ struct intel_bb {
 	int32_t refcount;
 };
 
-struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
-				      uint8_t allocator_type);
+struct intel_bb *
+intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+		     uint64_t start, uint64_t end,
+		     uint8_t allocator_type, enum allocator_strategy strategy);
+struct intel_bb *
+intel_bb_create_with_allocator(int i915, uint32_t ctx,
+			       uint32_t size, uint8_t allocator_type);
 struct intel_bb *intel_bb_create(int i915, uint32_t size);
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size);
-- 
2.26.0



* [igt-dev] [PATCH i-g-t v26 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (15 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 18/35] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
                   ` (19 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

From now on intel_bb tracks added/removed intel_bufs. We're now safe
regardless of the order of the intel_buf close/destroy and intel_bb
destroy paths.

When an intel_buf that was previously added to an intel_bb is
closed/destroyed first, it calls the code which removes it from the
intel_bb. In the destroy path we go over all tracked intel_bufs and
clear their tracking information and buffer offset (it is set to
INTEL_BUF_INVALID_ADDRESS).

The reset path is handled as follows:
- intel_bb_reset(ibb, false) - just clean the objects array, leaving
  the cache / allocator state intact.
- intel_bb_reset(ibb, true) - purge the cache and detach intel_bufs
  from the intel_bb (releasing their offsets from the allocator).

Remove the intel_bb_object_offset_to_buf() function, as intel_buf
tracking now updates their offsets (checking with the allocator) after
execbuf.

Alter api_intel_bb according to the intel-bb changes.
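The two reset modes described above can be captured in a toy state model; the struct members only count pieces of state and do not reflect the real intel_bb layout.

```c
#include <assert.h>
#include <stdbool.h>

struct bb_model {
	int num_objects;	/* execobj array, rebuilt every reset */
	int cached;		/* cache entries */
	int tracked_bufs;	/* intel_bufs holding allocator offsets */
};

static void bb_reset(struct bb_model *bb, bool purge_objects_cache)
{
	bb->num_objects = 0;		/* objects are always destroyed */

	if (purge_objects_cache) {
		bb->tracked_bufs = 0;	/* bufs detached, offsets released */
		bb->cached = 0;		/* cache destroyed */
	}
}
```

With purge_objects_cache == false, cached addresses survive and are reused when the execobj array is reconstructed for the next execbuf.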

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c   | 186 ++++++++++++++++++++++++++++----------
 lib/intel_batchbuffer.h   |  29 +++---
 lib/intel_bufops.c        |  13 ++-
 lib/intel_bufops.h        |   6 ++
 lib/media_spin.c          |   2 -
 tests/i915/api_intel_bb.c |   7 --
 6 files changed, 169 insertions(+), 74 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 045f3f157..1b234b14d 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1261,6 +1261,8 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * If we do reset without purging caches we use addresses from intel-bb cache
  * during execbuf objects construction.
  *
+ * If we do reset with purging caches allocator entries are freed as well.
+ *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
@@ -1303,6 +1305,7 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	ibb->size = size;
 	ibb->alignment = 4096;
 	ibb->ctx = ctx;
+	ibb->vm_id = 0;
 	ibb->batch = calloc(1, size);
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
@@ -1317,6 +1320,8 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 				     false);
 	ibb->batch_offset = object->offset;
 
+	IGT_INIT_LIST_HEAD(&ibb->intel_bufs);
+
 	ibb->refcount = 1;
 
 	return ibb;
@@ -1508,6 +1513,22 @@ static void __intel_bb_destroy_cache(struct intel_bb *ibb)
 	ibb->root = NULL;
 }
 
+static void __intel_bb_detach_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry, *tmp;
+
+	igt_list_for_each_entry_safe(entry, tmp, &ibb->intel_bufs, link)
+		intel_bb_detach_intel_buf(ibb, entry);
+}
+
+static void __intel_bb_remove_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry, *tmp;
+
+	igt_list_for_each_entry_safe(entry, tmp, &ibb->intel_bufs, link)
+		intel_bb_remove_intel_buf(ibb, entry);
+}
+
 /**
  * intel_bb_destroy:
  * @ibb: pointer to intel_bb
@@ -1521,6 +1542,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
 	ibb->refcount--;
 	igt_assert_f(ibb->refcount == 0, "Trying to destroy referenced bb!");
 
+	__intel_bb_remove_intel_bufs(ibb);
 	__intel_bb_destroy_relocations(ibb);
 	__intel_bb_destroy_objects(ibb);
 	__intel_bb_destroy_cache(ibb);
@@ -1544,6 +1566,10 @@ void intel_bb_destroy(struct intel_bb *ibb)
  * @purge_objects_cache: if true destroy internal execobj and relocs + cache
  *
  * Recreate batch bo when there's no additional reference.
+ *
+ * When purge_objects_cache == true we destroy the cache as well as remove
+ * intel_bufs from the intel-bb tracking list. Removing intel_bufs releases
+ * their addresses in the allocator.
 */
 
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
@@ -1569,8 +1595,10 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
 
-	if (purge_objects_cache)
+	if (purge_objects_cache) {
+		__intel_bb_remove_intel_bufs(ibb);
 		__intel_bb_destroy_cache(ibb);
+	}
 
 	/*
 	 * When we use allocators we're in no-reloc mode so we have to free
@@ -1621,6 +1649,50 @@ int intel_bb_sync(struct intel_bb *ibb)
 	return ret;
 }
 
+uint64_t intel_bb_assign_vm(struct intel_bb *ibb, uint64_t allocator,
+			    uint32_t vm_id)
+{
+	struct drm_i915_gem_context_param arg = {
+		.param = I915_CONTEXT_PARAM_VM,
+	};
+	uint64_t prev_allocator = ibb->allocator_handle;
+	bool closed = false;
+
+	if (ibb->vm_id == vm_id) {
+		igt_debug("Skipping assignment of the same vm_id: %u\n", vm_id);
+		return 0;
+	}
+
+	/* Cannot switch if someone keeps bb refcount */
+	igt_assert(ibb->refcount == 1);
+
+	/* Detach intel_bufs and remove bb handle */
+	__intel_bb_detach_intel_bufs(ibb);
+	intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset, ibb->size);
+
+	/* Cache + objects are not valid after change anymore */
+	__intel_bb_destroy_objects(ibb);
+	__intel_bb_destroy_cache(ibb);
+
+	/* Attach new allocator */
+	ibb->allocator_handle = allocator;
+
+	/* Setparam */
+	ibb->vm_id = vm_id;
+
+	/* Skip set param, we likely return to default vm */
+	if (vm_id) {
+		arg.ctx_id = ibb->ctx;
+		arg.value = vm_id;
+		gem_context_set_param(ibb->i915, &arg);
+	}
+
+	/* Recreate bb */
+	intel_bb_reset(ibb, false);
+
+	return closed ? 0 : prev_allocator;
+}
+
 /*
  * intel_bb_print:
  * @ibb: pointer to intel_bb
@@ -1875,22 +1947,13 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 
 	object->offset = offset;
 
-	/* Limit current offset to gtt size */
-	if (offset != INTEL_BUF_INVALID_ADDRESS) {
-		object->offset = CANONICAL(offset & (ibb->gtt_size - 1));
-	} else {
-		object->offset = __intel_bb_get_offset(ibb,
-						       handle, size,
-						       object->alignment);
-	}
-
 	if (write)
 		object->flags |= EXEC_OBJECT_WRITE;
 
 	if (ibb->supports_48b_address)
 		object->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
-	if (ibb->uses_full_ppgtt && ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+	if (ibb->uses_full_ppgtt && !ibb->enforce_relocs)
 		object->flags |= EXEC_OBJECT_PINNED;
 
 	return object;
@@ -1927,6 +1990,9 @@ __intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
 {
 	struct drm_i915_gem_exec_object2 *obj;
 
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
 	igt_assert(ALIGN(alignment, 4096) == alignment);
 
 	if (!alignment) {
@@ -1951,6 +2017,13 @@ __intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
 	if (!ibb->enforce_relocs)
 		obj->alignment = alignment;
 
+	if (igt_list_empty(&buf->link)) {
+		igt_list_add_tail(&buf->link, &ibb->intel_bufs);
+		buf->ibb = ibb;
+	} else {
+		igt_assert(buf->ibb == ibb);
+	}
+
 	return obj;
 }
 
@@ -1967,18 +2040,53 @@ intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *bu
 	return __intel_bb_add_intel_buf(ibb, buf, alignment, write);
 }
 
+void intel_bb_detach_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
+{
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
+
+	if (!igt_list_empty(&buf->link)) {
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		igt_list_del_init(&buf->link);
+	}
+}
+
 bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
 {
-	bool removed = intel_bb_remove_object(ibb, buf->handle,
-					      buf->addr.offset,
-					      intel_buf_bo_size(buf));
+	bool removed;
 
-	if (removed)
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
+
+	if (igt_list_empty(&buf->link))
+		return false;
+
+	removed = intel_bb_remove_object(ibb, buf->handle,
+					 buf->addr.offset,
+					 intel_buf_bo_size(buf));
+	if (removed) {
 		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		igt_list_del_init(&buf->link);
+	}
 
 	return removed;
 }
 
+void intel_bb_print_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry;
+
+	igt_list_for_each_entry(entry, &ibb->intel_bufs, link) {
+		igt_info("handle: %u, ibb: %p, offset: %lx\n",
+			 entry->handle, entry->ibb,
+			 (long) entry->addr.offset);
+	}
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 {
@@ -2361,6 +2469,7 @@ static void update_offsets(struct intel_bb *ibb,
 			   struct drm_i915_gem_exec_object2 *objects)
 {
 	struct drm_i915_gem_exec_object2 *object;
+	struct intel_buf *entry;
 	uint32_t i;
 
 	for (i = 0; i < ibb->num_objects; i++) {
@@ -2372,11 +2481,23 @@ static void update_offsets(struct intel_bb *ibb,
 		if (i == 0)
 			ibb->batch_offset = object->offset;
 	}
+
+	igt_list_for_each_entry(entry, &ibb->intel_bufs, link) {
+		object = intel_bb_find_object(ibb, entry->handle);
+		igt_assert(object);
+
+		if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+			igt_assert(object->offset == entry->addr.offset);
+		else
+			entry->addr.offset = object->offset;
+
+		entry->addr.ctx = ibb->ctx;
+	}
 }
 
 #define LINELEN 76
 /*
- * @__intel_bb_exec:
+ * __intel_bb_exec:
  * @ibb: pointer to intel_bb
  * @end_offset: offset of the last instruction in the bb
  * @flags: flags passed directly to execbuf
@@ -2416,6 +2537,9 @@ static int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	if (ibb->dump_base64)
 		intel_bb_dump_base64(ibb, LINELEN);
 
+	/* For debugging on CI, remove in final series */
+	intel_bb_dump_execbuf(ibb, &execbuf);
+
 	ret = __gem_execbuf_wr(ibb->i915, &execbuf);
 	if (ret) {
 		intel_bb_dump_execbuf(ibb, &execbuf);
@@ -2493,36 +2617,6 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 	return (*found)->offset;
 }
 
-/**
- * intel_bb_object_offset_to_buf:
- * @ibb: pointer to intel_bb
- * @buf: buffer we want to store last exec offset and context id
- *
- * Copy object offset used in the batch to intel_buf to allow caller prepare
- * other batch likely without relocations.
- */
-bool intel_bb_object_offset_to_buf(struct intel_bb *ibb, struct intel_buf *buf)
-{
-	struct drm_i915_gem_exec_object2 object = { .handle = buf->handle };
-	struct drm_i915_gem_exec_object2 **found;
-
-	igt_assert(ibb);
-	igt_assert(buf);
-
-	found = tfind((void *)&object, &ibb->root, __compare_objects);
-	if (!found) {
-		buf->addr.offset = 0;
-		buf->addr.ctx = ibb->ctx;
-
-		return false;
-	}
-
-	buf->addr.offset = (*found)->offset & (ibb->gtt_size - 1);
-	buf->addr.ctx = ibb->ctx;
-
-	return true;
-}
-
 /*
  * intel_bb_emit_bbe:
  * @ibb: batchbuffer
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 702052d22..6f148713b 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -6,6 +6,7 @@
 #include <i915_drm.h>
 
 #include "igt_core.h"
+#include "igt_list.h"
 #include "intel_reg.h"
 #include "drmtest.h"
 #include "intel_allocator.h"
@@ -464,6 +465,7 @@ struct intel_bb {
 	bool uses_full_ppgtt;
 
 	uint32_t ctx;
+	uint32_t vm_id;
 
 	/* Cache */
 	void *root;
@@ -481,6 +483,9 @@ struct intel_bb {
 	uint32_t num_relocs;
 	uint32_t allocated_relocs;
 
+	/* Tracked intel_bufs */
+	struct igt_list_head intel_bufs;
+
 	/*
 	 * BO recreate in reset path only when refcount == 0
 	 * Currently we don't need to use atomics because intel_bb
@@ -517,29 +522,15 @@ static inline void intel_bb_unref(struct intel_bb *ibb)
 
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache);
 int intel_bb_sync(struct intel_bb *ibb);
+
+uint64_t intel_bb_assign_vm(struct intel_bb *ibb, uint64_t allocator,
+			    uint32_t vm_id);
+
 void intel_bb_print(struct intel_bb *ibb);
 void intel_bb_dump(struct intel_bb *ibb, const char *filename);
 void intel_bb_set_debug(struct intel_bb *ibb, bool debug);
 void intel_bb_set_dump_base64(struct intel_bb *ibb, bool dump);
 
-/*
-static inline uint64_t
-intel_bb_set_default_object_alignment(struct intel_bb *ibb, uint64_t alignment)
-{
-	uint64_t old = ibb->alignment;
-
-	ibb->alignment = alignment;
-
-	return old;
-}
-
-static inline uint64_t
-intel_bb_get_default_object_alignment(struct intel_bb *ibb)
-{
-	return ibb->alignment;
-}
-*/
-
 static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
 {
 	return (uint32_t) ((uint8_t *) ibb->ptr - (uint8_t *) ibb->batch);
@@ -597,7 +588,9 @@ intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write);
 struct drm_i915_gem_exec_object2 *
 intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
 				      uint64_t alignment, bool write);
+void intel_bb_detach_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
 bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
+void intel_bb_print_intel_bufs(struct intel_bb *ibb);
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle);
 
diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index d8eb64e3a..166a957f1 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -727,6 +727,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 
 	buf->bops = bops;
 	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+	IGT_INIT_LIST_HEAD(&buf->link);
 
 	if (compression) {
 		int aux_width, aux_height;
@@ -822,13 +823,23 @@ void intel_buf_init(struct buf_ops *bops,
  *
  * Function closes gem BO inside intel_buf if bo is owned by intel_buf.
  * For handle passed from the caller intel_buf doesn't take ownership and
- * doesn't close it in close()/destroy() paths.
+ * doesn't close it in close()/destroy() paths. When intel_buf was previously
+ * added to intel_bb (intel_bb_add_intel_buf() call) it is tracked there and
+ * must be removed from its internal structures.
  */
 void intel_buf_close(struct buf_ops *bops, struct intel_buf *buf)
 {
 	igt_assert(bops);
 	igt_assert(buf);
 
+	/* If buf is tracked by some intel_bb ensure it will be removed there */
+	if (buf->ibb) {
+		intel_bb_remove_intel_buf(buf->ibb, buf);
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		IGT_INIT_LIST_HEAD(&buf->link);
+	}
+
 	if (buf->is_owner)
 		gem_close(bops->fd, buf->handle);
 }
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 54480bff6..1a3d86925 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -2,6 +2,7 @@
 #define __INTEL_BUFOPS_H__
 
 #include <stdint.h>
+#include "igt_list.h"
 #include "igt_aux.h"
 #include "intel_batchbuffer.h"
 
@@ -13,6 +14,7 @@ struct buf_ops;
 
 struct intel_buf {
 	struct buf_ops *bops;
+
 	bool is_owner;
 	uint32_t handle;
 	uint64_t size;
@@ -40,6 +42,10 @@ struct intel_buf {
 		uint32_t ctx;
 	} addr;
 
+	/* Tracking */
+	struct intel_bb *ibb;
+	struct igt_list_head link;
+
 	/* CPU mapping */
 	uint32_t *ptr;
 	bool cpu_write;
diff --git a/lib/media_spin.c b/lib/media_spin.c
index 5da469a52..d2345d153 100644
--- a/lib/media_spin.c
+++ b/lib/media_spin.c
@@ -132,7 +132,6 @@ gen8_media_spinfunc(int i915, struct intel_buf *buf, uint32_t spins)
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 
-	intel_bb_object_offset_to_buf(ibb, buf);
 	intel_bb_destroy(ibb);
 }
 
@@ -186,6 +185,5 @@ gen9_media_spinfunc(int i915, struct intel_buf *buf, uint32_t spins)
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 
-	intel_bb_object_offset_to_buf(ibb, buf);
 	intel_bb_destroy(ibb);
 }
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index c6c943506..77dfb6854 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -828,9 +828,6 @@ static void offset_control(struct buf_ops *bops)
 		print_buf(dst2, "dst2");
 	}
 
-	igt_assert(intel_bb_object_offset_to_buf(ibb, src) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst1) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst2) == true);
 	poff_src = src->addr.offset;
 	poff_dst1 = dst1->addr.offset;
 	poff_dst2 = dst2->addr.offset;
@@ -853,10 +850,6 @@ static void offset_control(struct buf_ops *bops)
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 	intel_bb_sync(ibb);
 
-	igt_assert(intel_bb_object_offset_to_buf(ibb, src) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst1) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst2) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst3) == true);
 	igt_assert(poff_src == src->addr.offset);
 	igt_assert(poff_dst1 == dst1->addr.offset);
 	igt_assert(poff_dst2 == dst2->addr.offset);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 18/35] lib/igt_fb: Initialize intel_buf with same size as fb
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (16 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 19/35] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
                   ` (18 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

We need to have the same size when intel_buf is initialized over an fb
(with compression) because the allocator could be called with a smaller
size, which could lead to relocation.

Use the new intel_buf function which allows initialization with a handle
and size.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_fb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index f0fcd1a7f..259449d60 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2197,11 +2197,11 @@ igt_fb_create_intel_buf(int fd, struct buf_ops *bops,
 	bo_name = gem_flink(fd, fb->gem_handle);
 	handle = gem_open(fd, bo_name);
 
-	buf = intel_buf_create_using_handle(bops, handle,
-					    fb->width, fb->height,
-					    fb->plane_bpp[0], 0,
-					    igt_fb_mod_to_tiling(fb->modifier),
-					    compression);
+	buf = intel_buf_create_using_handle_and_size(bops, handle,
+						     fb->width, fb->height,
+						     fb->plane_bpp[0], 0,
+						     igt_fb_mod_to_tiling(fb->modifier),
+						     compression, fb->size);
 	intel_buf_set_name(buf, name);
 
 	/* Make sure we close handle on destroy path */
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 19/35] tests/api_intel_bb: Remove check-canonical test
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (17 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 18/35] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 20/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
                   ` (17 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

As intel-bb internally uses decanonical addresses for objects/intel_bufs,
checking canonical bits makes no sense anymore.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
---
 tests/i915/api_intel_bb.c | 44 ---------------------------------------
 1 file changed, 44 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 77dfb6854..3a2176b57 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -206,47 +206,6 @@ static void lot_of_buffers(struct buf_ops *bops)
 		intel_buf_destroy(buf[i]);
 }
 
-/*
- * Make sure intel-bb space allocator currently doesn't enter 47-48 bit
- * gtt sizes.
- */
-static void check_canonical(struct buf_ops *bops)
-{
-	int i915 = buf_ops_get_fd(bops);
-	struct intel_bb *ibb;
-	struct intel_buf *buf;
-	uint32_t offset;
-	uint64_t address;
-	bool supports_48bit;
-
-	ibb = intel_bb_create(i915, PAGE_SIZE);
-	supports_48bit = ibb->supports_48b_address;
-	if (!supports_48bit)
-		intel_bb_destroy(ibb);
-	igt_require_f(supports_48bit, "We need 48bit ppgtt for testing\n");
-
-	address = 0xc00000000000;
-	if (debug_bb)
-		intel_bb_set_debug(ibb, true);
-
-	offset = intel_bb_emit_bbe(ibb);
-
-	buf = intel_buf_create(bops, 512, 512, 32, 0,
-			       I915_TILING_NONE, I915_COMPRESSION_NONE);
-
-	buf->addr.offset = address;
-	intel_bb_add_intel_buf(ibb, buf, true);
-	intel_bb_object_set_flag(ibb, buf->handle, EXEC_OBJECT_PINNED);
-
-	igt_assert(buf->addr.offset == 0);
-
-	intel_bb_exec(ibb, offset,
-		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
-
-	intel_buf_destroy(buf);
-	intel_bb_destroy(ibb);
-}
-
 /*
  * Check flags are cleared after intel_bb_reset(ibb, false);
  */
@@ -1192,9 +1151,6 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-	igt_subtest("check-canonical")
-		check_canonical(bops);
-
 	igt_subtest("reset-flags")
 		reset_flags(bops);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 20/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (18 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 19/35] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 21/35] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
                   ` (16 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

intel_bb was adapted to use the allocator. Change the test to verify
addresses in different scenarios - with relocations and with softpin.

v2: adding an intel_buf to intel-bb inserts addresses, so they should
    be the same even if an intel-bb cache purge was called

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 480 +++++++++++++++++++++++++++++---------
 1 file changed, 371 insertions(+), 109 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 3a2176b57..29d4dd7b6 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -22,6 +22,7 @@
  */
 
 #include "igt.h"
+#include "i915/gem.h"
 #include <unistd.h>
 #include <stdlib.h>
 #include <stdio.h>
@@ -94,6 +95,7 @@ static void check_buf(struct intel_buf *buf, uint8_t color)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
+	gem_set_domain(i915, buf->handle, I915_GEM_DOMAIN_WC, 0);
 
 	for (i = 0; i < buf->surface[0].size; i++)
 		igt_assert(ptr[i] == color);
@@ -123,24 +125,34 @@ static void print_buf(struct intel_buf *buf, const char *name)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
-	igt_debug("[%s] Buf handle: %d, size: %" PRIx64 ", v: 0x%02x, presumed_addr: %p\n",
+	igt_debug("[%s] Buf handle: %d, size: %" PRIu64
+		  ", v: 0x%02x, presumed_addr: %p\n",
 		  name, buf->handle, buf->surface[0].size, ptr[0],
 		  from_user_pointer(buf->addr.offset));
 	munmap(ptr, buf->surface[0].size);
 }
 
+static void reset_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	intel_bb_reset(ibb, false);
+	intel_bb_destroy(ibb);
+}
+
 static void simple_bb(struct buf_ops *bops, bool use_context)
 {
 	int i915 = buf_ops_get_fd(bops);
 	struct intel_bb *ibb;
-	uint32_t ctx;
+	uint32_t ctx = 0;
 
-	if (use_context) {
+	if (use_context)
 		gem_require_contexts(i915);
-		ctx = gem_context_create(i915);
-	}
 
-	ibb = intel_bb_create(i915, PAGE_SIZE);
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
@@ -155,10 +167,8 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 	intel_bb_reset(ibb, false);
 	intel_bb_reset(ibb, true);
 
-	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
-	intel_bb_ptr_align(ibb, 8);
-
 	if (use_context) {
+		ctx = gem_context_create(i915);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(i915, ctx, PAGE_SIZE);
 		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
@@ -166,11 +176,10 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 		intel_bb_exec(ibb, intel_bb_offset(ibb),
 			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC,
 			      true);
+		gem_context_destroy(i915, ctx);
 	}
 
 	intel_bb_destroy(ibb);
-	if (use_context)
-		gem_context_destroy(i915, ctx);
 }
 
 /*
@@ -194,16 +203,20 @@ static void lot_of_buffers(struct buf_ops *bops)
 	for (i = 0; i < NUM_BUFS; i++) {
 		buf[i] = intel_buf_create(bops, 4096, 1, 8, 0, I915_TILING_NONE,
 					  I915_COMPRESSION_NONE);
-		intel_bb_add_intel_buf(ibb, buf[i], false);
+		if (i % 2)
+			intel_bb_add_intel_buf(ibb, buf[i], false);
+		else
+			intel_bb_add_intel_buf_with_alignment(ibb, buf[i],
+							      0x4000, false);
 	}
 
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
 
-	intel_bb_destroy(ibb);
-
 	for (i = 0; i < NUM_BUFS; i++)
 		intel_buf_destroy(buf[i]);
+
+	intel_bb_destroy(ibb);
 }
 
 /*
@@ -298,70 +311,287 @@ static void reset_flags(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void add_remove_objects(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
 
-#define MI_FLUSH_DW (0x26<<23)
-#define BCS_SWCTRL  0x22200
-#define BCS_SRC_Y   (1 << 0)
-#define BCS_DST_Y   (1 << 1)
-static void __emit_blit(struct intel_bb *ibb,
-			struct intel_buf *src, struct intel_buf *dst)
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
+static void destroy_bb(struct buf_ops *bops)
 {
-	uint32_t mask;
-	bool has_64b_reloc;
-	uint64_t address;
-
-	has_64b_reloc = ibb->gen >= 8;
-
-	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
-		intel_bb_out(ibb, MI_LOAD_REGISTER_IMM);
-		intel_bb_out(ibb, BCS_SWCTRL);
-
-		mask = (BCS_SRC_Y | BCS_DST_Y) << 16;
-		if (src->tiling == I915_TILING_Y)
-			mask |= BCS_SRC_Y;
-		if (dst->tiling == I915_TILING_Y)
-			mask |= BCS_DST_Y;
-		intel_bb_out(ibb, mask);
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	/* Check destroy will detach intel_bufs */
+	intel_bb_destroy(ibb);
+	igt_assert(src->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(src->ibb == NULL);
+	igt_assert(mid->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(mid->ibb == NULL);
+	igt_assert(dst->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(dst->ibb == NULL);
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	intel_bb_destroy(ibb);
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+}
+
+static void object_reloc(struct buf_ops *bops, enum obj_cache_ops cache_op)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t h1, h2;
+	uint64_t poff_bb, poff_h1, poff_h2;
+	uint64_t poff2_bb, poff2_h1, poff2_h2;
+	uint64_t flags = 0;
+	uint64_t shift = cache_op == PURGE_CACHE ? 0x2000 : 0x0;
+	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
+
+	ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	h1 = gem_create(i915, PAGE_SIZE);
+	h2 = gem_create(i915, PAGE_SIZE);
+
+	/* intel_bb_create adds bb handle so it has 0 for relocs */
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	igt_assert(poff_bb == 0);
+
+	/* Before adding to intel_bb it should return INVALID_ADDRESS */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[1] poff_h1: %lx\n", (long) poff_h1);
+	igt_debug("[1] poff_h2: %lx\n", (long) poff_h2);
+	igt_assert(poff_h1 == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(poff_h2 == INTEL_BUF_INVALID_ADDRESS);
+
+	intel_bb_add_object(ibb, h1, PAGE_SIZE, poff_h1, 0, true);
+	intel_bb_add_object(ibb, h2, PAGE_SIZE, poff_h2, 0x2000, true);
+
+	/*
+	 * Objects were added to bb, we expect initial addresses are zeroed
+	 * for relocs.
+	 */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_assert(poff_h1 == 0);
+	igt_assert(poff_h2 == 0);
+
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
+
+	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff2_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff2_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[2] poff2_h1: %lx\n", (long) poff2_h1);
+	igt_debug("[2] poff2_h2: %lx\n", (long) poff2_h2);
+	/* Some addresses won't be 0 */
+	igt_assert(poff2_bb | poff2_h1 | poff2_h2);
+
+	intel_bb_reset(ibb, purge_cache);
+
+	if (purge_cache) {
+		intel_bb_add_object(ibb, h1, PAGE_SIZE, poff2_h1, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2 + shift, 0x2000, true);
 	}
 
-	intel_bb_out(ibb,
-		     XY_SRC_COPY_BLT_CMD |
-		     XY_SRC_COPY_BLT_WRITE_ALPHA |
-		     XY_SRC_COPY_BLT_WRITE_RGB |
-		     (6 + 2 * has_64b_reloc));
-	intel_bb_out(ibb, 3 << 24 | 0xcc << 16 | dst->surface[0].stride);
-	intel_bb_out(ibb, 0);
-	intel_bb_out(ibb, intel_buf_height(dst) << 16 | intel_buf_width(dst));
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[3] poff_h1: %lx\n", (long) poff_h1);
+	igt_debug("[3] poff_h2: %lx\n", (long) poff_h2);
+	igt_debug("[3] poff2_h1: %lx\n", (long) poff2_h1);
+	igt_debug("[3] poff2_h2: %lx + shift (%lx)\n", (long) poff2_h2,
+		 (long) shift);
+	igt_assert(poff_h1 == poff2_h1);
+	igt_assert(poff_h2 == poff2_h2 + shift);
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
 
-	address = intel_bb_get_object_offset(ibb, dst->handle);
-	intel_bb_emit_reloc_fenced(ibb, dst->handle,
-				   I915_GEM_DOMAIN_RENDER,
-				   I915_GEM_DOMAIN_RENDER,
-				   0, address);
-	intel_bb_out(ibb, 0);
-	intel_bb_out(ibb, src->surface[0].stride);
+	gem_close(i915, h1);
+	gem_close(i915, h2);
+	intel_bb_destroy(ibb);
+}
 
-	address = intel_bb_get_object_offset(ibb, src->handle);
-	intel_bb_emit_reloc_fenced(ibb, src->handle,
-				   I915_GEM_DOMAIN_RENDER, 0,
-				   0, address);
+#define WITHIN_RANGE(offset, start, end) \
+	(DECANONICAL(offset) >= start && DECANONICAL(offset) <= end)
+static void object_noreloc(struct buf_ops *bops, enum obj_cache_ops cache_op,
+			   uint8_t allocator_type)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t h1, h2;
+	uint64_t start, end;
+	uint64_t poff_bb, poff_h1, poff_h2;
+	uint64_t poff2_bb, poff2_h1, poff2_h2;
+	uint64_t flags = 0;
+	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
 
-	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
-		igt_assert(ibb->gen >= 6);
-		intel_bb_out(ibb, MI_FLUSH_DW | 2);
-		intel_bb_out(ibb, 0);
-		intel_bb_out(ibb, 0);
-		intel_bb_out(ibb, 0);
+	igt_require(gem_uses_full_ppgtt(i915));
 
-		intel_bb_out(ibb, MI_LOAD_REGISTER_IMM);
-		intel_bb_out(ibb, BCS_SWCTRL);
-		intel_bb_out(ibb, (BCS_SRC_Y | BCS_DST_Y) << 16);
+	ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE, allocator_type);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	h1 = gem_create(i915, PAGE_SIZE);
+	h2 = gem_create(i915, PAGE_SIZE);
+
+	intel_allocator_get_address_range(ibb->allocator_handle,
+					  &start, &end);
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	igt_debug("[1] bb presumed offset: 0x%" PRIx64
+		  ", start: %" PRIx64 ", end: %" PRIx64 "\n",
+		  poff_bb, start, end);
+	igt_assert(WITHIN_RANGE(poff_bb, start, end));
+
+	/* Before adding to intel_bb it should return INVALID_ADDRESS */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[1] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[1] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+	igt_assert(poff_h1 == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(poff_h2 == INTEL_BUF_INVALID_ADDRESS);
+
+	intel_bb_add_object(ibb, h1, PAGE_SIZE, poff_h1, 0, true);
+	intel_bb_add_object(ibb, h2, PAGE_SIZE, poff_h2, 0, true);
+
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[2] bb presumed offset: 0x%"PRIx64"\n", poff_bb);
+	igt_debug("[2] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[2] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+	igt_assert(WITHIN_RANGE(poff_bb, start, end));
+	igt_assert(WITHIN_RANGE(poff_h1, start, end));
+	igt_assert(WITHIN_RANGE(poff_h2, start, end));
+
+	intel_bb_emit_bbe(ibb);
+	igt_debug("exec flags: %" PRIX64 "\n", flags);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
+
+	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff2_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff2_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[3] bb presumed offset: 0x%"PRIx64"\n", poff2_bb);
+	igt_debug("[3] h1 presumed offset: 0x%"PRIx64"\n", poff2_h1);
+	igt_debug("[3] h2 presumed offset: 0x%"PRIx64"\n", poff2_h2);
+	igt_assert(poff_h1 == poff2_h1);
+	igt_assert(poff_h2 == poff2_h2);
+
+	igt_debug("purge: %d\n", purge_cache);
+	intel_bb_reset(ibb, purge_cache);
+
+	/*
+	 * Check if intel-bb cache was purged:
+	 * a) retrieve same address from allocator (works for simple, not random)
+	 * b) passing previous address enters allocator <-> intel_bb cache
+	 *    consistency check path.
+	 */
+	if (purge_cache) {
+		intel_bb_add_object(ibb, h1, PAGE_SIZE,
+				    INTEL_BUF_INVALID_ADDRESS, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2, 0, true);
+	} else {
+		/* Check that the consistency check does not fail */
+		intel_bb_add_object(ibb, h1, PAGE_SIZE, poff2_h1, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2, 0, true);
+	}
+
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[4] bb presumed offset: 0x%"PRIx64"\n", poff_bb);
+	igt_debug("[4] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[4] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+
+	/* For the simple allocator, or when the cache is kept, addresses must match */
+	if (allocator_type == INTEL_ALLOCATOR_SIMPLE || !purge_cache) {
+		igt_assert(poff_h1 == poff2_h1);
+		igt_assert(poff_h2 == poff2_h2);
 	}
+
+	gem_close(i915, h1);
+	gem_close(i915, h2);
+	intel_bb_destroy(ibb);
+}
+static void __emit_blit(struct intel_bb *ibb,
+			 struct intel_buf *src, struct intel_buf *dst)
+{
+	intel_bb_emit_blt_copy(ibb,
+			       src, 0, 0, src->surface[0].stride,
+			       dst, 0, 0, dst->surface[0].stride,
+			       intel_buf_width(dst),
+			       intel_buf_height(dst),
+			       dst->bpp);
 }
 
 static void blit(struct buf_ops *bops,
 		 enum reloc_objects reloc_obj,
-		 enum obj_cache_ops cache_op)
+		 enum obj_cache_ops cache_op,
+		 uint8_t allocator_type)
 {
 	int i915 = buf_ops_get_fd(bops);
 	struct intel_bb *ibb;
@@ -372,6 +602,9 @@ static void blit(struct buf_ops *bops,
 	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
 	bool do_relocs = reloc_obj == RELOC ? true : false;
 
+	if (!do_relocs)
+		igt_require(gem_uses_full_ppgtt(i915));
+
 	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
 	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
 
@@ -383,38 +616,31 @@ static void blit(struct buf_ops *bops,
 	if (do_relocs) {
 		ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);
 	} else {
-		ibb = intel_bb_create(i915, PAGE_SIZE);
+		ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
+						     allocator_type);
 		flags |= I915_EXEC_NO_RELOC;
 	}
 
-	if (ibb->gen >= 6)
-		flags |= I915_EXEC_BLT;
-
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
-
-	intel_bb_add_intel_buf(ibb, src, false);
-	intel_bb_add_intel_buf(ibb, dst, true);
-
 	__emit_blit(ibb, src, dst);
 
 	/* We expect initial addresses are zeroed for relocs */
-	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
-	poff_src = intel_bb_get_object_offset(ibb, src->handle);
-	poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
-	igt_debug("bb  presumed offset: 0x%"PRIx64"\n", poff_bb);
-	igt_debug("src presumed offset: 0x%"PRIx64"\n", poff_src);
-	igt_debug("dst presumed offset: 0x%"PRIx64"\n", poff_dst);
 	if (reloc_obj == RELOC) {
+		poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+		poff_src = intel_bb_get_object_offset(ibb, src->handle);
+		poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
+		igt_debug("bb  presumed offset: 0x%"PRIx64"\n", poff_bb);
+		igt_debug("src presumed offset: 0x%"PRIx64"\n", poff_src);
+		igt_debug("dst presumed offset: 0x%"PRIx64"\n", poff_dst);
 		igt_assert(poff_bb == 0);
 		igt_assert(poff_src == 0);
 		igt_assert(poff_dst == 0);
 	}
 
 	intel_bb_emit_bbe(ibb);
-	igt_debug("exec flags: %" PRIX64 "\n", flags);
-	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, true);
+	intel_bb_flush_blit(ibb);
 	check_buf(dst, COLOR_CC);
 
 	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
@@ -423,15 +649,29 @@ static void blit(struct buf_ops *bops,
 
 	intel_bb_reset(ibb, purge_cache);
 
+	/* After purge, offsets are lost and bufs are removed from the tracking list */
+	if (purge_cache) {
+		src->addr.offset = poff_src;
+		dst->addr.offset = poff_dst;
+	}
+
+	/* Add buffers again, should work both for purge and keep cache */
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	igt_assert_f(poff_src == src->addr.offset,
+		     "prev src addr: %" PRIx64 " <> src addr %" PRIx64 "\n",
+		     poff_src, src->addr.offset);
+	igt_assert_f(poff_dst == dst->addr.offset,
+		     "prev dst addr: %" PRIx64 " <> dst addr %" PRIx64 "\n",
+		     poff_dst, dst->addr.offset);
+
 	fill_buf(src, COLOR_77);
 	fill_buf(dst, COLOR_00);
 
-	if (purge_cache && !do_relocs) {
-		intel_bb_add_intel_buf(ibb, src, false);
-		intel_bb_add_intel_buf(ibb, dst, true);
-	}
-
 	__emit_blit(ibb, src, dst);
+	intel_bb_flush_blit(ibb);
+	check_buf(dst, COLOR_77);
 
 	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
 	poff2_src = intel_bb_get_object_offset(ibb, src->handle);
@@ -455,21 +695,9 @@ static void blit(struct buf_ops *bops,
 	 * we are in full control of our own GTT.
 	 */
 	if (gem_uses_full_ppgtt(i915)) {
-		if (purge_cache) {
-			if (do_relocs) {
-				igt_assert_eq_u64(poff2_bb,  0);
-				igt_assert_eq_u64(poff2_src, 0);
-				igt_assert_eq_u64(poff2_dst, 0);
-			} else {
-				igt_assert_neq_u64(poff_bb, poff2_bb);
-				igt_assert_eq_u64(poff_src, poff2_src);
-				igt_assert_eq_u64(poff_dst, poff2_dst);
-			}
-		} else {
-			igt_assert_eq_u64(poff_bb,  poff2_bb);
-			igt_assert_eq_u64(poff_src, poff2_src);
-			igt_assert_eq_u64(poff_dst, poff2_dst);
-		}
+		igt_assert_eq_u64(poff_bb,  poff2_bb);
+		igt_assert_eq_u64(poff_src, poff2_src);
+		igt_assert_eq_u64(poff_dst, poff2_dst);
 	}
 
 	intel_bb_emit_bbe(ibb);
@@ -636,7 +864,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
 	if (ret != Z_OK) {
 		igt_warn("error compressing, ret: %d\n", ret);
 	} else {
-		igt_info("compressed %" PRIx64 " -> %lu\n",
+		igt_info("compressed %" PRIu64 " -> %lu\n",
 			 buf->surface[0].size, outsize);
 
 		igt_info("--- %s ---\n", name);
@@ -1142,6 +1370,10 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 		gen = intel_gen(intel_get_drm_devid(i915));
 	}
 
+	igt_describe("Ensure reset is possible on fresh bb");
+	igt_subtest("reset-bb")
+		reset_bb(bops);
+
 	igt_subtest("simple-bb")
 		simple_bb(bops, false);
 
@@ -1154,17 +1386,47 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("reset-flags")
 		reset_flags(bops);
 
-	igt_subtest("blit-noreloc-keep-cache")
-		blit(bops, NORELOC, KEEP_CACHE);
+	igt_subtest("add-remove-objects")
+		add_remove_objects(bops);
 
-	igt_subtest("blit-reloc-purge-cache")
-		blit(bops, RELOC, PURGE_CACHE);
+	igt_subtest("destroy-bb")
+		destroy_bb(bops);
 
-	igt_subtest("blit-noreloc-purge-cache")
-		blit(bops, NORELOC, PURGE_CACHE);
+	igt_subtest("object-reloc-purge-cache")
+		object_reloc(bops, PURGE_CACHE);
+
+	igt_subtest("object-reloc-keep-cache")
+		object_reloc(bops, KEEP_CACHE);
+
+	igt_subtest("object-noreloc-purge-cache-simple")
+		object_noreloc(bops, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("object-noreloc-keep-cache-simple")
+		object_noreloc(bops, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("object-noreloc-purge-cache-random")
+		object_noreloc(bops, PURGE_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("object-noreloc-keep-cache-random")
+		object_noreloc(bops, KEEP_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-reloc-purge-cache")
+		blit(bops, RELOC, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
 
 	igt_subtest("blit-reloc-keep-cache")
-		blit(bops, RELOC, KEEP_CACHE);
+		blit(bops, RELOC, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-noreloc-keep-cache-random")
+		blit(bops, NORELOC, KEEP_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-noreloc-purge-cache-random")
+		blit(bops, NORELOC, PURGE_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-noreloc-keep-cache")
+		blit(bops, NORELOC, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-noreloc-purge-cache")
+		blit(bops, NORELOC, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
 
 	igt_subtest("intel-bb-blit-none")
 		do_intel_bb_blit(bops, 10, I915_TILING_NONE);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 21/35] tests/api_intel_bb: Add compressed->compressed copy
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (19 preceding siblings ...)
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 20/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 22/35] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
                   ` (15 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Check that aux pagetables work when more than one compressed
buffer is added.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 29d4dd7b6..41d16e22a 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -1260,7 +1260,7 @@ static void render_ccs(struct buf_ops *bops)
 	struct intel_bb *ibb;
 	const int width = 1024;
 	const int height = 1024;
-	struct intel_buf src, dst, final;
+	struct intel_buf src, dst, dst2, final;
 	int i915 = buf_ops_get_fd(bops);
 	uint32_t fails = 0;
 	uint32_t compressed = 0;
@@ -1275,6 +1275,8 @@ static void render_ccs(struct buf_ops *bops)
 			 I915_COMPRESSION_NONE);
 	scratch_buf_init(bops, &dst, width, height, I915_TILING_Y,
 			 I915_COMPRESSION_RENDER);
+	scratch_buf_init(bops, &dst2, width, height, I915_TILING_Y,
+			 I915_COMPRESSION_RENDER);
 	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
 			 I915_COMPRESSION_NONE);
 
@@ -1294,6 +1296,12 @@ static void render_ccs(struct buf_ops *bops)
 	render_copy(ibb,
 		    &dst,
 		    0, 0, width, height,
+		    &dst2,
+		    0, 0);
+
+	render_copy(ibb,
+		    &dst2,
+		    0, 0, width, height,
 		    &final,
 		    0, 0);
 
@@ -1309,12 +1317,15 @@ static void render_ccs(struct buf_ops *bops)
 	if (write_png) {
 		intel_buf_write_to_png(&src, "render-ccs-src.png");
 		intel_buf_write_to_png(&dst, "render-ccs-dst.png");
+		intel_buf_write_to_png(&dst2, "render-ccs-dst2.png");
 		intel_buf_write_aux_to_png(&dst, "render-ccs-dst-aux.png");
+		intel_buf_write_aux_to_png(&dst2, "render-ccs-dst2-aux.png");
 		intel_buf_write_to_png(&final, "render-ccs-final.png");
 	}
 
 	intel_buf_close(bops, &src);
 	intel_buf_close(bops, &dst);
+	intel_buf_close(bops, &dst2);
 	intel_buf_close(bops, &final);
 
 	igt_assert_f(fails == 0, "render-ccs fails: %d\n", fails);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 22/35] tests/api_intel_bb: Add purge-bb test
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

Check that the acquired address is the same after purging the bb.
For relocations we expect 0 twice; for the allocator we expect a
release followed by an alloc of the same address.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 41d16e22a..596c05001 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -142,6 +142,33 @@ static void reset_bb(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void purge_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_buf *buf;
+	struct intel_bb *ibb;
+	uint64_t offset0, offset1;
+
+	buf = intel_buf_create(bops, 512, 512, 32, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	ibb = intel_bb_create(i915, 4096);
+	intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset0 = buf->addr.offset;
+
+	intel_bb_reset(ibb, true);
+	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset1 = buf->addr.offset;
+
+	igt_assert(offset0 == offset1);
+
+	intel_buf_destroy(buf);
+	intel_bb_destroy(ibb);
+}
+
 static void simple_bb(struct buf_ops *bops, bool use_context)
 {
 	int i915 = buf_ops_get_fd(bops);
@@ -1385,6 +1412,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("reset-bb")
 		reset_bb(bops);
 
+	igt_subtest_f("purge-bb")
+		purge_bb(bops);
+
 	igt_subtest("simple-bb")
 		simple_bb(bops, false);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 23/35] tests/api_intel_bb: Add simple intel-bb which uses allocator
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

A simple test which uses the allocator and can easily be copy-pasted
wherever intel-bb is used for batchbuffer creation.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 596c05001..0c73ee21c 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -209,6 +209,36 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 	intel_bb_destroy(ibb);
 }
 
+static void bb_with_allocator(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst;
+	uint32_t ctx = 0;
+
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	intel_bb_remove_intel_buf(ibb, src);
+	intel_bb_remove_intel_buf(ibb, dst);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
 /*
  * Make sure we lead to realloc in the intel_bb.
  */
@@ -1421,6 +1451,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("simple-bb-ctx")
 		simple_bb(bops, true);
 
+	igt_subtest("bb-with-allocator")
+		bb_with_allocator(bops);
+
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 24/35] tests/api_intel_bb: Use allocator in delta-check test
@ 2021-03-17 14:45 ` Zbigniew Kempczyński
From: Zbigniew Kempczyński @ 2021-03-17 14:45 UTC (permalink / raw)
  To: igt-dev

We want to use the address returned from emit_reloc() without calling
the kernel relocation path. Change intel-bb to use the allocator so we
fully control the addresses passed in execbuf.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 0c73ee21c..26e982057 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -1126,7 +1126,8 @@ static void delta_check(struct buf_ops *bops)
 	uint64_t offset;
 	bool supports_48bit;
 
-	ibb = intel_bb_create(i915, PAGE_SIZE);
+	ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
 	supports_48bit = ibb->supports_48b_address;
 	if (!supports_48bit)
 		intel_bb_destroy(ibb);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 25/35] tests/api_intel_bb: Check switching vm in intel-bb
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

For more vm-controlled scenarios we have to support changing the vm
in intel-bb. The test verifies the allocator is able to provide two
vms which can be assigned to and used within an intel-bb context.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 105 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 105 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 26e982057..4b6090830 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -36,6 +36,7 @@
 #include <glib.h>
 #include <zlib.h>
 #include "intel_bufops.h"
+#include "i915/gem_vm.h"
 
 #define PAGE_SIZE 4096
 
@@ -239,6 +240,107 @@ static void bb_with_allocator(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void bb_with_vm(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct drm_i915_gem_context_param arg = {
+		.param = I915_CONTEXT_PARAM_VM,
+	};
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst, *gap;
+	uint32_t ctx = 0, vm_id1, vm_id2;
+	uint64_t prev_vm, vm;
+	uint64_t src_addr[5], dst_addr[5];
+
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	gap = intel_buf_create(bops, 4096, 128, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+
+	/* vm for second blit */
+	vm_id1 = gem_vm_create(i915);
+
+	/* Get vm_id for default vm */
+	arg.ctx_id = ctx;
+	gem_context_get_param(i915, &arg);
+	vm_id2 = arg.value;
+
+	igt_debug("Vm_id1: %u\n", vm_id1);
+	igt_debug("Vm_id2: %u\n", vm_id2);
+
+	/* First blit without set calling setparam */
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[0] = src->addr.offset;
+	dst_addr[0] = dst->addr.offset;
+	igt_debug("step1: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[0], (long long) dst_addr[0]);
+
+	/* Open new allocator with vm_id */
+	vm = intel_allocator_open_vm(i915, vm_id1, INTEL_ALLOCATOR_SIMPLE);
+	prev_vm = intel_bb_assign_vm(ibb, vm, vm_id1);
+
+	intel_bb_add_intel_buf(ibb, gap, false);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[1] = src->addr.offset;
+	dst_addr[1] = dst->addr.offset;
+	igt_debug("step2: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[1], (long long) dst_addr[1]);
+
+	/* Back with default vm */
+	intel_bb_assign_vm(ibb, prev_vm, vm_id2);
+	intel_bb_add_intel_buf(ibb, gap, false);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[2] = src->addr.offset;
+	dst_addr[2] = dst->addr.offset;
+	igt_debug("step3: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[2], (long long) dst_addr[2]);
+
+	/* And exchange one more time */
+	intel_bb_assign_vm(ibb, vm, vm_id1);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[3] = src->addr.offset;
+	dst_addr[3] = dst->addr.offset;
+	igt_debug("step4: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[3], (long long) dst_addr[3]);
+
+	/* Back with default vm */
+	gem_vm_destroy(i915, vm_id1);
+	gem_vm_destroy(i915, vm_id2);
+	intel_bb_assign_vm(ibb, prev_vm, 0);
+
+	/* We can close it after assign previous vm to ibb */
+	intel_allocator_close(vm);
+
+	/* Try default vm still works */
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	src_addr[4] = src->addr.offset;
+	dst_addr[4] = dst->addr.offset;
+	igt_debug("step5: src: 0x%llx, dst: 0x%llx\n",
+		  (long long) src_addr[4], (long long) dst_addr[4]);
+
+	/* Addresses should match for vm and prev_vm blits */
+	igt_assert_eq(src_addr[0], src_addr[2]);
+	igt_assert_eq(dst_addr[0], dst_addr[2]);
+	igt_assert_eq(src_addr[1], src_addr[3]);
+	igt_assert_eq(dst_addr[1], dst_addr[3]);
+	igt_assert_eq(src_addr[2], src_addr[4]);
+	igt_assert_eq(dst_addr[2], dst_addr[4]);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(dst);
+	intel_buf_destroy(gap);
+	intel_bb_destroy(ibb);
+}
+
 /*
  * Make sure we lead to realloc in the intel_bb.
  */
@@ -1455,6 +1557,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("bb-with-allocator")
 		bb_with_allocator(bops);
 
+	igt_subtest("bb-with-vm")
+		bb_with_vm(bops);
+
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 26/35] tests/api_intel_allocator: Simple allocator test suite
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

We want to verify the allocator works as expected, so we try to exploit it.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_allocator.c | 567 +++++++++++++++++++++++++++++++
 tests/meson.build                |   1 +
 2 files changed, 568 insertions(+)
 create mode 100644 tests/i915/api_intel_allocator.c

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
new file mode 100644
index 000000000..52acd2ff3
--- /dev/null
+++ b/tests/i915/api_intel_allocator.c
@@ -0,0 +1,567 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <stdatomic.h>
+#include "i915/gem.h"
+#include "igt.h"
+#include "igt_aux.h"
+#include "intel_allocator.h"
+
+#define OBJ_SIZE 1024
+
+struct test_obj {
+	uint32_t handle;
+	uint64_t offset;
+	uint64_t size;
+};
+
+static _Atomic(uint32_t) next_handle;
+
+static inline uint32_t gem_handle_gen(void)
+{
+	return atomic_fetch_add(&next_handle, 1);
+}
+
+static void alloc_simple(int fd)
+{
+	uint64_t ahnd;
+	uint64_t offset0, offset1, size = 0x1000, align = 0x1000;
+	bool is_allocated, freed;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	offset0 = intel_allocator_alloc(ahnd, 1, size, align);
+	offset1 = intel_allocator_alloc(ahnd, 1, size, align);
+	igt_assert(offset0 == offset1);
+
+	is_allocated = intel_allocator_is_allocated(ahnd, 1, size, offset0);
+	igt_assert(is_allocated);
+
+	freed = intel_allocator_free(ahnd, 1);
+	igt_assert(freed);
+
+	is_allocated = intel_allocator_is_allocated(ahnd, 1, size, offset0);
+	igt_assert(!is_allocated);
+
+	freed = intel_allocator_free(ahnd, 1);
+	igt_assert(!freed);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static void reserve_simple(int fd)
+{
+	uint64_t ahnd, start, size = 0x1000;
+	bool reserved, unreserved;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	intel_allocator_get_address_range(ahnd, &start, NULL);
+
+	reserved = intel_allocator_reserve(ahnd, 0, size, start);
+	igt_assert(reserved);
+
+	reserved = intel_allocator_is_reserved(ahnd, size, start);
+	igt_assert(reserved);
+
+	reserved = intel_allocator_reserve(ahnd, 0, size, start);
+	igt_assert(!reserved);
+
+	unreserved = intel_allocator_unreserve(ahnd, 0, size, start);
+	igt_assert(unreserved);
+
+	reserved = intel_allocator_is_reserved(ahnd, size, start);
+	igt_assert(!reserved);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static void reserve(int fd, uint8_t type)
+{
+	struct test_obj obj;
+	uint64_t ahnd, offset = 0x40000, size = 0x1000;
+
+	ahnd = intel_allocator_open(fd, 0, type);
+
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, size, offset), true);
+	/* try overlapping won't succeed */
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, size, offset + size/2), false);
+
+	obj.handle = gem_handle_gen();
+	obj.size = OBJ_SIZE;
+	obj.offset = intel_allocator_alloc(ahnd, obj.handle, obj.size, 0);
+
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, obj.size, obj.offset), false);
+	intel_allocator_free(ahnd, obj.handle);
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, obj.size, obj.offset), true);
+
+	igt_assert_eq(intel_allocator_unreserve(ahnd, 0, obj.size, obj.offset), true);
+	igt_assert_eq(intel_allocator_unreserve(ahnd, 0, size, offset), true);
+	igt_assert_eq(intel_allocator_reserve(ahnd, 0, size, offset + size/2), true);
+	igt_assert_eq(intel_allocator_unreserve(ahnd, 0, size, offset + size/2), true);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static bool overlaps(struct test_obj *buf1, struct test_obj *buf2)
+{
+	uint64_t begin1 = buf1->offset;
+	uint64_t end1 = buf1->offset + buf1->size;
+	uint64_t begin2 = buf2->offset;
+	uint64_t end2 = buf2->offset + buf2->size;
+
+	return begin1 < end2 && begin2 < end1;
+}
+
+static void basic_alloc(int fd, int cnt, uint8_t type)
+{
+	struct test_obj *obj;
+	uint64_t ahnd;
+	int i, j;
+
+	ahnd = intel_allocator_open(fd, 0, type);
+	obj = malloc(sizeof(struct test_obj) * cnt);
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("allocating objects: ", i, cnt);
+		obj[i].handle = gem_handle_gen();
+		obj[i].size = OBJ_SIZE;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      obj[i].size, 4096);
+		igt_assert_eq(obj[i].offset % 4096, 0);
+	}
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("check overlapping: ", i, cnt);
+
+		if (type == INTEL_ALLOCATOR_RANDOM)
+			continue;
+
+		for (j = 0; j < cnt; j++) {
+			if (j == i)
+				continue;
+			igt_assert(!overlaps(&obj[i], &obj[j]));
+		}
+	}
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("freeing objects: ", i, cnt);
+		intel_allocator_free(ahnd, obj[i].handle);
+	}
+
+	free(obj);
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+static void reuse(int fd, uint8_t type)
+{
+	struct test_obj obj[128], tmp;
+	uint64_t ahnd, prev_offset;
+	int i;
+
+	ahnd = intel_allocator_open(fd, 0, type);
+
+	for (i = 0; i < 128; i++) {
+		obj[i].handle = gem_handle_gen();
+		obj[i].size = OBJ_SIZE;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      obj[i].size, 0x40);
+	}
+
+	/* check simple reuse */
+	for (i = 0; i < 128; i++) {
+		prev_offset = obj[i].offset;
+		obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+						      obj[i].size, 0);
+		igt_assert(prev_offset == obj[i].offset);
+	}
+	i--;
+
+	/* free previously allocated bo */
+	intel_allocator_free(ahnd, obj[i].handle);
+	/* alloc different buffer to fill freed hole */
+	tmp.handle = gem_handle_gen();
+	tmp.offset = intel_allocator_alloc(ahnd, tmp.handle, OBJ_SIZE, 0);
+	igt_assert(prev_offset == tmp.offset);
+
+	obj[i].offset = intel_allocator_alloc(ahnd, obj[i].handle,
+					      obj[i].size, 0);
+	igt_assert(prev_offset != obj[i].offset);
+	intel_allocator_free(ahnd, tmp.handle);
+
+	for (i = 0; i < 128; i++)
+		intel_allocator_free(ahnd, obj[i].handle);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+struct ial_thread_args {
+	uint64_t ahnd;
+	pthread_t thread;
+	uint32_t *handles;
+	uint64_t *offsets;
+	uint32_t count;
+	int threads;
+	int idx;
+};
+
+static void *alloc_bo_in_thread(void *arg)
+{
+	struct ial_thread_args *a = arg;
+	int i;
+
+	for (i = a->idx; i < a->count; i += a->threads) {
+		a->handles[i] = gem_handle_gen();
+		a->offsets[i] = intel_allocator_alloc(a->ahnd, a->handles[i], OBJ_SIZE,
+						      1UL << ((random() % 20) + 1));
+	}
+
+	return NULL;
+}
+
+static void *free_bo_in_thread(void *arg)
+{
+	struct ial_thread_args *a = arg;
+	int i;
+
+	for (i = (a->idx + 1) % a->threads; i < a->count; i += a->threads)
+		intel_allocator_free(a->ahnd, a->handles[i]);
+
+	return NULL;
+}
+
+#define THREADS 6
+
+static void parallel_one(int fd, uint8_t type)
+{
+	struct ial_thread_args a[THREADS];
+	uint32_t *handles;
+	uint64_t ahnd, *offsets;
+	int count, i;
+
+	srandom(0xdeadbeef);
+	ahnd = intel_allocator_open(fd, 0, type);
+	count = 1UL << 12;
+
+	handles = malloc(sizeof(uint32_t) * count);
+	offsets = calloc(1, sizeof(uint64_t) * count);
+
+	for (i = 0; i < THREADS; i++) {
+		a[i].ahnd = ahnd;
+		a[i].handles = handles;
+		a[i].offsets = offsets;
+		a[i].count = count;
+		a[i].threads = THREADS;
+		a[i].idx = i;
+		pthread_create(&a[i].thread, NULL, alloc_bo_in_thread, &a[i]);
+	}
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(a[i].thread, NULL);
+
+	/* Check if all objects are allocated */
+	for (i = 0; i < count; i++) {
+		/* Reloc + random allocators don't have state. */
+		if (type == INTEL_ALLOCATOR_RELOC || type == INTEL_ALLOCATOR_RANDOM)
+			break;
+
+		igt_assert_eq(offsets[i],
+			      intel_allocator_alloc(a->ahnd, handles[i], OBJ_SIZE, 0));
+	}
+
+	for (i = 0; i < THREADS; i++)
+		pthread_create(&a[i].thread, NULL, free_bo_in_thread, &a[i]);
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(a[i].thread, NULL);
+
+	free(handles);
+	free(offsets);
+
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+}
+
+#define SIMPLE_GROUP_ALLOCS 8
+static void __simple_allocs(int fd)
+{
+	uint32_t handles[SIMPLE_GROUP_ALLOCS];
+	uint64_t ahnd;
+	uint32_t ctx;
+	int i;
+
+	ctx = rand() % 2;
+	ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
+
+	for (i = 0; i < SIMPLE_GROUP_ALLOCS; i++) {
+		uint32_t size;
+
+		size = (rand() % 4 + 1) * 0x1000;
+		handles[i] = gem_create(fd, size);
+		intel_allocator_alloc(ahnd, handles[i], size, 0x1000);
+	}
+
+	for (i = 0; i < SIMPLE_GROUP_ALLOCS; i++) {
+		igt_assert_f(intel_allocator_free(ahnd, handles[i]) == 1,
+			     "Error freeing handle: %u\n", handles[i]);
+		gem_close(fd, handles[i]);
+	}
+
+	intel_allocator_close(ahnd);
+}
+
+static void fork_simple_once(int fd)
+{
+	intel_allocator_multiprocess_start();
+
+	igt_fork(child, 1)
+		__simple_allocs(fd);
+
+	igt_waitchildren();
+
+	intel_allocator_multiprocess_stop();
+}
+
+#define SIMPLE_TIMEOUT 5
+static void *__fork_simple_thread(void *data)
+{
+	int fd = (int) (long) data;
+
+	igt_until_timeout(SIMPLE_TIMEOUT) {
+		__simple_allocs(fd);
+	}
+
+	return NULL;
+}
+
+static void fork_simple_stress(int fd, bool two_level_inception)
+{
+	pthread_t thread0, thread1;
+	uint64_t ahnd0, ahnd1;
+	bool are_empty;
+
+	__intel_allocator_multiprocess_prepare();
+
+	igt_fork(child, 8) {
+		if (two_level_inception) {
+			pthread_create(&thread0, NULL, __fork_simple_thread,
+				       (void *) (long) fd);
+			pthread_create(&thread1, NULL, __fork_simple_thread,
+				       (void *) (long) fd);
+		}
+
+		igt_until_timeout(SIMPLE_TIMEOUT) {
+			__simple_allocs(fd);
+		}
+
+		if (two_level_inception) {
+			pthread_join(thread0, NULL);
+			pthread_join(thread1, NULL);
+		}
+	}
+
+	pthread_create(&thread0, NULL, __fork_simple_thread, (void *) (long) fd);
+	pthread_create(&thread1, NULL, __fork_simple_thread, (void *) (long) fd);
+
+	ahnd0 = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+
+	__intel_allocator_multiprocess_start();
+
+	igt_waitchildren();
+
+	pthread_join(thread0, NULL);
+	pthread_join(thread1, NULL);
+
+	are_empty = intel_allocator_close(ahnd0);
+	are_empty &= intel_allocator_close(ahnd1);
+
+	intel_allocator_multiprocess_stop();
+
+	igt_assert_f(are_empty, "Allocators were not emptied\n");
+}
+
+static void __reopen_allocs(int fd1, int fd2, bool check)
+{
+	uint64_t ahnd0, ahnd1, ahnd2;
+
+	ahnd0 = intel_allocator_open(fd1, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd2, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd2 = intel_allocator_open(fd2, 0, INTEL_ALLOCATOR_SIMPLE);
+	igt_assert(ahnd0 != ahnd1);
+	igt_assert(ahnd1 != ahnd2);
+
+	/* in fork mode we can have more references, so skip check */
+	if (!check) {
+		intel_allocator_close(ahnd0);
+		intel_allocator_close(ahnd1);
+		intel_allocator_close(ahnd2);
+	} else {
+		igt_assert_eq(intel_allocator_close(ahnd0), true);
+		igt_assert_eq(intel_allocator_close(ahnd1), false);
+		igt_assert_eq(intel_allocator_close(ahnd2), true);
+	}
+}
+
+static void reopen(int fd)
+{
+	int fd2;
+
+	igt_require_gem(fd);
+
+	fd2 = gem_reopen_driver(fd);
+
+	__reopen_allocs(fd, fd2, true);
+
+	close(fd2);
+}
+
+#define REOPEN_TIMEOUT 3
+static void reopen_fork(int fd)
+{
+	int fd2;
+
+	igt_require_gem(fd);
+
+	intel_allocator_multiprocess_start();
+
+	fd2 = gem_reopen_driver(fd);
+
+	igt_fork(child, 1) {
+		igt_until_timeout(REOPEN_TIMEOUT)
+			__reopen_allocs(fd, fd2, false);
+	}
+	igt_until_timeout(REOPEN_TIMEOUT)
+		__reopen_allocs(fd, fd2, false);
+
+	igt_waitchildren();
+
+	/* Check references at the end */
+	__reopen_allocs(fd, fd2, true);
+
+	close(fd2);
+
+	intel_allocator_multiprocess_stop();
+}
+
+static void open_vm(int fd)
+{
+	uint64_t ahnd[4], offset[4], size = 0x1000;
+	int i, n = ARRAY_SIZE(ahnd);
+
+	ahnd[0] = intel_allocator_open_vm(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+	ahnd[1] = intel_allocator_open_vm(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+	ahnd[2] = intel_allocator_open_vm_as(ahnd[1], 2);
+	ahnd[3] = intel_allocator_open(fd, 3, INTEL_ALLOCATOR_SIMPLE);
+
+	offset[0] = intel_allocator_alloc(ahnd[0], 1, size, 0);
+	offset[1] = intel_allocator_alloc(ahnd[1], 2, size, 0);
+	igt_assert(offset[0] != offset[1]);
+
+	offset[2] = intel_allocator_alloc(ahnd[2], 3, size, 0);
+	igt_assert(offset[0] != offset[2] && offset[1] != offset[2]);
+
+	offset[3] = intel_allocator_alloc(ahnd[3], 1, size, 0);
+	igt_assert(offset[0] == offset[3]);
+
+	/*
+	 * As ahnd[0-2] lead to the same allocator, check we can free all
+	 * handles using any one of those ahnds.
+	 */
+	intel_allocator_free(ahnd[0], 1);
+	intel_allocator_free(ahnd[0], 2);
+	intel_allocator_free(ahnd[0], 3);
+	intel_allocator_free(ahnd[3], 1);
+
+	for (i = 0; i < n - 1; i++)
+		igt_assert_eq(intel_allocator_close(ahnd[i]), (i == n - 2));
+	igt_assert_eq(intel_allocator_close(ahnd[n-1]), true);
+}
+
+struct allocators {
+	const char *name;
+	uint8_t type;
+} als[] = {
+	{"simple", INTEL_ALLOCATOR_SIMPLE},
+	{"reloc",  INTEL_ALLOCATOR_RELOC},
+	{"random", INTEL_ALLOCATOR_RANDOM},
+	{NULL, 0},
+};
+
+igt_main
+{
+	int fd;
+	struct allocators *a;
+
+	igt_fixture {
+		fd = drm_open_driver(DRIVER_INTEL);
+		atomic_init(&next_handle, 1);
+		srandom(0xdeadbeef);
+	}
+
+	igt_subtest_f("alloc-simple")
+		alloc_simple(fd);
+
+	igt_subtest_f("reserve-simple")
+		reserve_simple(fd);
+
+	igt_subtest_f("reuse")
+		reuse(fd, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest_f("reserve")
+		reserve(fd, INTEL_ALLOCATOR_SIMPLE);
+
+	for (a = als; a->name; a++) {
+		igt_subtest_with_dynamic_f("%s-allocator", a->name) {
+			igt_dynamic("basic")
+				basic_alloc(fd, 1UL << 8, a->type);
+
+			igt_dynamic("parallel-one")
+				parallel_one(fd, a->type);
+
+			igt_dynamic("print")
+				basic_alloc(fd, 1UL << 2, a->type);
+
+			if (a->type == INTEL_ALLOCATOR_SIMPLE) {
+				igt_dynamic("reuse")
+					reuse(fd, a->type);
+
+				igt_dynamic("reserve")
+					reserve(fd, a->type);
+			}
+		}
+	}
+
+	igt_subtest_f("fork-simple-once")
+		fork_simple_once(fd);
+
+	igt_subtest_f("fork-simple-stress")
+		fork_simple_stress(fd, false);
+
+	igt_subtest_f("fork-simple-stress-signal") {
+		igt_fork_signal_helper();
+		fork_simple_stress(fd, false);
+		igt_stop_signal_helper();
+	}
+
+	igt_subtest_f("two-level-inception")
+		fork_simple_stress(fd, true);
+
+	igt_subtest_f("two-level-inception-interruptible") {
+		igt_fork_signal_helper();
+		fork_simple_stress(fd, true);
+		igt_stop_signal_helper();
+	}
+
+	igt_subtest_f("reopen")
+		reopen(fd);
+
+	igt_subtest_f("reopen-fork")
+		reopen_fork(fd);
+
+	igt_subtest_f("open-vm")
+		open_vm(fd);
+
+	igt_fixture
+		close(fd);
+}
diff --git a/tests/meson.build b/tests/meson.build
index 825e01833..061691903 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -111,6 +111,7 @@ test_progs = [
 ]
 
 i915_progs = [
+	'api_intel_allocator',
 	'api_intel_bb',
 	'gen3_mixed_blits',
 	'gen3_render_linear_blits',
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 27/35] tests/api_intel_allocator: Add execbuf with allocator example
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (25 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 26/35] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 28/35] tests/api_intel_allocator: Verify child can use its standalone allocator Zbigniew Kempczyński
                   ` (9 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

Simplified, non-fork version of the test which can be used as a
copy-paste template. It uses a blit to show how to prepare a batch
with addresses acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_allocator.c | 91 ++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index 52acd2ff3..a1328c3c1 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -478,6 +478,94 @@ static void open_vm(int fd)
 	igt_assert_eq(intel_allocator_close(ahnd[n-1]), true);
 }
 
+/* Simple execbuf which uses allocator, non-fork mode */
+static void execbuf_with_allocator(int fd)
+{
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 object[3];
+	uint64_t ahnd, sz = 4096, gtt_size;
+	unsigned int flags = EXEC_OBJECT_PINNED;
+	uint32_t *ptr, batch[32], copied;
+	int gen = intel_gen(intel_get_drm_devid(fd));
+	int i;
+	const uint32_t magic = 0x900df00d;
+
+	igt_require(gem_uses_full_ppgtt(fd));
+
+	gtt_size = gem_aperture_size(fd);
+	if ((gtt_size - 1) >> 32)
+		flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	memset(object, 0, sizeof(object));
+
+	/* i == 0 (src), i == 1 (dst), i == 2 (batch) */
+	for (i = 0; i < ARRAY_SIZE(object); i++) {
+		uint64_t offset;
+
+		object[i].handle = gem_create(fd, sz);
+		offset = intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
+		object[i].offset = CANONICAL(offset);
+
+		object[i].flags = flags;
+		if (i == 1)
+			object[i].flags |= EXEC_OBJECT_WRITE;
+	}
+
+	/* Prepare src data */
+	ptr = gem_mmap__device_coherent(fd, object[0].handle, 0, sz, PROT_WRITE);
+	ptr[0] = magic;
+	gem_munmap(ptr, sz);
+
+	/* Blit src -> dst */
+	i = 0;
+	batch[i++] = XY_SRC_COPY_BLT_CMD |
+		  XY_SRC_COPY_BLT_WRITE_ALPHA |
+		  XY_SRC_COPY_BLT_WRITE_RGB;
+	if (gen >= 8)
+		batch[i - 1] |= 8;
+	else
+		batch[i - 1] |= 6;
+
+	batch[i++] = (3 << 24) | (0xcc << 16) | 4;
+	batch[i++] = 0;
+	batch[i++] = (1 << 16) | 4;
+	batch[i++] = object[1].offset;
+	if (gen >= 8)
+		batch[i++] = object[1].offset >> 32;
+	batch[i++] = 0;
+	batch[i++] = 4;
+	batch[i++] = object[0].offset;
+	if (gen >= 8)
+		batch[i++] = object[0].offset >> 32;
+	batch[i++] = MI_BATCH_BUFFER_END;
+	batch[i++] = MI_NOOP;
+
+	gem_write(fd, object[2].handle, 0, batch, i * sizeof(batch[0]));
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(object);
+	execbuf.buffer_count = 3;
+	if (gen >= 6)
+		execbuf.flags = I915_EXEC_BLT;
+	gem_execbuf(fd, &execbuf);
+	gem_sync(fd, object[1].handle);
+
+	/* Check dst data */
+	ptr = gem_mmap__device_coherent(fd, object[1].handle, 0, sz, PROT_READ);
+	copied = ptr[0];
+	gem_munmap(ptr, sz);
+
+	for (i = 0; i < ARRAY_SIZE(object); i++) {
+		igt_assert(intel_allocator_free(ahnd, object[i].handle));
+		gem_close(fd, object[i].handle);
+	}
+
+	igt_assert(copied == magic);
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
 struct allocators {
 	const char *name;
 	uint8_t type;
@@ -562,6 +650,9 @@ igt_main
 	igt_subtest_f("open-vm")
 		open_vm(fd);
 
+	igt_subtest_f("execbuf-with-allocator")
+		execbuf_with_allocator(fd);
+
 	igt_fixture
 		close(fd);
 }
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 28/35] tests/api_intel_allocator: Verify child can use its standalone allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (26 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 27/35] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
                   ` (8 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

Sometimes we don't want to use the common allocator provided by the
main IGT process. Verify a child is able to "detach" from it and
initialize its own standalone instance of the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_allocator.c | 42 ++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index a1328c3c1..2d6f51deb 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -282,6 +282,45 @@ static void parallel_one(int fd, uint8_t type)
 	igt_assert_eq(intel_allocator_close(ahnd), true);
 }
 
+static void standalone(int fd)
+{
+	uint64_t ahnd, offset, size = 4096;
+	uint32_t handle = 1, child_handle = 2;
+	uint64_t *shared;
+
+	shared = mmap(0, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+
+	intel_allocator_multiprocess_start();
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	offset = intel_allocator_alloc(ahnd, handle, size, 0);
+
+	igt_fork(child, 2) {
+		/*
+		 * Use standalone allocator for child 1, detach from parent,
+		 * child 2 use allocator from parent.
+		 */
+		if (child == 1)
+			intel_allocator_init();
+
+		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+		shared[child] = intel_allocator_alloc(ahnd, child_handle, size, 0);
+
+		intel_allocator_free(ahnd, child_handle);
+		intel_allocator_close(ahnd);
+	}
+	igt_waitchildren();
+	igt_assert_eq(offset, shared[1]);
+	igt_assert_neq(offset, shared[2]);
+
+	intel_allocator_free(ahnd, handle);
+	igt_assert_eq(intel_allocator_close(ahnd), true);
+
+	intel_allocator_multiprocess_stop();
+
+	munmap(shared, 4096);
+}
+
 #define SIMPLE_GROUP_ALLOCS 8
 static void __simple_allocs(int fd)
 {
@@ -620,6 +659,9 @@ igt_main
 		}
 	}
 
+	igt_subtest_f("standalone")
+		standalone(fd);
+
 	igt_subtest_f("fork-simple-once")
 		fork_simple_once(fd);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (27 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 28/35] tests/api_intel_allocator: Verify child can use its standalone allocator Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 30/35] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
                   ` (7 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

Exercise that object offsets produced by the allocator are valid for
execbuf and that no EINVAL/ENOSPC occurs. Check it also works properly
for multiprocess allocations/execbufs on the same context.
As we're in full-ppgtt, we disable softpin to verify that the offsets
produced by the allocator are valid and the kernel doesn't want to
relocate them.

Add allocator-basic and allocator-basic-reserve to BAT.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_softpin.c              | 194 ++++++++++++++++++++++++++
 tests/intel-ci/fast-feedback.testlist |   2 +
 2 files changed, 196 insertions(+)

diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index aba060a42..c3bfd10a9 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -28,6 +28,7 @@
 
 #include "i915/gem.h"
 #include "igt.h"
+#include "intel_allocator.h"
 
 #define EXEC_OBJECT_PINNED	(1<<4)
 #define EXEC_OBJECT_SUPPORTS_48B_ADDRESS (1<<3)
@@ -697,6 +698,184 @@ static void test_noreloc(int fd, enum sleep sleep, unsigned flags)
 		gem_close(fd, object[i].handle);
 }
 
+static void __reserve(uint64_t ahnd, int i915, bool pinned,
+		      struct drm_i915_gem_exec_object2 *objects,
+		      int num_obj, uint64_t size)
+{
+	uint64_t gtt = gem_aperture_size(i915);
+	unsigned int flags;
+	int i;
+
+	igt_assert(num_obj > 1);
+
+	flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	if (pinned)
+		flags |= EXEC_OBJECT_PINNED;
+
+	memset(objects, 0, sizeof(*objects) * num_obj);
+
+	for (i = 0; i < num_obj; i++) {
+		objects[i].handle = gem_create(i915, size);
+		if (i < num_obj/2)
+			objects[i].offset = i * size;
+		else
+			objects[i].offset = gtt - (i + 1 - num_obj/2) * size;
+		objects[i].flags = flags;
+
+		intel_allocator_reserve(ahnd, objects[i].handle,
+					size, objects[i].offset);
+		igt_debug("Reserve i: %d, handle: %u, offset: %llx\n", i,
+			  objects[i].handle, (long long) objects[i].offset);
+	}
+}
+
+static void __unreserve(uint64_t ahnd, int i915,
+			struct drm_i915_gem_exec_object2 *objects,
+			int num_obj, uint64_t size)
+{
+	int i;
+
+	for (i = 0; i < num_obj; i++) {
+		intel_allocator_unreserve(ahnd, objects[i].handle,
+					  size, objects[i].offset);
+		igt_debug("Unreserve i: %d, handle: %u, offset: %llx\n", i,
+			  objects[i].handle, (long long) objects[i].offset);
+		gem_close(i915, objects[i].handle);
+	}
+}
+
+static void __exec_using_allocator(uint64_t ahnd, int i915, int num_obj,
+				   bool pinned)
+{
+	const uint32_t bbe = MI_BATCH_BUFFER_END;
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 object[num_obj];
+	uint64_t stored_offsets[num_obj];
+	unsigned int flags;
+	uint64_t sz = 4096;
+	int i;
+
+	igt_assert(num_obj > 10);
+
+	flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	if (pinned)
+		flags |= EXEC_OBJECT_PINNED;
+
+	memset(object, 0, sizeof(object));
+
+	for (i = 0; i < num_obj; i++) {
+		sz = (rand() % 15 + 1) * 4096;
+		if (i == num_obj - 1)
+			sz = 4096;
+		object[i].handle = gem_create(i915, sz);
+		object[i].offset =
+			intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
+	}
+	gem_write(i915, object[--i].handle, 0, &bbe, sizeof(bbe));
+
+	for (i = 0; i < num_obj; i++) {
+		object[i].flags = flags;
+		object[i].offset = gen8_canonical_addr(object[i].offset);
+		stored_offsets[i] = object[i].offset;
+	}
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(object);
+	execbuf.buffer_count = num_obj;
+	gem_execbuf(i915, &execbuf);
+
+	for (i = 0; i < num_obj; i++) {
+		igt_assert(intel_allocator_free(ahnd, object[i].handle));
+		gem_close(i915, object[i].handle);
+	}
+
+	/* Check kernel will keep offsets even if pinned is not set. */
+	for (i = 0; i < num_obj; i++)
+		igt_assert_eq_u64(stored_offsets[i], object[i].offset);
+}
+
+static void test_allocator_basic(int fd, bool reserve)
+{
+	const int num_obj = 257, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Check that we can place objects at start/end
+	 * of the GTT using the allocator.
+	 */
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	if (reserve)
+		__reserve(ahnd, fd, true, objects, num_reserved, ressize);
+	__exec_using_allocator(ahnd, fd, num_obj, true);
+	if (reserve)
+		__unreserve(ahnd, fd, objects, num_reserved, ressize);
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
+static void test_allocator_nopin(int fd, bool reserve)
+{
+	const int num_obj = 257, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Check that we can combine manual placement with automatic
+	 * GTT placement.
+	 *
+	 * This will also check that we agree with this small sampling of
+	 * allocator placements -- that is the given the same restrictions
+	 * in execobj[] the kernel does not reject the placement due
+	 * to overlaps or invalid addresses.
+	 */
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	if (reserve)
+		__reserve(ahnd, fd, false, objects, num_reserved, ressize);
+
+	__exec_using_allocator(ahnd, fd, num_obj, false);
+
+	if (reserve)
+		__unreserve(ahnd, fd, objects, num_reserved, ressize);
+
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
+static void test_allocator_fork(int fd)
+{
+	const int num_obj = 17, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * intel_allocator_multiprocess_start() must be called before opening
+	 * the allocator in a multiprocess environment: it frees the previous
+	 * allocator infrastructure and sets up the data structures and the
+	 * allocation thread.
+	 */
+	intel_allocator_multiprocess_start();
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	__reserve(ahnd, fd, true, objects, num_reserved, ressize);
+
+	igt_fork(child, 8) {
+		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+		igt_until_timeout(2)
+			__exec_using_allocator(ahnd, fd, num_obj, true);
+		intel_allocator_close(ahnd);
+	}
+
+	igt_waitchildren();
+
+	__unreserve(ahnd, fd, objects, num_reserved, ressize);
+	igt_assert(intel_allocator_close(ahnd) == true);
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	igt_assert(intel_allocator_close(ahnd) == true);
+
+	intel_allocator_multiprocess_stop();
+}
+
 igt_main
 {
 	int fd = -1;
@@ -727,6 +906,21 @@ igt_main
 
 		igt_subtest("full")
 			test_full(fd);
+
+		igt_subtest("allocator-basic")
+			test_allocator_basic(fd, false);
+
+		igt_subtest("allocator-basic-reserve")
+			test_allocator_basic(fd, true);
+
+		igt_subtest("allocator-nopin")
+			test_allocator_nopin(fd, false);
+
+		igt_subtest("allocator-nopin-reserve")
+			test_allocator_nopin(fd, true);
+
+		igt_subtest("allocator-fork")
+			test_allocator_fork(fd);
 	}
 
 	igt_subtest("softpin")
diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
index eaa904fa7..fa5006d2e 100644
--- a/tests/intel-ci/fast-feedback.testlist
+++ b/tests/intel-ci/fast-feedback.testlist
@@ -39,6 +39,8 @@ igt@gem_mmap_gtt@basic
 igt@gem_render_linear_blits@basic
 igt@gem_render_tiled_blits@basic
 igt@gem_ringfill@basic-all
+igt@gem_softpin@allocator-basic
+igt@gem_softpin@allocator-basic-reserve
 igt@gem_sync@basic-all
 igt@gem_sync@basic-each
 igt@gem_tiled_blits@basic
-- 
2.26.0


* [igt-dev] [PATCH i-g-t v26 30/35] tests/gem|kms: Remove intel_bb from fixture
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (28 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
                   ` (6 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

As intel_bb "opens" a connection to the allocator, a completed test can
leave the allocator in an unknown state (mostly when the test failed).
Since igt_core was armed to reset the allocator infrastructure, the
connection held inside intel_bb is not valid anymore, and trying to use
it leads to catastrophic errors.

Migrate intel_bb out of the fixture and create it inside each test
individually.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_caching.c              | 14 ++++++++--
 tests/i915/gem_partial_pwrite_pread.c | 40 +++++++++++++++++----------
 tests/i915/gem_render_copy.c          | 31 ++++++++++-----------
 tests/kms_big_fb.c                    | 12 +++++---
 4 files changed, 61 insertions(+), 36 deletions(-)

diff --git a/tests/i915/gem_caching.c b/tests/i915/gem_caching.c
index bdaff68a0..4e844952f 100644
--- a/tests/i915/gem_caching.c
+++ b/tests/i915/gem_caching.c
@@ -158,7 +158,6 @@ igt_main
 			flags = 0;
 		}
 		data.bops = buf_ops_create(data.fd);
-		ibb = intel_bb_create(data.fd, PAGE_SIZE);
 
 		scratch_buf = intel_buf_create(data.bops, BO_SIZE/4, 1,
 					       32, 0, I915_TILING_NONE, 0);
@@ -174,6 +173,8 @@ igt_main
 
 		igt_info("checking partial reads\n");
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		for (i = 0; i < ROUNDS; i++) {
 			uint8_t val0 = i;
 			int start, len;
@@ -195,11 +196,15 @@ igt_main
 
 			igt_progress("partial reads test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_subtest("writes") {
 		igt_require(flags & TEST_WRITE);
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		igt_info("checking partial writes\n");
 
 		for (i = 0; i < ROUNDS; i++) {
@@ -240,11 +245,15 @@ igt_main
 
 			igt_progress("partial writes test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_subtest("read-writes") {
 		igt_require((flags & TEST_BOTH) == TEST_BOTH);
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		igt_info("checking partial writes after partial reads\n");
 
 		for (i = 0; i < ROUNDS; i++) {
@@ -307,10 +316,11 @@ igt_main
 
 			igt_progress("partial read/writes test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_fixture {
-		intel_bb_destroy(ibb);
 		intel_buf_destroy(scratch_buf);
 		intel_buf_destroy(staging_buf);
 		buf_ops_destroy(data.bops);
diff --git a/tests/i915/gem_partial_pwrite_pread.c b/tests/i915/gem_partial_pwrite_pread.c
index 72c33539d..c2ca561e3 100644
--- a/tests/i915/gem_partial_pwrite_pread.c
+++ b/tests/i915/gem_partial_pwrite_pread.c
@@ -53,7 +53,6 @@ IGT_TEST_DESCRIPTION("Test pwrite/pread consistency when touching partial"
 #define PAGE_SIZE 4096
 #define BO_SIZE (4*4096)
 
-struct intel_bb *ibb;
 struct intel_buf *scratch_buf;
 struct intel_buf *staging_buf;
 
@@ -77,7 +76,8 @@ static void *__try_gtt_map_first(data_t *data, struct intel_buf *buf,
 	return ptr;
 }
 
-static void copy_bo(struct intel_buf *src, struct intel_buf *dst)
+static void copy_bo(struct intel_bb *ibb,
+		    struct intel_buf *src, struct intel_buf *dst)
 {
 	bool has_64b_reloc;
 
@@ -109,8 +109,8 @@ static void copy_bo(struct intel_buf *src, struct intel_buf *dst)
 }
 
 static void
-blt_bo_fill(data_t *data, struct intel_buf *tmp_bo,
-		struct intel_buf *bo, uint8_t val)
+blt_bo_fill(data_t *data, struct intel_bb *ibb,
+	    struct intel_buf *tmp_bo, struct intel_buf *bo, uint8_t val)
 {
 	uint8_t *gtt_ptr;
 	int i;
@@ -124,7 +124,7 @@ blt_bo_fill(data_t *data, struct intel_buf *tmp_bo,
 
 	igt_drop_caches_set(data->drm_fd, DROP_BOUND);
 
-	copy_bo(tmp_bo, bo);
+	copy_bo(ibb, tmp_bo, bo);
 }
 
 #define MAX_BLT_SIZE 128
@@ -139,14 +139,17 @@ static void get_range(int *start, int *len)
 
 static void test_partial_reads(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial reads\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		get_range(&start, &len);
 		gem_read(data->drm_fd, scratch_buf->handle, start, tmp, len);
@@ -159,26 +162,31 @@ static void test_partial_reads(data_t *data)
 
 		igt_progress("partial reads test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void test_partial_writes(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 	uint8_t *gtt_ptr;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial writes\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		memset(tmp, i + 63, BO_SIZE);
 
 		get_range(&start, &len);
 		gem_write(data->drm_fd, scratch_buf->handle, start, tmp, len);
 
-		copy_bo(scratch_buf, staging_buf);
+		copy_bo(ibb, scratch_buf, staging_buf);
 		gtt_ptr = __try_gtt_map_first(data, staging_buf, 0);
 
 		for (j = 0; j < start; j++) {
@@ -200,19 +208,24 @@ static void test_partial_writes(data_t *data)
 
 		igt_progress("partial writes test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void test_partial_read_writes(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 	uint8_t *gtt_ptr;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial writes after partial reads\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		/* partial read */
 		get_range(&start, &len);
@@ -226,7 +239,7 @@ static void test_partial_read_writes(data_t *data)
 		/* Change contents through gtt to make the pread cachelines
 		 * stale. */
 		val += 17;
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		/* partial write */
 		memset(tmp, i + 63, BO_SIZE);
@@ -234,7 +247,7 @@ static void test_partial_read_writes(data_t *data)
 		get_range(&start, &len);
 		gem_write(data->drm_fd, scratch_buf->handle, start, tmp, len);
 
-		copy_bo(scratch_buf, staging_buf);
+		copy_bo(ibb, scratch_buf, staging_buf);
 		gtt_ptr = __try_gtt_map_first(data, staging_buf, 0);
 
 		for (j = 0; j < start; j++) {
@@ -256,6 +269,8 @@ static void test_partial_read_writes(data_t *data)
 
 		igt_progress("partial read/writes test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void do_tests(data_t *data, int cache_level, const char *suffix)
@@ -288,8 +303,6 @@ igt_main
 		data.devid = intel_get_drm_devid(data.drm_fd);
 		data.bops = buf_ops_create(data.drm_fd);
 
-		ibb = intel_bb_create(data.drm_fd, PAGE_SIZE);
-
 		/* overallocate the buffers we're actually using because */	
 		scratch_buf = intel_buf_create(data.bops, BO_SIZE/4, 1, 32, 0, I915_TILING_NONE, 0);
 		staging_buf = intel_buf_create(data.bops, BO_SIZE/4, 1, 32, 0, I915_TILING_NONE, 0);
@@ -303,7 +316,6 @@ igt_main
 	do_tests(&data, 2, "-display");
 
 	igt_fixture {
-		intel_bb_destroy(ibb);
 		intel_buf_destroy(scratch_buf);
 		intel_buf_destroy(staging_buf);
 		buf_ops_destroy(data.bops);
diff --git a/tests/i915/gem_render_copy.c b/tests/i915/gem_render_copy.c
index afc490f1a..e48b5b996 100644
--- a/tests/i915/gem_render_copy.c
+++ b/tests/i915/gem_render_copy.c
@@ -58,7 +58,6 @@ typedef struct {
 	int drm_fd;
 	uint32_t devid;
 	struct buf_ops *bops;
-	struct intel_bb *ibb;
 	igt_render_copyfunc_t render_copy;
 	igt_vebox_copyfunc_t vebox_copy;
 } data_t;
@@ -341,6 +340,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		 enum i915_compression dst_compression,
 		 int flags)
 {
+	struct intel_bb *ibb;
 	struct intel_buf ref, src_tiled, src_ccs, dst_ccs, dst;
 	struct {
 		struct intel_buf buf;
@@ -397,6 +397,8 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	    src_compressed || dst_compressed)
 		igt_require(intel_gen(data->devid) >= 9);
 
+	ibb = intel_bb_create(data->drm_fd, 4096);
+
 	for (int i = 0; i < num_src; i++)
 		scratch_buf_init(data, &src[i].buf, WIDTH, HEIGHT, src[i].tiling,
 				 I915_COMPRESSION_NONE);
@@ -456,12 +458,12 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	 */
 	if (src_mixed_tiled) {
 		if (dst_compressed)
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst, 0, 0, WIDTH, HEIGHT,
 					  &dst_ccs, 0, 0);
 
 		for (int i = 0; i < num_src; i++) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &src[i].buf,
 					  WIDTH/4, HEIGHT/4, WIDTH/2-2, HEIGHT/2-2,
 					  dst_compressed ? &dst_ccs : &dst,
@@ -469,13 +471,13 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		}
 
 		if (dst_compressed)
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst_ccs, 0, 0, WIDTH, HEIGHT,
 					  &dst, 0, 0);
 
 	} else {
 		if (src_compression == I915_COMPRESSION_RENDER) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &src_tiled, 0, 0, WIDTH, HEIGHT,
 					  &src_ccs,
 					  0, 0);
@@ -486,7 +488,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 						       "render-src_ccs.bin");
 			}
 		} else if (src_compression == I915_COMPRESSION_MEDIA) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 &src_tiled, WIDTH, HEIGHT,
 					 &src_ccs);
 			if (dump_compressed_src_buf) {
@@ -498,34 +500,34 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		}
 
 		if (dst_compression == I915_COMPRESSION_RENDER) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  src_compressed ? &src_ccs : &src_tiled,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst_ccs,
 					  0, 0);
 
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst_ccs,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst,
 					  0, 0);
 		} else if (dst_compression == I915_COMPRESSION_MEDIA) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 src_compressed ? &src_ccs : &src_tiled,
 					 WIDTH, HEIGHT,
 					 &dst_ccs);
 
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 &dst_ccs,
 					 WIDTH, HEIGHT,
 					 &dst);
 		} else if (force_vebox_dst_copy) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 src_compressed ? &src_ccs : &src_tiled,
 					 WIDTH, HEIGHT,
 					 &dst);
 		} else {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  src_compressed ? &src_ccs : &src_tiled,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst,
@@ -572,8 +574,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	for (int i = 0; i < num_src; i++)
 		scratch_buf_fini(&src[i].buf);
 
-	/* handles gone, need to clean the objects cache within intel_bb */
-	intel_bb_reset(data->ibb, true);
+	intel_bb_destroy(ibb);
 }
 
 static int opt_handler(int opt, int opt_index, void *data)
@@ -796,7 +797,6 @@ igt_main_args("dac", NULL, help_str, opt_handler, NULL)
 		data.vebox_copy = igt_get_vebox_copyfunc(data.devid);
 
 		data.bops = buf_ops_create(data.drm_fd);
-		data.ibb = intel_bb_create(data.drm_fd, 4096);
 
 		igt_fork_hang_detector(data.drm_fd);
 	}
@@ -849,7 +849,6 @@ igt_main_args("dac", NULL, help_str, opt_handler, NULL)
 
 	igt_fixture {
 		igt_stop_hang_detector();
-		intel_bb_destroy(data.ibb);
 		buf_ops_destroy(data.bops);
 	}
 }
diff --git a/tests/kms_big_fb.c b/tests/kms_big_fb.c
index 5260176e1..b6707a5a4 100644
--- a/tests/kms_big_fb.c
+++ b/tests/kms_big_fb.c
@@ -664,7 +664,6 @@ igt_main
 			data.render_copy = igt_get_render_copyfunc(data.devid);
 
 		data.bops = buf_ops_create(data.drm_fd);
-		data.ibb = intel_bb_create(data.drm_fd, 4096);
 	}
 
 	/*
@@ -677,7 +676,9 @@ igt_main
 		igt_subtest_f("%s-addfb-size-overflow",
 			      modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_size_overflow(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
@@ -685,15 +686,18 @@ igt_main
 		igt_subtest_f("%s-addfb-size-offset-overflow",
 			      modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_size_offset_overflow(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
 	for (int i = 0; i < ARRAY_SIZE(modifiers); i++) {
 		igt_subtest_f("%s-addfb", modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
-
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_addfb(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
@@ -711,7 +715,9 @@ igt_main
 					igt_require(data.format == DRM_FORMAT_C8 ||
 						    igt_fb_supported_format(data.format));
 					igt_require(igt_display_has_format_mod(&data.display, data.format, data.modifier));
+					data.ibb = intel_bb_create(data.drm_fd, 4096);
 					test_scanout(&data);
+					intel_bb_destroy(data.ibb);
 				}
 			}
 
@@ -722,8 +728,6 @@ igt_main
 
 	igt_fixture {
 		igt_display_fini(&data.display);
-
-		intel_bb_destroy(data.ibb);
 		buf_ops_destroy(data.bops);
 	}
 }
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (29 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 30/35] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
                   ` (5 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

Generally, when working with intel_buf we should use the wrapper code
instead of adding the buffer to intel_bb directly. The wrapper checks
the alignment required on specific gens, protecting us from passing
improperly aligned addresses.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_mmap_offset.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index 5e48cd697..8f2006274 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -614,8 +614,8 @@ static void blt_coherency(int i915)
 	dst = create_bo(bops, 1, width, height);
 	size = src->surface[0].size;
 
-	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, 0, false);
-	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, 0, true);
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
 
 	intel_bb_blt_copy(ibb,
 			  src, 0, 0, src->surface[0].stride,
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (30 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
                   ` (4 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

Initialize the allocator within the child processes to avoid creating
the allocator thread and the unnecessary communication with it.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_ppgtt.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_ppgtt.c b/tests/i915/gem_ppgtt.c
index 5f8b34224..da83484a6 100644
--- a/tests/i915/gem_ppgtt.c
+++ b/tests/i915/gem_ppgtt.c
@@ -104,12 +104,14 @@ static void fork_rcs_copy(int timeout, uint32_t final,
 		struct intel_buf *src;
 		unsigned long i;
 
+		/* Standalone allocator */
+		intel_allocator_init();
+
 		if (flags & CREATE_CONTEXT)
 			ctx = gem_context_create(buf_ops_get_fd(dst[child]->bops));
 
 		ibb = intel_bb_create_with_context(buf_ops_get_fd(dst[child]->bops),
 						   ctx, 4096);
-
 		i = 0;
 		igt_until_timeout(timeout) {
 			src = create_bo(dst[child]->bops,
@@ -151,6 +153,9 @@ static void fork_bcs_copy(int timeout, uint32_t final,
 		struct intel_bb *ibb;
 		unsigned long i;
 
+		/* Standalone allocator */
+		intel_allocator_init();
+
 		ibb = intel_bb_create(buf_ops_get_fd(dst[child]->bops), 4096);
 
 		i = 0;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (31 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 34/35] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
                   ` (3 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

As intel_bb has some strong requirements when the allocator is in use
(addresses cannot move when the simple allocator is used), ensure a gem
buffer created from flink reacquires a new address.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_render_copy_redux.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/tests/i915/gem_render_copy_redux.c b/tests/i915/gem_render_copy_redux.c
index 40308a6e6..6c8d7f755 100644
--- a/tests/i915/gem_render_copy_redux.c
+++ b/tests/i915/gem_render_copy_redux.c
@@ -63,7 +63,6 @@ typedef struct {
 	int fd;
 	uint32_t devid;
 	struct buf_ops *bops;
-	struct intel_bb *ibb;
 	igt_render_copyfunc_t render_copy;
 	uint32_t linear[WIDTH * HEIGHT];
 } data_t;
@@ -77,13 +76,10 @@ static void data_init(data_t *data)
 	data->render_copy = igt_get_render_copyfunc(data->devid);
 	igt_require_f(data->render_copy,
 		      "no render-copy function\n");
-
-	data->ibb = intel_bb_create(data->fd, 4096);
 }
 
 static void data_fini(data_t *data)
 {
-	intel_bb_destroy(data->ibb);
 	buf_ops_destroy(data->bops);
 	close(data->fd);
 }
@@ -126,15 +122,17 @@ scratch_buf_check(data_t *data, struct intel_buf *buf, int x, int y,
 
 static void copy(data_t *data)
 {
+	struct intel_bb *ibb;
 	struct intel_buf src, dst;
 
+	ibb = intel_bb_create(data->fd, 4096);
 	scratch_buf_init(data, &src, WIDTH, HEIGHT, STRIDE, SRC_COLOR);
 	scratch_buf_init(data, &dst, WIDTH, HEIGHT, STRIDE, DST_COLOR);
 
 	scratch_buf_check(data, &src, WIDTH / 2, HEIGHT / 2, SRC_COLOR);
 	scratch_buf_check(data, &dst, WIDTH / 2, HEIGHT / 2, DST_COLOR);
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &src, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH / 2, HEIGHT / 2);
 
@@ -143,11 +141,13 @@ static void copy(data_t *data)
 
 	scratch_buf_fini(data, &src);
 	scratch_buf_fini(data, &dst);
+	intel_bb_destroy(ibb);
 }
 
 static void copy_flink(data_t *data)
 {
 	data_t local;
+	struct intel_bb *ibb, *local_ibb;
 	struct intel_buf src, dst;
 	struct intel_buf local_src, local_dst;
 	struct intel_buf flink;
@@ -155,32 +155,36 @@ static void copy_flink(data_t *data)
 
 	data_init(&local);
 
+	ibb = intel_bb_create(data->fd, 4096);
+	local_ibb = intel_bb_create(local.fd, 4096);
 	scratch_buf_init(data, &src, WIDTH, HEIGHT, STRIDE, 0);
 	scratch_buf_init(data, &dst, WIDTH, HEIGHT, STRIDE, DST_COLOR);
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &src, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH, HEIGHT);
 
 	scratch_buf_init(&local, &local_src, WIDTH, HEIGHT, STRIDE, 0);
 	scratch_buf_init(&local, &local_dst, WIDTH, HEIGHT, STRIDE, SRC_COLOR);
 
-	local.render_copy(local.ibb,
+	local.render_copy(local_ibb,
 			  &local_src, 0, 0, WIDTH, HEIGHT,
 			  &local_dst, WIDTH, HEIGHT);
 
 	name = gem_flink(local.fd, local_dst.handle);
 	flink = local_dst;
 	flink.handle = gem_open(data->fd, name);
+	flink.ibb = ibb;
+	flink.addr.offset = INTEL_BUF_INVALID_ADDRESS;
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &flink, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH / 2, HEIGHT / 2);
 
 	scratch_buf_check(data, &dst, 10, 10, DST_COLOR);
 	scratch_buf_check(data, &dst, WIDTH - 10, HEIGHT - 10, SRC_COLOR);
 
-	intel_bb_reset(data->ibb, true);
+	intel_bb_reset(ibb, true);
 	scratch_buf_fini(data, &src);
 	scratch_buf_fini(data, &flink);
 	scratch_buf_fini(data, &dst);
@@ -188,6 +192,8 @@ static void copy_flink(data_t *data)
 	scratch_buf_fini(&local, &local_src);
 	scratch_buf_fini(&local, &local_dst);
 
+	intel_bb_destroy(local_ibb);
+	intel_bb_destroy(ibb);
 	data_fini(&local);
 }
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 34/35] tests/perf.c: Remove buffer from batch
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (32 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 35/35] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
                   ` (2 subsequent siblings)
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

Currently we need to ensure an intel_buf is part of a single ibb,
because it acquires its address from the allocator, so we cannot keep
it in two separate ibbs (theoretically this would be possible when
different ibbs share the same context, but it is not currently
implemented).

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 tests/i915/perf.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tests/i915/perf.c b/tests/i915/perf.c
index 664fd0a94..e641d5d2d 100644
--- a/tests/i915/perf.c
+++ b/tests/i915/perf.c
@@ -3527,6 +3527,9 @@ gen8_test_single_ctx_render_target_writes_a_counter(void)
 			/* Another redundant flush to clarify batch bo is free to reuse */
 			intel_bb_flush_render(ibb0);
 
+			/* Remove intel_buf from ibb0 added implicitly in rendercopy */
+			intel_bb_remove_intel_buf(ibb0, dst_buf);
+
 			/* submit two copies on the other context to avoid a false
 			 * positive in case the driver somehow ended up filtering for
 			 * context1
@@ -3919,6 +3922,9 @@ static void gen12_single_ctx_helper(void)
 				     BO_REPORT_ID0);
 	intel_bb_flush_render(ibb0);
 
+	/* Remove intel_buf from ibb0 added implicitly in rendercopy */
+	intel_bb_remove_intel_buf(ibb0, dst_buf);
+
 	/* This is the work/context that is measured for counter increments */
 	render_copy(ibb0,
 		    &src[0], 0, 0, width, height,
@@ -3965,6 +3971,9 @@ static void gen12_single_ctx_helper(void)
 				     BO_REPORT_ID3);
 	intel_bb_flush_render(ibb1);
 
+	/* Remove intel_buf from ibb1 added implicitly in rendercopy */
+	intel_bb_remove_intel_buf(ibb1, dst_buf);
+
 	/* Submit an mi-rpc to context0 after all measurable work */
 #define BO_TIMESTAMP_OFFSET1 1032
 #define BO_REPORT_OFFSET1 256
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] [PATCH i-g-t v26 35/35] tests/gem_linear_blits: Use intel allocator
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (33 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 34/35] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
@ 2021-03-17 14:46 ` Zbigniew Kempczyński
  2021-03-17 16:03 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev29) Patchwork
  2021-03-17 17:47 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  36 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 14:46 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Use intel allocator directly, without intel-bb infrastructure.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_linear_blits.c | 90 ++++++++++++++++++++++++++---------
 1 file changed, 68 insertions(+), 22 deletions(-)

diff --git a/tests/i915/gem_linear_blits.c b/tests/i915/gem_linear_blits.c
index cae42d52a..7b7cf05a5 100644
--- a/tests/i915/gem_linear_blits.c
+++ b/tests/i915/gem_linear_blits.c
@@ -53,10 +53,13 @@ IGT_TEST_DESCRIPTION("Test doing many blits with a working set larger than the"
 #define WIDTH 512
 #define HEIGHT 512
 
+/* We don't have alignment detection yet, so assume worst case scenario */
+#define ALIGNMENT (2048*1024)
+
 static uint32_t linear[WIDTH*HEIGHT];
 
-static void
-copy(int fd, uint32_t dst, uint32_t src)
+static void copy(int fd, uint64_t ahnd, uint32_t dst, uint32_t src,
+		 uint64_t dst_offset, uint64_t src_offset, bool do_relocs)
 {
 	uint32_t batch[12];
 	struct drm_i915_gem_relocation_entry reloc[2];
@@ -64,6 +67,20 @@ copy(int fd, uint32_t dst, uint32_t src)
 	struct drm_i915_gem_execbuffer2 exec;
 	int i = 0;
 
+	memset(obj, 0, sizeof(obj));
+	obj[0].handle = dst;
+	obj[0].offset = CANONICAL(dst_offset);
+	obj[0].flags = EXEC_OBJECT_WRITE | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	obj[1].handle = src;
+	obj[1].offset = CANONICAL(src_offset);
+	obj[1].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	obj[2].handle = gem_create(fd, 4096);
+	obj[2].offset = intel_allocator_alloc(ahnd, obj[2].handle,
+			4096, ALIGNMENT);
+	obj[2].offset = CANONICAL(obj[2].offset);
+	obj[2].flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
 	batch[i++] = XY_SRC_COPY_BLT_CMD |
 		  XY_SRC_COPY_BLT_WRITE_ALPHA |
 		  XY_SRC_COPY_BLT_WRITE_RGB;
@@ -77,22 +94,24 @@ copy(int fd, uint32_t dst, uint32_t src)
 		  WIDTH*4;
 	batch[i++] = 0; /* dst x1,y1 */
 	batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */
-	batch[i++] = 0; /* dst reloc */
+	batch[i++] = obj[0].offset;
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		batch[i++] = 0;
+		batch[i++] = obj[0].offset >> 32;
 	batch[i++] = 0; /* src x1,y1 */
 	batch[i++] = WIDTH*4;
-	batch[i++] = 0; /* src reloc */
+	batch[i++] = obj[1].offset;
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		batch[i++] = 0;
+		batch[i++] = obj[1].offset >> 32;
 	batch[i++] = MI_BATCH_BUFFER_END;
 	batch[i++] = MI_NOOP;
 
+	gem_write(fd, obj[2].handle, 0, batch, i * sizeof(batch[0]));
+
 	memset(reloc, 0, sizeof(reloc));
 	reloc[0].target_handle = dst;
 	reloc[0].delta = 0;
 	reloc[0].offset = 4 * sizeof(batch[0]);
-	reloc[0].presumed_offset = 0;
+	reloc[0].presumed_offset = obj[0].offset;
 	reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
 
@@ -101,25 +120,27 @@ copy(int fd, uint32_t dst, uint32_t src)
 	reloc[1].offset = 7 * sizeof(batch[0]);
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
 		reloc[1].offset += sizeof(batch[0]);
-	reloc[1].presumed_offset = 0;
+	reloc[1].presumed_offset = obj[1].offset;
 	reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
 	reloc[1].write_domain = 0;
 
-	memset(obj, 0, sizeof(obj));
-	obj[0].handle = dst;
-	obj[1].handle = src;
-	obj[2].handle = gem_create(fd, 4096);
-	gem_write(fd, obj[2].handle, 0, batch, i * sizeof(batch[0]));
-	obj[2].relocation_count = 2;
-	obj[2].relocs_ptr = to_user_pointer(reloc);
+	if (do_relocs) {
+		obj[2].relocation_count = ARRAY_SIZE(reloc);
+		obj[2].relocs_ptr = to_user_pointer(reloc);
+	} else {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	}
 
 	memset(&exec, 0, sizeof(exec));
 	exec.buffers_ptr = to_user_pointer(obj);
-	exec.buffer_count = 3;
+	exec.buffer_count = ARRAY_SIZE(obj);
 	exec.batch_len = i * sizeof(batch[0]);
 	exec.flags = gem_has_blt(fd) ? I915_EXEC_BLT : 0;
-
 	gem_execbuf(fd, &exec);
+
+	intel_allocator_free(ahnd, obj[2].handle);
 	gem_close(fd, obj[2].handle);
 }
 
@@ -157,17 +178,28 @@ check_bo(int fd, uint32_t handle, uint32_t val)
 	igt_assert_eq(num_errors, 0);
 }
 
-static void run_test(int fd, int count)
+static void run_test(int fd, int count, bool do_relocs)
 {
 	uint32_t *handle, *start_val;
+	uint64_t *offset, ahnd;
 	uint32_t start = 0;
 	int i;
 
+	ahnd = intel_allocator_open(fd, 0, do_relocs ?
+					    INTEL_ALLOCATOR_RELOC :
+					    INTEL_ALLOCATOR_SIMPLE);
+
 	handle = malloc(sizeof(uint32_t) * count * 2);
+	offset = calloc(1, sizeof(uint64_t) * count);
+	igt_assert_f(handle && offset, "Allocation failed\n");
 	start_val = handle + count;
 
 	for (i = 0; i < count; i++) {
 		handle[i] = create_bo(fd, start);
+
+		offset[i] = intel_allocator_alloc(ahnd, handle[i],
+						  sizeof(linear), ALIGNMENT);
+
 		start_val[i] = start;
 		start += 1024 * 1024 / 4;
 	}
@@ -178,17 +210,22 @@ static void run_test(int fd, int count)
 
 		if (src == dst)
 			continue;
+		copy(fd, ahnd, handle[dst], handle[src],
+		     offset[dst], offset[src], do_relocs);
 
-		copy(fd, handle[dst], handle[src]);
 		start_val[dst] = start_val[src];
 	}
 
 	for (i = 0; i < count; i++) {
 		check_bo(fd, handle[i], start_val[i]);
+		intel_allocator_free(ahnd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
 
 	free(handle);
+	free(offset);
+
+	intel_allocator_close(ahnd);
 }
 
 #define MAX_32b ((1ull << 32) - 4096)
@@ -197,16 +234,21 @@ igt_main
 {
 	const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
 	uint64_t count = 0;
+	bool do_relocs;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(fd);
 		gem_require_blitter(fd);
+		do_relocs = !gem_uses_ppgtt(fd);
 
 		count = gem_aperture_size(fd);
 		if (count >> 32)
 			count = MAX_32b;
+		else
+			do_relocs = true;
+
 		count = 3 + count / (1024*1024);
 		igt_require(count > 1);
 		intel_require_memory(count, sizeof(linear), CHECK_RAM);
@@ -216,19 +258,23 @@ igt_main
 	}
 
 	igt_subtest("basic")
-		run_test(fd, 2);
+		run_test(fd, 2, do_relocs);
 
 	igt_subtest("normal") {
+		intel_allocator_multiprocess_start();
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, do_relocs);
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 	}
 
 	igt_subtest("interruptible") {
+		intel_allocator_multiprocess_start();
 		igt_fork_signal_helper();
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, do_relocs);
 		igt_waitchildren();
 		igt_stop_signal_helper();
+		intel_allocator_multiprocess_stop();
 	}
 }
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev29)
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (34 preceding siblings ...)
  2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 35/35] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
@ 2021-03-17 16:03 ` Patchwork
  2021-03-17 17:47 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  36 siblings, 0 replies; 53+ messages in thread
From: Patchwork @ 2021-03-17 16:03 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 7095 bytes --]

== Series Details ==

Series: Introduce IGT allocator (rev29)
URL   : https://patchwork.freedesktop.org/series/82954/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9865 -> IGTPW_5611
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/index.html

New tests
---------

  New tests have been introduced between CI_DRM_9865 and IGTPW_5611:

### New IGT tests (2) ###

  * igt@gem_softpin@allocator-basic:
    - Statuses : 28 pass(s) 10 skip(s)
    - Exec time: [0.0, 1.06] s

  * igt@gem_softpin@allocator-basic-reserve:
    - Statuses : 28 pass(s) 10 skip(s)
    - Exec time: [0.0, 1.04] s

  

Known issues
------------

  Here are the changes found in IGTPW_5611 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * {igt@gem_softpin@allocator-basic} (NEW):
    - fi-byt-j1900:       NOTRUN -> [SKIP][1] ([fdo#109271]) +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-byt-j1900/igt@gem_softpin@allocator-basic.html
    - fi-pnv-d510:        NOTRUN -> [SKIP][2] ([fdo#109271]) +1 similar issue
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-pnv-d510/igt@gem_softpin@allocator-basic.html
    - fi-bwr-2160:        NOTRUN -> [SKIP][3] ([fdo#109271]) +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-bwr-2160/igt@gem_softpin@allocator-basic.html
    - {fi-hsw-gt1}:       NOTRUN -> [SKIP][4] ([fdo#109271]) +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-hsw-gt1/igt@gem_softpin@allocator-basic.html
    - fi-snb-2520m:       NOTRUN -> [SKIP][5] ([fdo#109271]) +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-snb-2520m/igt@gem_softpin@allocator-basic.html
    - fi-ivb-3770:        NOTRUN -> [SKIP][6] ([fdo#109271]) +1 similar issue
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-ivb-3770/igt@gem_softpin@allocator-basic.html
    - fi-snb-2600:        NOTRUN -> [SKIP][7] ([fdo#109271]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-snb-2600/igt@gem_softpin@allocator-basic.html

  * {igt@gem_softpin@allocator-basic-reserve} (NEW):
    - fi-elk-e7500:       NOTRUN -> [SKIP][8] ([fdo#109271]) +1 similar issue
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-elk-e7500/igt@gem_softpin@allocator-basic-reserve.html
    - fi-hsw-4770:        NOTRUN -> [SKIP][9] ([fdo#109271]) +1 similar issue
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-hsw-4770/igt@gem_softpin@allocator-basic-reserve.html
    - fi-ilk-650:         NOTRUN -> [SKIP][10] ([fdo#109271]) +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-ilk-650/igt@gem_softpin@allocator-basic-reserve.html

  * igt@gem_tiled_fence_blits@basic:
    - fi-kbl-8809g:       [PASS][11] -> [TIMEOUT][12] ([i915#3145])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/fi-kbl-8809g/igt@gem_tiled_fence_blits@basic.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-kbl-8809g/igt@gem_tiled_fence_blits@basic.html

  
#### Possible fixes ####

  * igt@gem_tiled_blits@basic:
    - fi-kbl-8809g:       [TIMEOUT][13] ([i915#2502] / [i915#3145]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/fi-kbl-8809g/igt@gem_tiled_blits@basic.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-kbl-8809g/igt@gem_tiled_blits@basic.html

  * igt@i915_selftest@live@gt_heartbeat:
    - fi-kbl-soraka:      [DMESG-FAIL][15] ([i915#2291] / [i915#541]) -> [PASS][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/fi-kbl-soraka/igt@i915_selftest@live@gt_heartbeat.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-kbl-soraka/igt@i915_selftest@live@gt_heartbeat.html

  
#### Warnings ####

  * igt@i915_pm_rpm@module-reload:
    - fi-glk-dsi:         [DMESG-WARN][17] ([i915#1982] / [i915#3143]) -> [DMESG-WARN][18] ([i915#3143])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/fi-glk-dsi/igt@i915_pm_rpm@module-reload.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/fi-glk-dsi/igt@i915_pm_rpm@module-reload.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2291]: https://gitlab.freedesktop.org/drm/intel/issues/2291
  [i915#2502]: https://gitlab.freedesktop.org/drm/intel/issues/2502
  [i915#3143]: https://gitlab.freedesktop.org/drm/intel/issues/3143
  [i915#3145]: https://gitlab.freedesktop.org/drm/intel/issues/3145
  [i915#541]: https://gitlab.freedesktop.org/drm/intel/issues/541


Participating hosts (46 -> 40)
------------------------------

  Missing    (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-dg1-1 fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_6034 -> IGTPW_5611

  CI-20190529: 20190529
  CI_DRM_9865: 8d760ba44ca4c7153e85b40b3e9ac3c070e38e70 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_5611: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/index.html
  IGT_6034: b3eff02d5400944dd7b14896037bc9bbf362343e @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools



== Testlist changes ==

+igt@api_intel_allocator@alloc-simple
+igt@api_intel_allocator@execbuf-with-allocator
+igt@api_intel_allocator@fork-simple-once
+igt@api_intel_allocator@fork-simple-stress
+igt@api_intel_allocator@fork-simple-stress-signal
+igt@api_intel_allocator@open-vm
+igt@api_intel_allocator@random-allocator
+igt@api_intel_allocator@reloc-allocator
+igt@api_intel_allocator@reopen
+igt@api_intel_allocator@reopen-fork
+igt@api_intel_allocator@reserve
+igt@api_intel_allocator@reserve-simple
+igt@api_intel_allocator@reuse
+igt@api_intel_allocator@simple-allocator
+igt@api_intel_allocator@standalone
+igt@api_intel_allocator@two-level-inception
+igt@api_intel_allocator@two-level-inception-interruptible
+igt@api_intel_bb@add-remove-objects
+igt@api_intel_bb@bb-with-allocator
+igt@api_intel_bb@bb-with-vm
+igt@api_intel_bb@blit-noreloc-keep-cache-random
+igt@api_intel_bb@blit-noreloc-purge-cache-random
+igt@api_intel_bb@destroy-bb
+igt@api_intel_bb@object-noreloc-keep-cache-random
+igt@api_intel_bb@object-noreloc-keep-cache-simple
+igt@api_intel_bb@object-noreloc-purge-cache-random
+igt@api_intel_bb@object-noreloc-purge-cache-simple
+igt@api_intel_bb@object-reloc-keep-cache
+igt@api_intel_bb@object-reloc-purge-cache
+igt@api_intel_bb@purge-bb
+igt@api_intel_bb@reset-bb
+igt@gem_softpin@allocator-basic
+igt@gem_softpin@allocator-basic-reserve
+igt@gem_softpin@allocator-fork
+igt@gem_softpin@allocator-nopin
+igt@gem_softpin@allocator-nopin-reserve
-igt@api_intel_bb@check-canonical

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/index.html


_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev


* Re: [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init()
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
@ 2021-03-17 16:40   ` Jason Ekstrand
  0 siblings, 0 replies; 53+ messages in thread
From: Jason Ekstrand @ 2021-03-17 16:40 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: IGT GPU Tools

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
<zbigniew.kempczynski@intel.com> wrote:
>
> Add a helper function to delete and reinitialize a list element.
>
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Petri Latvala <petri.latvala@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  lib/igt_list.c | 6 ++++++
>  lib/igt_list.h | 1 +
>  2 files changed, 7 insertions(+)
>
> diff --git a/lib/igt_list.c b/lib/igt_list.c
> index 5e30b19b6..37ae139c4 100644
> --- a/lib/igt_list.c
> +++ b/lib/igt_list.c
> @@ -46,6 +46,12 @@ void igt_list_del(struct igt_list_head *elem)
>         elem->prev = NULL;
>  }
>
> +void igt_list_del_init(struct igt_list_head *elem)
> +{
> +       igt_list_del(elem);
> +       IGT_INIT_LIST_HEAD(elem);
> +}
> +
>  void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list)
>  {
>         igt_list_del(elem);
> diff --git a/lib/igt_list.h b/lib/igt_list.h
> index dbf5f802c..cc93d7a0d 100644
> --- a/lib/igt_list.h
> +++ b/lib/igt_list.h
> @@ -75,6 +75,7 @@ struct igt_list_head {
>  void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
>  void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
>  void igt_list_del(struct igt_list_head *elem);
> +void igt_list_del_init(struct igt_list_head *elem);
>  void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list);
>  void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
>  int igt_list_length(const struct igt_list_head *head);
> --
> 2.26.0
>


* Re: [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation Zbigniew Kempczyński
@ 2021-03-17 16:43   ` Jason Ekstrand
  2021-03-17 17:43     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 53+ messages in thread
From: Jason Ekstrand @ 2021-03-17 16:43 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: IGT GPU Tools

Would it be better to pull the hash table implementation from Mesa?
Or use https://cgit.freedesktop.org/~anholt/hash_table/ which should
be identical, though it may be a bit out-of-date.  I've poured enough
hours of my life into finding and fixing the bugs in the Mesa hash
table that IGT rolling its own doesn't really fill me with happy
feelings.

--Jason

On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
<zbigniew.kempczynski@intel.com> wrote:
>
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>
> A doubly-linked list with a single-pointer list head, based on the
> similar implementation in the kernel.
>
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
>  2 files changed, 120 insertions(+), 2 deletions(-)
>
> diff --git a/lib/igt_list.c b/lib/igt_list.c
> index 37ae139c4..43200f9b3 100644
> --- a/lib/igt_list.c
> +++ b/lib/igt_list.c
> @@ -22,6 +22,7 @@
>   *
>   */
>
> +#include "assert.h"
>  #include "igt_list.h"
>
>  void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
> @@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
>  {
>         return head->next == head;
>  }
> +
> +void igt_hlist_init(struct igt_hlist_node *h)
> +{
> +       h->next = NULL;
> +       h->pprev = NULL;
> +}
> +
> +int igt_hlist_unhashed(const struct igt_hlist_node *h)
> +{
> +       return !h->pprev;
> +}
> +
> +int igt_hlist_empty(const struct igt_hlist_head *h)
> +{
> +       return !h->first;
> +}
> +
> +static void __igt_hlist_del(struct igt_hlist_node *n)
> +{
> +       struct igt_hlist_node *next = n->next;
> +       struct igt_hlist_node **pprev = n->pprev;
> +
> +       *pprev = next;
> +       if (next)
> +               next->pprev = pprev;
> +}
> +
> +void igt_hlist_del(struct igt_hlist_node *n)
> +{
> +       __igt_hlist_del(n);
> +       n->next = NULL;
> +       n->pprev = NULL;
> +}
> +
> +void igt_hlist_del_init(struct igt_hlist_node *n)
> +{
> +       if (!igt_hlist_unhashed(n)) {
> +               __igt_hlist_del(n);
> +               igt_hlist_init(n);
> +       }
> +}
> +
> +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
> +{
> +       struct igt_hlist_node *first = h->first;
> +
> +       n->next = first;
> +       if (first)
> +               first->pprev = &n->next;
> +       h->first = n;
> +       n->pprev = &h->first;
> +}
> +
> +void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
> +{
> +       assert(next);
> +       n->pprev = next->pprev;
> +       n->next = next;
> +       next->pprev = &n->next;
> +       *(n->pprev) = n;
> +}
> +
> +void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
> +{
> +       n->next = prev->next;
> +       prev->next = n;
> +       n->pprev = &prev->next;
> +
> +       if (n->next)
> +               n->next->pprev  = &n->next;
> +}
> diff --git a/lib/igt_list.h b/lib/igt_list.h
> index cc93d7a0d..78e761e05 100644
> --- a/lib/igt_list.h
> +++ b/lib/igt_list.h
> @@ -40,6 +40,10 @@
>   * igt_list is a doubly-linked list where an instance of igt_list_head is a
>   * head sentinel and has to be initialized.
>   *
> > + * igt_hlist is also a doubly-linked list, but with a single-pointer list
> > + * head. Mostly useful for hash tables where the two-pointer list head is
> > + * too wasteful. You lose the ability to access the tail in O(1).
> + *
>   * Example usage:
>   *
>   * |[<!-- language="C" -->
> @@ -71,6 +75,13 @@ struct igt_list_head {
>         struct igt_list_head *next;
>  };
>
> +struct igt_hlist_head {
> +       struct igt_hlist_node *first;
> +};
> +
> +struct igt_hlist_node {
> +       struct igt_hlist_node *next, **pprev;
> +};
>
>  void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
>  void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
> @@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
>  int igt_list_length(const struct igt_list_head *head);
>  bool igt_list_empty(const struct igt_list_head *head);
>
> +void igt_hlist_init(struct igt_hlist_node *h);
> +int igt_hlist_unhashed(const struct igt_hlist_node *h);
> +int igt_hlist_empty(const struct igt_hlist_head *h);
> +void igt_hlist_del(struct igt_hlist_node *n);
> +void igt_hlist_del_init(struct igt_hlist_node *n);
> +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
> +void igt_hlist_add_before(struct igt_hlist_node *n,
> +                         struct igt_hlist_node *next);
> +void igt_hlist_add_behind(struct igt_hlist_node *n,
> +                         struct igt_hlist_node *prev);
> +
>  #define igt_container_of(ptr, sample, member)                          \
>         (__typeof__(sample))((char *)(ptr) -                            \
>                                 offsetof(__typeof__(*sample), member))
> @@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
>   * Safe against removal of the *current* list element. To achieve this it
>   * requires an extra helper variable `tmp` with the same type as `pos`.
>   */
> -#define igt_list_for_each_entry_safe(pos, tmp, head, member)                   \
> +
> +#define igt_list_for_each_entry_safe(pos, tmp, head, member)           \
>         for (pos = igt_container_of((head)->next, pos, member),         \
> -            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> +            tmp = igt_container_of((pos)->member.next, tmp, member);   \
>              &pos->member != (head);                                    \
>              pos = tmp,                                                 \
>              tmp = igt_container_of((pos)->member.next, tmp, member))
> @@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
>              &pos->member != (head);                                    \
>              pos = igt_container_of((pos)->member.prev, pos, member))
>
> +#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)           \
> +       for (pos = igt_container_of((head)->prev, pos, member),         \
> +            tmp = igt_container_of((pos)->member.prev, tmp, member);   \
> +            &pos->member != (head);                                    \
> +            pos = tmp,                                                 \
> +            tmp = igt_container_of((pos)->member.prev, tmp, member))
> +
> +#define igt_hlist_entry_safe(ptr, sample, member) \
> +       ({ typeof(ptr) ____ptr = (ptr); \
> +          ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
> +       })
> +
> +#define igt_hlist_for_each_entry(pos, head, member)                    \
> +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> +            pos;                                                       \
> +            pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
> +
> +#define igt_hlist_for_each_entry_safe(pos, n, head, member)            \
> +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> +            pos && ({ n = pos->member.next; 1; });                     \
> +            pos = igt_hlist_entry_safe(n, pos, member))
>
>  /* IGT custom helpers */
>
> @@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
>  #define igt_list_last_entry(head, type, member) \
>         igt_container_of((head)->prev, (type), member)
>
> +#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
> +
>  #endif /* IGT_LIST_H */
> --
> 2.26.0
>


* Re: [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 16:43   ` Jason Ekstrand
@ 2021-03-17 17:43     ` Zbigniew Kempczyński
  2021-03-17 18:02       ` Jason Ekstrand
  0 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 17:43 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 11:43:38AM -0500, Jason Ekstrand wrote:
> Would it be better to pull the hash table implementation from Mesa?
> Or use https://cgit.freedesktop.org/~anholt/hash_table/ which should
> be identical, though it may be a bit out-of-date.  I've poured enough
> hours of my life into finding and fixing the bugs in the Mesa hash
> table that IGT rolling its own doesn't really fill me with happy
> feelings.
> 
> --Jason

I'm not going to play devil's advocate; I'll just ask Dominik how
confident he is about that implementation. At the very least we can
provide some stress tests with multithreaded scenarios.

He ported the kernel hashtable, so unless he introduced a non-obvious
bug it should work.

Regarding Eric's implementation - we would need to adapt it, at least
renaming things to igt_* and fixing the code which uses it.
That should take a few days at most.

If you take a look at the api_intel_allocator@two-level-inception test,
you'll see we stress-test that hashtable heavily from many
processes / threads and we see no problems with data coherency
in the map.

--
Zbigniew


* [igt-dev] ✓ Fi.CI.IGT: success for Introduce IGT allocator (rev29)
  2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (35 preceding siblings ...)
  2021-03-17 16:03 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev29) Patchwork
@ 2021-03-17 17:47 ` Patchwork
  36 siblings, 0 replies; 53+ messages in thread
From: Patchwork @ 2021-03-17 17:47 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev



== Series Details ==

Series: Introduce IGT allocator (rev29)
URL   : https://patchwork.freedesktop.org/series/82954/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9865_full -> IGTPW_5611_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/index.html

New tests
---------

  New tests have been introduced between CI_DRM_9865_full and IGTPW_5611_full:

### New IGT tests (47) ###

  * igt@api_intel_allocator@alloc-simple:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@execbuf-with-allocator:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_allocator@fork-simple-once:
    - Statuses : 5 pass(s)
    - Exec time: [0.01, 0.04] s

  * igt@api_intel_allocator@fork-simple-stress:
    - Statuses : 5 pass(s)
    - Exec time: [5.40, 5.52] s

  * igt@api_intel_allocator@fork-simple-stress-signal:
    - Statuses : 6 pass(s)
    - Exec time: [5.41, 5.53] s

  * igt@api_intel_allocator@open-vm:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@random-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@random-allocator@basic:
    - Statuses : 6 pass(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_allocator@random-allocator@parallel-one:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@random-allocator@print:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reloc-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@reloc-allocator@basic:
    - Statuses : 5 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@reloc-allocator@parallel-one:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@reloc-allocator@print:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reopen:
    - Statuses : 4 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@reopen-fork:
    - Statuses : 4 pass(s)
    - Exec time: [3.24, 3.27] s

  * igt@api_intel_allocator@reserve:
    - Statuses : 5 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reserve-simple:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reuse:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@simple-allocator@basic:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@simple-allocator@parallel-one:
    - Statuses : 6 pass(s)
    - Exec time: [0.08, 0.17] s

  * igt@api_intel_allocator@simple-allocator@print:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator@reserve:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator@reuse:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@standalone:
    - Statuses : 4 pass(s)
    - Exec time: [0.01, 0.04] s

  * igt@api_intel_allocator@two-level-inception:
    - Statuses : 6 pass(s)
    - Exec time: [5.41, 5.53] s

  * igt@api_intel_allocator@two-level-inception-interruptible:
    - Statuses : 6 pass(s)
    - Exec time: [5.41, 5.53] s

  * igt@api_intel_bb@add-remove-objects:
    - Statuses : 1 pass(s)
    - Exec time: [0.00] s

  * igt@api_intel_bb@bb-with-allocator:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@bb-with-vm:
    - Statuses : 4 pass(s)
    - Exec time: [0.01, 0.03] s

  * igt@api_intel_bb@blit-noreloc-keep-cache-random:
    - Statuses : 4 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@api_intel_bb@blit-noreloc-purge-cache-random:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@destroy-bb:
    - Statuses : 5 pass(s)
    - Exec time: [0.01, 0.02] s

  * igt@api_intel_bb@object-noreloc-keep-cache-random:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@object-noreloc-keep-cache-simple:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@object-noreloc-purge-cache-random:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_bb@object-noreloc-purge-cache-simple:
    - Statuses : 4 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@object-reloc-keep-cache:
    - Statuses : 4 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@object-reloc-purge-cache:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@purge-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@reset-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@gem_softpin@allocator-basic:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.29] s

  * igt@gem_softpin@allocator-basic-reserve:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.21] s

  * igt@gem_softpin@allocator-fork:
    - Statuses : 4 pass(s)
    - Exec time: [2.20, 2.35] s

  * igt@gem_softpin@allocator-nopin:
    - Statuses : 4 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.29] s

  * igt@gem_softpin@allocator-nopin-reserve:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.28] s

  

Known issues
------------

  Here are the changes found in IGTPW_5611_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@blit-noreloc-purge-cache:
    - shard-snb:          [PASS][1] -> [SKIP][2] ([fdo#109271])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-snb2/igt@api_intel_bb@blit-noreloc-purge-cache.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb5/igt@api_intel_bb@blit-noreloc-purge-cache.html

  * igt@gem_ctx_persistence@legacy-engines-queued:
    - shard-snb:          NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#1099]) +4 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb5/igt@gem_ctx_persistence@legacy-engines-queued.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [PASS][4] -> [FAIL][5] ([i915#2846])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-glk1/igt@gem_exec_fair@basic-deadline.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk3/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][6] ([i915#2842])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb4/igt@gem_exec_fair@basic-none@vcs1.html

  * igt@gem_exec_fair@basic-none@vecs0:
    - shard-kbl:          [PASS][7] -> [FAIL][8] ([i915#2842])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-kbl4/igt@gem_exec_fair@basic-none@vecs0.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl7/igt@gem_exec_fair@basic-none@vecs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-tglb:         [PASS][9] -> [FAIL][10] ([i915#2842])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-tglb8/igt@gem_exec_fair@basic-pace@rcs0.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb1/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_reloc@basic-wide-active@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][11] ([i915#2389])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb4/igt@gem_exec_reloc@basic-wide-active@vcs1.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][12] ([fdo#109271] / [i915#2190])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl1/igt@gem_huc_copy@huc-copy.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-apl:          NOTRUN -> [WARN][13] ([i915#2658])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl7/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_softpin@evict-snoop:
    - shard-iclb:         NOTRUN -> [SKIP][14] ([fdo#109312])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb2/igt@gem_softpin@evict-snoop.html
    - shard-tglb:         NOTRUN -> [SKIP][15] ([fdo#109312])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb7/igt@gem_softpin@evict-snoop.html

  * igt@gem_userptr_blits@input-checking:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][16] ([i915#3002])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl6/igt@gem_userptr_blits@input-checking.html
    - shard-snb:          NOTRUN -> [DMESG-WARN][17] ([i915#3002])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb7/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@process-exit-mmap@wb:
    - shard-apl:          NOTRUN -> [SKIP][18] ([fdo#109271] / [i915#1699]) +3 similar issues
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl2/igt@gem_userptr_blits@process-exit-mmap@wb.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-apl:          NOTRUN -> [INCOMPLETE][19] ([i915#2502] / [i915#2667])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl2/igt@gem_userptr_blits@vma-merge.html
    - shard-kbl:          NOTRUN -> [INCOMPLETE][20] ([i915#2502] / [i915#2667])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl6/igt@gem_userptr_blits@vma-merge.html

  * igt@gen7_exec_parse@basic-offset:
    - shard-apl:          NOTRUN -> [SKIP][21] ([fdo#109271]) +198 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl8/igt@gen7_exec_parse@basic-offset.html

  * igt@gen9_exec_parse@basic-rejected-ctx-param:
    - shard-tglb:         NOTRUN -> [SKIP][22] ([fdo#112306])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb5/igt@gen9_exec_parse@basic-rejected-ctx-param.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-tglb:         NOTRUN -> [FAIL][23] ([i915#454])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb2/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_rc6_residency@rc6-idle:
    - shard-tglb:         NOTRUN -> [WARN][24] ([i915#2681] / [i915#2684])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb1/igt@i915_pm_rc6_residency@rc6-idle.html
    - shard-iclb:         NOTRUN -> [WARN][25] ([i915#1804] / [i915#2684])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb4/igt@i915_pm_rc6_residency@rc6-idle.html

  * igt@i915_pm_rpm@dpms-mode-unset-non-lpsp:
    - shard-tglb:         NOTRUN -> [SKIP][26] ([fdo#111644] / [i915#1397] / [i915#2411])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb2/igt@i915_pm_rpm@dpms-mode-unset-non-lpsp.html
    - shard-iclb:         NOTRUN -> [SKIP][27] ([fdo#110892])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb4/igt@i915_pm_rpm@dpms-mode-unset-non-lpsp.html

  * igt@i915_selftest@live@client:
    - shard-glk:          [PASS][28] -> [DMESG-FAIL][29] ([i915#3047])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-glk2/igt@i915_selftest@live@client.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk6/igt@i915_selftest@live@client.html

  * igt@kms_async_flips@alternate-sync-async-flip:
    - shard-snb:          [PASS][30] -> [FAIL][31] ([i915#2521])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-snb7/igt@kms_async_flips@alternate-sync-async-flip.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb2/igt@kms_async_flips@alternate-sync-async-flip.html

  * igt@kms_big_fb@linear-64bpp-rotate-90:
    - shard-iclb:         NOTRUN -> [SKIP][32] ([fdo#110725] / [fdo#111614])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb1/igt@kms_big_fb@linear-64bpp-rotate-90.html
    - shard-tglb:         NOTRUN -> [SKIP][33] ([fdo#111614])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb2/igt@kms_big_fb@linear-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-16bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][34] ([fdo#111615]) +2 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb1/igt@kms_big_fb@yf-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-270:
    - shard-iclb:         NOTRUN -> [SKIP][35] ([fdo#110723])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb6/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html

  * igt@kms_big_joiner@basic:
    - shard-tglb:         NOTRUN -> [SKIP][36] ([i915#2705])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb8/igt@kms_big_joiner@basic.html
    - shard-kbl:          NOTRUN -> [SKIP][37] ([fdo#109271] / [i915#2705])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl4/igt@kms_big_joiner@basic.html
    - shard-iclb:         NOTRUN -> [SKIP][38] ([i915#2705])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb1/igt@kms_big_joiner@basic.html
    - shard-glk:          NOTRUN -> [SKIP][39] ([fdo#109271] / [i915#2705])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk1/igt@kms_big_joiner@basic.html
    - shard-apl:          NOTRUN -> [SKIP][40] ([fdo#109271] / [i915#2705]) +1 similar issue
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl2/igt@kms_big_joiner@basic.html

  * igt@kms_chamelium@dp-mode-timings:
    - shard-apl:          NOTRUN -> [SKIP][41] ([fdo#109271] / [fdo#111827]) +22 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl3/igt@kms_chamelium@dp-mode-timings.html

  * igt@kms_chamelium@vga-hpd-fast:
    - shard-tglb:         NOTRUN -> [SKIP][42] ([fdo#109284] / [fdo#111827]) +7 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb3/igt@kms_chamelium@vga-hpd-fast.html

  * igt@kms_color@pipe-a-ctm-0-25:
    - shard-iclb:         NOTRUN -> [FAIL][43] ([i915#1149] / [i915#315])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb5/igt@kms_color@pipe-a-ctm-0-25.html
    - shard-tglb:         NOTRUN -> [FAIL][44] ([i915#1149] / [i915#315])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb3/igt@kms_color@pipe-a-ctm-0-25.html

  * igt@kms_color_chamelium@pipe-a-ctm-blue-to-red:
    - shard-iclb:         NOTRUN -> [SKIP][45] ([fdo#109284] / [fdo#111827]) +3 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb2/igt@kms_color_chamelium@pipe-a-ctm-blue-to-red.html

  * igt@kms_color_chamelium@pipe-b-ctm-max:
    - shard-snb:          NOTRUN -> [SKIP][46] ([fdo#109271] / [fdo#111827]) +20 similar issues
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb6/igt@kms_color_chamelium@pipe-b-ctm-max.html

  * igt@kms_color_chamelium@pipe-invalid-gamma-lut-sizes:
    - shard-kbl:          NOTRUN -> [SKIP][47] ([fdo#109271] / [fdo#111827]) +12 similar issues
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl6/igt@kms_color_chamelium@pipe-invalid-gamma-lut-sizes.html
    - shard-glk:          NOTRUN -> [SKIP][48] ([fdo#109271] / [fdo#111827]) +1 similar issue
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk4/igt@kms_color_chamelium@pipe-invalid-gamma-lut-sizes.html

  * igt@kms_content_protection@lic:
    - shard-apl:          NOTRUN -> [TIMEOUT][49] ([i915#1319]) +2 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl3/igt@kms_content_protection@lic.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x170-offscreen:
    - shard-iclb:         NOTRUN -> [SKIP][50] ([fdo#109278] / [fdo#109279])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb2/igt@kms_cursor_crc@pipe-b-cursor-512x170-offscreen.html
    - shard-tglb:         NOTRUN -> [SKIP][51] ([fdo#109279])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb7/igt@kms_cursor_crc@pipe-b-cursor-512x170-offscreen.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][52] ([i915#180])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl3/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-d-cursor-suspend:
    - shard-kbl:          NOTRUN -> [SKIP][53] ([fdo#109271]) +103 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl1/igt@kms_cursor_crc@pipe-d-cursor-suspend.html
    - shard-glk:          NOTRUN -> [SKIP][54] ([fdo#109271]) +11 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk9/igt@kms_cursor_crc@pipe-d-cursor-suspend.html

  * igt@kms_flip@2x-flip-vs-fences:
    - shard-iclb:         NOTRUN -> [SKIP][55] ([fdo#109274]) +1 similar issue
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb1/igt@kms_flip@2x-flip-vs-fences.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-kbl:          [PASS][56] -> [DMESG-WARN][57] ([i915#180]) +2 similar issues
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-kbl4/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl7/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile:
    - shard-apl:          NOTRUN -> [SKIP][58] ([fdo#109271] / [i915#2642])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl6/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile:
    - shard-kbl:          NOTRUN -> [FAIL][59] ([i915#2641])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl6/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs:
    - shard-apl:          NOTRUN -> [SKIP][60] ([fdo#109271] / [i915#2672])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl6/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-pwrite:
    - shard-snb:          NOTRUN -> [SKIP][61] ([fdo#109271]) +350 similar issues
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-wc:
    - shard-iclb:         NOTRUN -> [SKIP][62] ([fdo#109280]) +6 similar issues
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb7/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-move:
    - shard-tglb:         NOTRUN -> [SKIP][63] ([fdo#111825]) +18 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-move.html

  * igt@kms_hdr@static-swap:
    - shard-tglb:         NOTRUN -> [SKIP][64] ([i915#1187])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb3/igt@kms_hdr@static-swap.html
    - shard-iclb:         NOTRUN -> [SKIP][65] ([i915#1187])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb3/igt@kms_hdr@static-swap.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d:
    - shard-kbl:          NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#533]) +1 similar issue
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl6/igt@kms_pipe_crc_basic@read-crc-pipe-d.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence:
    - shard-iclb:         NOTRUN -> [SKIP][67] ([fdo#109278]) +5 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb7/igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence.html
    - shard-glk:          NOTRUN -> [SKIP][68] ([fdo#109271] / [i915#533])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk6/igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - shard-apl:          NOTRUN -> [DMESG-WARN][69] ([i915#180])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl2/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max:
    - shard-kbl:          NOTRUN -> [FAIL][70] ([fdo#108145] / [i915#265])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl2/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-basic:
    - shard-apl:          NOTRUN -> [FAIL][71] ([fdo#108145] / [i915#265]) +2 similar issues
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl8/igt@kms_plane_alpha_blend@pipe-c-alpha-basic.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4:
    - shard-apl:          NOTRUN -> [SKIP][72] ([fdo#109271] / [i915#658]) +5 similar issues
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl6/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5:
    - shard-iclb:         NOTRUN -> [SKIP][73] ([i915#658])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb5/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
    - shard-kbl:          NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#658]) +3 similar issues
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html

  * igt@kms_psr2_su@frontbuffer:
    - shard-iclb:         [PASS][75] -> [SKIP][76] ([fdo#109642] / [fdo#111068] / [i915#658])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-iclb2/igt@kms_psr2_su@frontbuffer.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb4/igt@kms_psr2_su@frontbuffer.html

  * igt@kms_psr@psr2_basic:
    - shard-iclb:         [PASS][77] -> [SKIP][78] ([fdo#109441])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-iclb2/igt@kms_psr@psr2_basic.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb1/igt@kms_psr@psr2_basic.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         NOTRUN -> [SKIP][79] ([fdo#109441]) +1 similar issue
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb8/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_vblank@pipe-d-wait-idle:
    - shard-apl:          NOTRUN -> [SKIP][80] ([fdo#109271] / [i915#533]) +4 similar issues
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl7/igt@kms_vblank@pipe-d-wait-idle.html

  * igt@kms_writeback@writeback-check-output:
    - shard-kbl:          NOTRUN -> [SKIP][81] ([fdo#109271] / [i915#2437])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl4/igt@kms_writeback@writeback-check-output.html

  * igt@nouveau_crc@pipe-b-source-rg:
    - shard-iclb:         NOTRUN -> [SKIP][82] ([i915#2530]) +1 similar issue
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb4/igt@nouveau_crc@pipe-b-source-rg.html

  * igt@nouveau_crc@pipe-c-source-outp-complete:
    - shard-tglb:         NOTRUN -> [SKIP][83] ([i915#2530]) +2 similar issues
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb2/igt@nouveau_crc@pipe-c-source-outp-complete.html

  * igt@perf@gen12-mi-rpc:
    - shard-iclb:         NOTRUN -> [SKIP][84] ([fdo#109289])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb3/igt@perf@gen12-mi-rpc.html

  * igt@perf@mi-rpc:
    - shard-tglb:         NOTRUN -> [SKIP][85] ([fdo#109289])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb3/igt@perf@mi-rpc.html

  * igt@prime_nv_pcopy@test1_macro:
    - shard-tglb:         NOTRUN -> [SKIP][86] ([fdo#109291])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb8/igt@prime_nv_pcopy@test1_macro.html

  * igt@runner@aborted:
    - shard-snb:          NOTRUN -> [FAIL][87] ([i915#3002] / [i915#698])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-snb7/igt@runner@aborted.html

  * igt@sysfs_clients@split-10@bcs0:
    - shard-kbl:          NOTRUN -> [SKIP][88] ([fdo#109271] / [i915#3026])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl2/igt@sysfs_clients@split-10@bcs0.html

  
#### Possible fixes ####

  * igt@gem_ctx_persistence@close-replace-race:
    - shard-glk:          [TIMEOUT][89] ([i915#2918]) -> [PASS][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-glk5/igt@gem_ctx_persistence@close-replace-race.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk3/igt@gem_ctx_persistence@close-replace-race.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-kbl:          [FAIL][91] ([i915#2846]) -> [PASS][92]
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-kbl4/igt@gem_exec_fair@basic-deadline.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl2/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglb:         [FAIL][93] ([i915#2842]) -> [PASS][94] +1 similar issue
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-tglb2/igt@gem_exec_fair@basic-flow@rcs0.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb5/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-glk:          [FAIL][95] ([i915#2842]) -> [PASS][96] +3 similar issues
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-glk1/igt@gem_exec_fair@basic-none@vcs0.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk3/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-kbl:          [FAIL][97] ([i915#2842]) -> [PASS][98] +1 similar issue
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-kbl3/igt@gem_exec_fair@basic-pace@vecs0.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl2/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_schedule@u-fairslice@bcs0:
    - shard-iclb:         [DMESG-WARN][99] ([i915#2803]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-iclb5/igt@gem_exec_schedule@u-fairslice@bcs0.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb6/igt@gem_exec_schedule@u-fairslice@bcs0.html
    - shard-tglb:         [DMESG-WARN][101] ([i915#2803]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-tglb1/igt@gem_exec_schedule@u-fairslice@bcs0.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb1/igt@gem_exec_schedule@u-fairslice@bcs0.html

  * igt@gem_exec_suspend@basic-s3:
    - shard-kbl:          [DMESG-WARN][103] ([i915#180]) -> [PASS][104] +2 similar issues
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-kbl7/igt@gem_exec_suspend@basic-s3.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl4/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_exec_whisper@basic-contexts-priority-all:
    - shard-glk:          [DMESG-WARN][105] ([i915#118] / [i915#95]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-glk8/igt@gem_exec_whisper@basic-contexts-priority-all.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-glk8/igt@gem_exec_whisper@basic-contexts-priority-all.html

  * igt@gem_mmap_gtt@cpuset-big-copy-xy:
    - shard-iclb:         [FAIL][107] ([i915#307]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-iclb1/igt@gem_mmap_gtt@cpuset-big-copy-xy.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb6/igt@gem_mmap_gtt@cpuset-big-copy-xy.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-apl:          [DMESG-WARN][109] ([i915#180]) -> [PASS][110] +2 similar issues
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-apl6/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-apl6/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-move:
    - shard-iclb:         [FAIL][111] ([i915#49]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-iclb8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-move.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-move.html

  * igt@kms_hdmi_inject@inject-audio:
    - shard-tglb:         [SKIP][113] ([i915#433]) -> [PASS][114]
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-tglb7/igt@kms_hdmi_inject@inject-audio.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-tglb2/igt@kms_hdmi_inject@inject-audio.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes:
    - shard-kbl:          [DMESG-WARN][115] ([i915#180] / [i915#533]) -> [PASS][116]
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-kbl3/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-kbl3/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         [SKIP][117] ([fdo#109441]) -> [PASS][118] +1 similar issue
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9865/shard-iclb4/igt@kms_psr@psr2_cursor_plane_move.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/shard-iclb2/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@sysfs_clients@recycle:
    - shard-kbl:          [FAIL][119] ([i915#3028]) -> [PASS][120]
   [1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5611/index.html

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 17:43     ` Zbigniew Kempczyński
@ 2021-03-17 18:02       ` Jason Ekstrand
  2021-03-17 19:13         ` Zbigniew Kempczyński
  0 siblings, 1 reply; 53+ messages in thread
From: Jason Ekstrand @ 2021-03-17 18:02 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 12:43 PM Zbigniew Kempczyński
<zbigniew.kempczynski@intel.com> wrote:
>
> On Wed, Mar 17, 2021 at 11:43:38AM -0500, Jason Ekstrand wrote:
> > Would it be better to pull the hash table implementation from Mesa?
> > Or use https://cgit.freedesktop.org/~anholt/hash_table/ which should
> > be identical, though it may be a bit out-of-date.  I've poured enough
> > hours of my life into finding and fixing the bugs in the Mesa hash
> > table that IGT rolling its own doesn't really fill me with happy
> > feelings.
> >
> > --Jason
>
> I'm not going to play devil's advocate; I'll just ask Dominik how
> confident he is about that implementation. At the least we can provide
> some stress tests with multithreading scenarios.
>
> He ported the kernel hashtable, so as long as he didn't introduce a
> non-obvious bug it should work.

If this is, indeed, a port of the kernel hash table, then it's
probably ok.  Not sure how that works out on licenses, but it should
be correct.  If this has been freshly hand-rolled expressly for this
purpose, then it makes me nervous.

> Regarding Eric's implementation - we would need to adapt it, at least
> renaming things to igt_* and fixing the code which uses it.
> That should take a few days max.
>
> If you take a look at the api_intel_allocator@two-level-inception test
> you'll see we stress that hashtable heavily from many
> processes / threads, and we see no loss of data coherency
> in the map.

I'm less worried about threads, so long as it's locked, than I am
about weird insert/remove patterns.  Most of the bugs I've had to fix
in Mesa were because certain patterns of insert/remove with the right
hashes would cause it to blow up.  If most of what you do is add a
bunch of stuff without a lot of removal, such bugs can be rare and
hard to find.  That said, given that it's linked-list based and not
re-hash based, a lot of those corner cases are less likely than they
were with the Mesa implementation.  Again, my primary concern is with
a NEW hash table impl. :-)
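
For concreteness, here is a minimal standalone sketch of the pprev-based
hlist design under discussion; the double-pointer back-link is what makes
head and interior removal uniform. The names (`hnode`, `hhead`, `hlist_*`)
are simplified illustrations, not the actual igt_hlist_* API:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Minimal sketch of a kernel-style pprev hlist: each node keeps a
 * pointer to whatever pointed at it (the head's 'first' or the previous
 * node's 'next'), so deletion never needs to know about the head.
 */
struct hnode { struct hnode *next, **pprev; };
struct hhead { struct hnode *first; };

static void hlist_add_head(struct hnode *n, struct hhead *h)
{
	n->next = h->first;
	if (h->first)
		h->first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

static void hlist_del(struct hnode *n)
{
	/* Works identically for the head node and interior nodes. */
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
	n->next = NULL;
	n->pprev = NULL;
}

static int hlist_len(const struct hhead *h)
{
	int len = 0;

	for (struct hnode *p = h->first; p; p = p->next)
		len++;
	return len;
}
```

Because `pprev` points at the incoming link rather than at the previous
node, a one-pointer head is enough per hash bucket, which is the space
win the patch's own comment describes.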

--Jason

> --
> Zbigniew
> >
> > On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
> > <zbigniew.kempczynski@intel.com> wrote:
> > >
> > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > >
> > > Double linked lists with a single pointer list head implementation,
> > > based on similar in the kernel.
> > >
> > > Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > ---
> > >  lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
> > >  2 files changed, 120 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/igt_list.c b/lib/igt_list.c
> > > index 37ae139c4..43200f9b3 100644
> > > --- a/lib/igt_list.c
> > > +++ b/lib/igt_list.c
> > > @@ -22,6 +22,7 @@
> > >   *
> > >   */
> > >
> > > +#include "assert.h"
> > >  #include "igt_list.h"
> > >
> > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
> > > @@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
> > >  {
> > >         return head->next == head;
> > >  }
> > > +
> > > +void igt_hlist_init(struct igt_hlist_node *h)
> > > +{
> > > +       h->next = NULL;
> > > +       h->pprev = NULL;
> > > +}
> > > +
> > > +int igt_hlist_unhashed(const struct igt_hlist_node *h)
> > > +{
> > > +       return !h->pprev;
> > > +}
> > > +
> > > +int igt_hlist_empty(const struct igt_hlist_head *h)
> > > +{
> > > +       return !h->first;
> > > +}
> > > +
> > > +static void __igt_hlist_del(struct igt_hlist_node *n)
> > > +{
> > > +       struct igt_hlist_node *next = n->next;
> > > +       struct igt_hlist_node **pprev = n->pprev;
> > > +
> > > +       *pprev = next;
> > > +       if (next)
> > > +               next->pprev = pprev;
> > > +}
> > > +
> > > +void igt_hlist_del(struct igt_hlist_node *n)
> > > +{
> > > +       __igt_hlist_del(n);
> > > +       n->next = NULL;
> > > +       n->pprev = NULL;
> > > +}
> > > +
> > > +void igt_hlist_del_init(struct igt_hlist_node *n)
> > > +{
> > > +       if (!igt_hlist_unhashed(n)) {
> > > +               __igt_hlist_del(n);
> > > +               igt_hlist_init(n);
> > > +       }
> > > +}
> > > +
> > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
> > > +{
> > > +       struct igt_hlist_node *first = h->first;
> > > +
> > > +       n->next = first;
> > > +       if (first)
> > > +               first->pprev = &n->next;
> > > +       h->first = n;
> > > +       n->pprev = &h->first;
> > > +}
> > > +
> > > +void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
> > > +{
> > > +       assert(next);
> > > +       n->pprev = next->pprev;
> > > +       n->next = next;
> > > +       next->pprev = &n->next;
> > > +       *(n->pprev) = n;
> > > +}
> > > +
> > > +void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
> > > +{
> > > +       n->next = prev->next;
> > > +       prev->next = n;
> > > +       n->pprev = &prev->next;
> > > +
> > > +       if (n->next)
> > > +               n->next->pprev  = &n->next;
> > > +}
> > > diff --git a/lib/igt_list.h b/lib/igt_list.h
> > > index cc93d7a0d..78e761e05 100644
> > > --- a/lib/igt_list.h
> > > +++ b/lib/igt_list.h
> > > @@ -40,6 +40,10 @@
> > >   * igt_list is a doubly-linked list where an instance of igt_list_head is a
> > >   * head sentinel and has to be initialized.
> > >   *
> > > + * igt_hlist is also a doubly-linked list, but with a single-pointer list head.
> > > + * Mostly useful for hash tables, where a two-pointer list head is
> > > + * too wasteful. You lose the ability to access the tail in O(1).
> > > + *
> > >   * Example usage:
> > >   *
> > >   * |[<!-- language="C" -->
> > > @@ -71,6 +75,13 @@ struct igt_list_head {
> > >         struct igt_list_head *next;
> > >  };
> > >
> > > +struct igt_hlist_head {
> > > +       struct igt_hlist_node *first;
> > > +};
> > > +
> > > +struct igt_hlist_node {
> > > +       struct igt_hlist_node *next, **pprev;
> > > +};
> > >
> > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
> > >  void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
> > > @@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
> > >  int igt_list_length(const struct igt_list_head *head);
> > >  bool igt_list_empty(const struct igt_list_head *head);
> > >
> > > +void igt_hlist_init(struct igt_hlist_node *h);
> > > +int igt_hlist_unhashed(const struct igt_hlist_node *h);
> > > +int igt_hlist_empty(const struct igt_hlist_head *h);
> > > +void igt_hlist_del(struct igt_hlist_node *n);
> > > +void igt_hlist_del_init(struct igt_hlist_node *n);
> > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
> > > +void igt_hlist_add_before(struct igt_hlist_node *n,
> > > +                         struct igt_hlist_node *next);
> > > +void igt_hlist_add_behind(struct igt_hlist_node *n,
> > > +                         struct igt_hlist_node *prev);
> > > +
> > >  #define igt_container_of(ptr, sample, member)                          \
> > >         (__typeof__(sample))((char *)(ptr) -                            \
> > >                                 offsetof(__typeof__(*sample), member))
> > > @@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
> > >   * Safe against removal of the *current* list element. To achieve this it
> > >   * requires an extra helper variable `tmp` with the same type as `pos`.
> > >   */
> > > -#define igt_list_for_each_entry_safe(pos, tmp, head, member)                   \
> > > +
> > > +#define igt_list_for_each_entry_safe(pos, tmp, head, member)           \
> > >         for (pos = igt_container_of((head)->next, pos, member),         \
> > > -            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > +            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > >              &pos->member != (head);                                    \
> > >              pos = tmp,                                                 \
> > >              tmp = igt_container_of((pos)->member.next, tmp, member))
> > > @@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
> > >              &pos->member != (head);                                    \
> > >              pos = igt_container_of((pos)->member.prev, pos, member))
> > >
> > > +#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)           \
> > > +       for (pos = igt_container_of((head)->prev, pos, member),         \
> > > +            tmp = igt_container_of((pos)->member.prev, tmp, member);   \
> > > +            &pos->member != (head);                                    \
> > > +            pos = tmp,                                                 \
> > > +            tmp = igt_container_of((pos)->member.prev, tmp, member))
> > > +
> > > +#define igt_hlist_entry_safe(ptr, sample, member) \
> > > +       ({ typeof(ptr) ____ptr = (ptr); \
> > > +          ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
> > > +       })
> > > +
> > > +#define igt_hlist_for_each_entry(pos, head, member)                    \
> > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > +            pos;                                                       \
> > > +            pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
> > > +
> > > +#define igt_hlist_for_each_entry_safe(pos, n, head, member)            \
> > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > +            pos && ({ n = pos->member.next; 1; });                     \
> > > +            pos = igt_hlist_entry_safe(n, pos, member))
> > >
> > >  /* IGT custom helpers */
> > >
> > > @@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
> > >  #define igt_list_last_entry(head, type, member) \
> > >         igt_container_of((head)->prev, (type), member)
> > >
> > > +#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
> > > +
> > >  #endif /* IGT_LIST_H */
> > > --
> > > 2.26.0
> > >
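
As a hedged illustration of the safe-iteration pattern that the patch's
`igt_hlist_for_each_entry_safe` macro encodes (save the successor before
the body may unlink the current entry), here is a standalone
approximation; the names and helpers below are simplified stand-ins, not
the IGT code:

```c
#include <assert.h>
#include <stddef.h>

/* Standalone approximation of the hlist safe-iteration pattern. */
struct hnode { struct hnode *next, **pprev; };
struct hhead { struct hnode *first; };
struct item  { int v; struct hnode link; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void hlist_add_head(struct hnode *n, struct hhead *h)
{
	n->next = h->first;
	if (h->first)
		h->first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

static void hlist_del(struct hnode *n)
{
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
}

/*
 * Unlink every odd-valued item.  The successor is saved in 'n' before
 * the body runs, so deleting 'pos' cannot derail the walk; this is the
 * property the _safe macro variant provides.
 */
static int drop_odd(struct hhead *h)
{
	struct hnode *pos, *n;
	int dropped = 0;

	for (pos = h->first; pos; pos = n) {
		n = pos->next;
		if (container_of(pos, struct item, link)->v & 1) {
			hlist_del(pos);
			dropped++;
		}
	}
	return dropped;
}
```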

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 18:02       ` Jason Ekstrand
@ 2021-03-17 19:13         ` Zbigniew Kempczyński
  2021-03-17 20:44           ` Grzegorzek, Dominik
  0 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-17 19:13 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 01:02:53PM -0500, Jason Ekstrand wrote:
> On Wed, Mar 17, 2021 at 12:43 PM Zbigniew Kempczyński
> <zbigniew.kempczynski@intel.com> wrote:
> >
> > On Wed, Mar 17, 2021 at 11:43:38AM -0500, Jason Ekstrand wrote:
> > > Would it be better to pull the hash table implementation from Mesa?
> > > Or use https://cgit.freedesktop.org/~anholt/hash_table/ which should
> > > be identical, though it may be a bit out-of-date.  I've poured enough
> > > hours of my life into finding and fixing the bugs in the Mesa hash
> > > table that IGT rolling its own doesn't really fill me with happy
> > > feelings.
> > >
> > > --Jason
> >
> > I'm not going to play devil's advocate; I'll just ask Dominik how
> > confident he is about that implementation. At the least we can provide
> > some stress tests with multithreading scenarios.
> >
> > He's ported the kernel hashtable, so if he didn't introduce a
> > non-obvious bug it should work.
> 
> If this is, indeed, a port of the kernel hash table, then it's
> probably ok.  Not sure how that works out on licenses, but it should
> be correct.  If this has been freshly hand-rolled expressly for this
> purpose, then it makes me nervous.
> 
> > Regarding Eric's implementation - we would need to adapt it, at least
> > renaming to igt_* and fixing the code which uses it.
> > That should take a few days max.
> >
> > If you take a look at the api_intel_allocator@two-level-inception test
> > you'll see we stress that hashtable heavily from many processes /
> > threads, and we see no problem with losing data coherency in the map.
> 
> I'm less worried about threads, so long as it's locked, as I am about
> weird insert/remove patterns.  Most of the bugs I've had to fix in
> mesa were because certain patterns of insert/remove with the right
> hashes would cause it to blow up.  If most of what you do is just add
> a bunch of stuff without a lot of removal, they can be unlikely and
> hard to find.  That said, given that it's linked list based and not
> re-hash based, a lot of those corner cases are less likely than they
> were with the Mesa implementation.  Again, my primary concern is with
> a NEW hash table impl. :-)

I'm not sure I've understood you correctly - it is re-hash based (see
igt_map_extend()), and it grows when necessary. It works properly as
long as you don't forget about locking around operations (not only add,
but find too, because it is not funny when someone pulls the rug out
from under your feet).

All multiprocess/multithread tests in api_intel_allocator heavily
exercise insert/remove scenarios (open/close of the allocator gives you
a new handle inserted into/removed from the handles hash table, so any
lack of consistency would be found quickly, imo).

--
Zbigniew

> 
> --Jason
> 
> > --
> > Zbigniew
> > >
> > > On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
> > > <zbigniew.kempczynski@intel.com> wrote:
> > > >
> > > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > >
> > > > A doubly-linked list with a single-pointer list head, based on the
> > > > similar implementation in the kernel.
> > > >
> > > > Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > ---
> > > >  lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
> > > >  2 files changed, 120 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/lib/igt_list.c b/lib/igt_list.c
> > > > index 37ae139c4..43200f9b3 100644
> > > > --- a/lib/igt_list.c
> > > > +++ b/lib/igt_list.c
> > > > @@ -22,6 +22,7 @@
> > > >   *
> > > >   */
> > > >
> > > > +#include "assert.h"
> > > >  #include "igt_list.h"
> > > >
> > > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
> > > > @@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
> > > >  {
> > > >         return head->next == head;
> > > >  }
> > > > +
> > > > +void igt_hlist_init(struct igt_hlist_node *h)
> > > > +{
> > > > +       h->next = NULL;
> > > > +       h->pprev = NULL;
> > > > +}
> > > > +
> > > > +int igt_hlist_unhashed(const struct igt_hlist_node *h)
> > > > +{
> > > > +       return !h->pprev;
> > > > +}
> > > > +
> > > > +int igt_hlist_empty(const struct igt_hlist_head *h)
> > > > +{
> > > > +       return !h->first;
> > > > +}
> > > > +
> > > > +static void __igt_hlist_del(struct igt_hlist_node *n)
> > > > +{
> > > > +       struct igt_hlist_node *next = n->next;
> > > > +       struct igt_hlist_node **pprev = n->pprev;
> > > > +
> > > > +       *pprev = next;
> > > > +       if (next)
> > > > +               next->pprev = pprev;
> > > > +}
> > > > +
> > > > +void igt_hlist_del(struct igt_hlist_node *n)
> > > > +{
> > > > +       __igt_hlist_del(n);
> > > > +       n->next = NULL;
> > > > +       n->pprev = NULL;
> > > > +}
> > > > +
> > > > +void igt_hlist_del_init(struct igt_hlist_node *n)
> > > > +{
> > > > +       if (!igt_hlist_unhashed(n)) {
> > > > +               __igt_hlist_del(n);
> > > > +               igt_hlist_init(n);
> > > > +       }
> > > > +}
> > > > +
> > > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
> > > > +{
> > > > +       struct igt_hlist_node *first = h->first;
> > > > +
> > > > +       n->next = first;
> > > > +       if (first)
> > > > +               first->pprev = &n->next;
> > > > +       h->first = n;
> > > > +       n->pprev = &h->first;
> > > > +}
> > > > +
> > > > +void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
> > > > +{
> > > > +       assert(next);
> > > > +       n->pprev = next->pprev;
> > > > +       n->next = next;
> > > > +       next->pprev = &n->next;
> > > > +       *(n->pprev) = n;
> > > > +}
> > > > +
> > > > +void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
> > > > +{
> > > > +       n->next = prev->next;
> > > > +       prev->next = n;
> > > > +       n->pprev = &prev->next;
> > > > +
> > > > +       if (n->next)
> > > > +               n->next->pprev  = &n->next;
> > > > +}
> > > > diff --git a/lib/igt_list.h b/lib/igt_list.h
> > > > index cc93d7a0d..78e761e05 100644
> > > > --- a/lib/igt_list.h
> > > > +++ b/lib/igt_list.h
> > > > @@ -40,6 +40,10 @@
> > > >   * igt_list is a doubly-linked list where an instance of igt_list_head is a
> > > >   * head sentinel and has to be initialized.
> > > >   *
> > > > + * igt_hlist is also a doubly-linked list, but with a single-pointer list
> > > > + * head. It is mostly useful for hash tables, where a two-pointer list head
> > > > + * is too wasteful. You lose the ability to access the tail in O(1).
> > > > + *
> > > >   * Example usage:
> > > >   *
> > > >   * |[<!-- language="C" -->
> > > > @@ -71,6 +75,13 @@ struct igt_list_head {
> > > >         struct igt_list_head *next;
> > > >  };
> > > >
> > > > +struct igt_hlist_head {
> > > > +       struct igt_hlist_node *first;
> > > > +};
> > > > +
> > > > +struct igt_hlist_node {
> > > > +       struct igt_hlist_node *next, **pprev;
> > > > +};
> > > >
> > > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
> > > >  void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
> > > > @@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
> > > >  int igt_list_length(const struct igt_list_head *head);
> > > >  bool igt_list_empty(const struct igt_list_head *head);
> > > >
> > > > +void igt_hlist_init(struct igt_hlist_node *h);
> > > > +int igt_hlist_unhashed(const struct igt_hlist_node *h);
> > > > +int igt_hlist_empty(const struct igt_hlist_head *h);
> > > > +void igt_hlist_del(struct igt_hlist_node *n);
> > > > +void igt_hlist_del_init(struct igt_hlist_node *n);
> > > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
> > > > +void igt_hlist_add_before(struct igt_hlist_node *n,
> > > > +                         struct igt_hlist_node *next);
> > > > +void igt_hlist_add_behind(struct igt_hlist_node *n,
> > > > +                         struct igt_hlist_node *prev);
> > > > +
> > > >  #define igt_container_of(ptr, sample, member)                          \
> > > >         (__typeof__(sample))((char *)(ptr) -                            \
> > > >                                 offsetof(__typeof__(*sample), member))
> > > > @@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > >   * Safe against removal of the *current* list element. To achieve this it
> > > >   * requires an extra helper variable `tmp` with the same type as `pos`.
> > > >   */
> > > > -#define igt_list_for_each_entry_safe(pos, tmp, head, member)                   \
> > > > +
> > > > +#define igt_list_for_each_entry_safe(pos, tmp, head, member)           \
> > > >         for (pos = igt_container_of((head)->next, pos, member),         \
> > > > -            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > > +            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > >              &pos->member != (head);                                    \
> > > >              pos = tmp,                                                 \
> > > >              tmp = igt_container_of((pos)->member.next, tmp, member))
> > > > @@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > >              &pos->member != (head);                                    \
> > > >              pos = igt_container_of((pos)->member.prev, pos, member))
> > > >
> > > > +#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)           \
> > > > +       for (pos = igt_container_of((head)->prev, pos, member),         \
> > > > +            tmp = igt_container_of((pos)->member.prev, tmp, member);   \
> > > > +            &pos->member != (head);                                    \
> > > > +            pos = tmp,                                                 \
> > > > +            tmp = igt_container_of((pos)->member.prev, tmp, member))
> > > > +
> > > > +#define igt_hlist_entry_safe(ptr, sample, member) \
> > > > +       ({ typeof(ptr) ____ptr = (ptr); \
> > > > +          ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
> > > > +       })
> > > > +
> > > > +#define igt_hlist_for_each_entry(pos, head, member)                    \
> > > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > > +            pos;                                                       \
> > > > +            pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
> > > > +
> > > > +#define igt_hlist_for_each_entry_safe(pos, n, head, member)            \
> > > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > > +            pos && ({ n = pos->member.next; 1; });                     \
> > > > +            pos = igt_hlist_entry_safe(n, pos, member))
> > > >
> > > >  /* IGT custom helpers */
> > > >
> > > > @@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > >  #define igt_list_last_entry(head, type, member) \
> > > >         igt_container_of((head)->prev, (type), member)
> > > >
> > > > +#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
> > > > +
> > > >  #endif /* IGT_LIST_H */
> > > > --
> > > > 2.26.0
> > > >

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
@ 2021-03-17 19:38   ` Jason Ekstrand
  2021-03-18 10:40     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 53+ messages in thread
From: Jason Ekstrand @ 2021-03-17 19:38 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
<zbigniew.kempczynski@intel.com> wrote:
>
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>
> A simple allocator borrowed from Mesa and adapted for IGT use.
>
> By default we prefer allocating from the top of the vm address space
> (so we can catch addressing issues proactively). When
> intel_allocator_simple_create() is used we exclude the last page, as HW
> tends to hang on the render engine when the full 3D pipeline executes from
> the last page. For more control of the vm range, the user can specify it
> with intel_allocator_simple_create_full() (within the limits of the gtt size).
>
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  lib/intel_allocator_simple.c | 744 +++++++++++++++++++++++++++++++++++
>  1 file changed, 744 insertions(+)
>  create mode 100644 lib/intel_allocator_simple.c
>
> diff --git a/lib/intel_allocator_simple.c b/lib/intel_allocator_simple.c
> new file mode 100644
> index 000000000..cc207c8e9
> --- /dev/null
> +++ b/lib/intel_allocator_simple.c
> @@ -0,0 +1,744 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2021 Intel Corporation
> + */
> +
> +#include <sys/ioctl.h>
> +#include <stdlib.h>
> +#include "igt.h"
> +#include "igt_x86.h"
> +#include "intel_allocator.h"
> +#include "intel_bufops.h"
> +#include "igt_map.h"
> +
> +/*
> + * We limit allocator space to avoid hang when batch would be
> + * pinned in the last page.
> + */
> +#define RESERVED 4096
> +
> +/* Avoid compilation warning */
> +struct intel_allocator *intel_allocator_simple_create(int fd);
> +struct intel_allocator *
> +intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
> +                                  enum allocator_strategy strategy);
> +
> +struct simple_vma_heap {
> +       struct igt_list_head holes;
> +
> +       /* If true, simple_vma_heap_alloc will prefer high addresses
> +        *
> +        * Default is true.
> +        */
> +       bool alloc_high;
> +};
> +
> +struct simple_vma_hole {
> +       struct igt_list_head link;
> +       uint64_t offset;
> +       uint64_t size;
> +};
> +
> +struct intel_allocator_simple {
> +       struct igt_map objects;
> +       struct igt_map reserved;
> +       struct simple_vma_heap heap;
> +
> +       uint64_t start;
> +       uint64_t end;
> +
> +       /* statistics */
> +       uint64_t total_size;
> +       uint64_t allocated_size;
> +       uint64_t allocated_objects;
> +       uint64_t reserved_size;
> +       uint64_t reserved_areas;
> +};
> +
> +struct intel_allocator_record {
> +       uint32_t handle;
> +       uint64_t offset;
> +       uint64_t size;
> +};
> +
> +#define simple_vma_foreach_hole(_hole, _heap) \
> +       igt_list_for_each_entry(_hole, &(_heap)->holes, link)
> +
> +#define simple_vma_foreach_hole_safe(_hole, _heap, _tmp) \
> +       igt_list_for_each_entry_safe(_hole, _tmp,  &(_heap)->holes, link)
> +
> +#define simple_vma_foreach_hole_safe_rev(_hole, _heap, _tmp) \
> +       igt_list_for_each_entry_safe_reverse(_hole, _tmp,  &(_heap)->holes, link)
> +
> +#define GEN8_GTT_ADDRESS_WIDTH 48
> +#define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
> +
> +static void simple_vma_heap_validate(struct simple_vma_heap *heap)
> +{
> +       uint64_t prev_offset = 0;
> +       struct simple_vma_hole *hole;
> +
> +       simple_vma_foreach_hole(hole, heap) {
> +               igt_assert(hole->size > 0);
> +
> +               if (&hole->link == heap->holes.next) {
> +               /* This must be the top-most hole.  Assert that,
> +                * if it overflows, it overflows to 0, i.e. 2^64.
> +                */

This indent is weird.

> +                       igt_assert(hole->size + hole->offset == 0 ||
> +                                  hole->size + hole->offset > hole->offset);
> +               } else {
> +               /* This is not the top-most hole so it must not overflow and,
> +                * in fact, must be strictly lower than the top-most hole.  If
> +                * hole->size + hole->offset == prev_offset, then we failed to
> +                * join holes during a simple_vma_heap_free.
> +                */

Weird indent.

> +                       igt_assert(hole->size + hole->offset > hole->offset &&
> +                                  hole->size + hole->offset < prev_offset);
> +               }
> +               prev_offset = hole->offset;
> +       }
> +}
> +
> +
> +static void simple_vma_heap_free(struct simple_vma_heap *heap,
> +                                uint64_t offset, uint64_t size)
> +{
> +       struct simple_vma_hole *high_hole = NULL, *low_hole = NULL, *hole;
> +       bool high_adjacent, low_adjacent;
> +
> +       /* Freeing something with a size of 0 is not valid. */
> +       igt_assert(size > 0);
> +
> +       /* It's possible for offset + size to wrap around if we touch the top of
> +        * the 64-bit address space, but we cannot go any higher than 2^64.
> +        */
> +       igt_assert(offset + size == 0 || offset + size > offset);
> +
> +       simple_vma_heap_validate(heap);
> +
> +       /* Find immediately higher and lower holes if they exist. */
> +       simple_vma_foreach_hole(hole, heap) {
> +               if (hole->offset <= offset) {
> +                       low_hole = hole;
> +                       break;
> +               }
> +               high_hole = hole;
> +       }
> +
> +       if (high_hole)
> +               igt_assert(offset + size <= high_hole->offset);
> +       high_adjacent = high_hole && offset + size == high_hole->offset;
> +
> +       if (low_hole) {
> +               igt_assert(low_hole->offset + low_hole->size > low_hole->offset);
> +               igt_assert(low_hole->offset + low_hole->size <= offset);
> +       }
> +       low_adjacent = low_hole && low_hole->offset + low_hole->size == offset;
> +
> +       if (low_adjacent && high_adjacent) {
> +               /* Merge the two holes */
> +               low_hole->size += size + high_hole->size;
> +               igt_list_del(&high_hole->link);
> +               free(high_hole);
> +       } else if (low_adjacent) {
> +               /* Merge into the low hole */
> +               low_hole->size += size;
> +       } else if (high_adjacent) {
> +               /* Merge into the high hole */
> +               high_hole->offset = offset;
> +               high_hole->size += size;
> +       } else {
> +               /* Neither hole is adjacent; make a new one */
> +               hole = calloc(1, sizeof(*hole));
> +               igt_assert(hole);
> +
> +               hole->offset = offset;
> +               hole->size = size;
> +               /* Add it after the high hole so we maintain high-to-low
> +                *  ordering
> +                */
> +               if (high_hole)
> +                       igt_list_add(&hole->link, &high_hole->link);
> +               else
> +                       igt_list_add(&hole->link, &heap->holes);
> +       }
> +
> +       simple_vma_heap_validate(heap);
> +}
> +
> +static void simple_vma_heap_init(struct simple_vma_heap *heap,
> +                                uint64_t start, uint64_t size,
> +                                enum allocator_strategy strategy)
> +{
> +       IGT_INIT_LIST_HEAD(&heap->holes);
> +       simple_vma_heap_free(heap, start, size);
> +
> +       switch (strategy) {
> +       case ALLOC_STRATEGY_LOW_TO_HIGH:
> +               heap->alloc_high = false;
> +               break;
> +       case ALLOC_STRATEGY_HIGH_TO_LOW:
> +       default:
> +               heap->alloc_high = true;
> +       }
> +}
> +
> +static void simple_vma_heap_finish(struct simple_vma_heap *heap)
> +{
> +       struct simple_vma_hole *hole, *tmp;
> +
> +       simple_vma_foreach_hole_safe(hole, heap, tmp)
> +               free(hole);
> +}
> +
> +static void simple_vma_hole_alloc(struct simple_vma_hole *hole,
> +                                 uint64_t offset, uint64_t size)
> +{
> +       struct simple_vma_hole *high_hole;
> +       uint64_t waste;
> +
> +       igt_assert(hole->offset <= offset);
> +       igt_assert(hole->size >= offset - hole->offset + size);
> +
> +       if (offset == hole->offset && size == hole->size) {
> +               /* Just get rid of the hole. */
> +               igt_list_del(&hole->link);
> +               free(hole);
> +               return;
> +       }
> +
> +       igt_assert(offset - hole->offset <= hole->size - size);
> +       waste = (hole->size - size) - (offset - hole->offset);
> +       if (waste == 0) {
> +               /* We allocated at the top.  Shrink the hole down. */
> +               hole->size -= size;
> +               return;
> +       }
> +
> +       if (offset == hole->offset) {
> +               /* We allocated at the bottom. Shrink the hole up. */
> +               hole->offset += size;
> +               hole->size -= size;
> +               return;
> +       }
> +
> +   /* We allocated in the middle.  We need to split the old hole into two
> +    * holes, one high and one low.
> +    */

Weird indent

> +       high_hole = calloc(1, sizeof(*hole));
> +       igt_assert(high_hole);
> +
> +       high_hole->offset = offset + size;
> +       high_hole->size = waste;
> +
> +   /* Adjust the hole to be the amount of space left at he bottom of the
> +    * original hole.
> +    */

Weird indent

> +       hole->size = offset - hole->offset;
> +
> +   /* Place the new hole before the old hole so that the list is in order
> +    * from high to low.
> +    */

Weird indent

> +       igt_list_add_tail(&high_hole->link, &hole->link);
> +}
> +
> +static bool simple_vma_heap_alloc(struct simple_vma_heap *heap,
> +                                 uint64_t *offset, uint64_t size,
> +                                 uint64_t alignment)
> +{
> +       struct simple_vma_hole *hole, *tmp;
> +       uint64_t misalign;
> +
> +       /* The caller is expected to reject zero-size allocations */
> +       igt_assert(size > 0);
> +       igt_assert(alignment > 0);
> +
> +       simple_vma_heap_validate(heap);
> +
> +       if (heap->alloc_high) {
> +               simple_vma_foreach_hole_safe(hole, heap, tmp) {
> +                       if (size > hole->size)
> +                               continue;
> +
> +       /* Compute the offset as the highest address where a chunk of the
> +        * given size can be without going over the top of the hole.
> +        *
> +        * This calculation is known to not overflow because we know that
> +        * hole->size + hole->offset can only overflow to 0 and size > 0.
> +        */

Weird indent

> +                       *offset = (hole->size - size) + hole->offset;
> +
> +         /* Align the offset.  We align down and not up because we are
> +          *
> +          * allocating from the top of the hole and not the bottom.
> +          */

Weird indent

> +                       *offset = (*offset / alignment) * alignment;
> +
> +                       if (*offset < hole->offset)
> +                               continue;
> +
> +                       simple_vma_hole_alloc(hole, *offset, size);
> +                       simple_vma_heap_validate(heap);
> +                       return true;
> +               }
> +       } else {
> +               simple_vma_foreach_hole_safe_rev(hole, heap, tmp) {
> +                       if (size > hole->size)
> +                               continue;
> +
> +                       *offset = hole->offset;
> +
> +                       /* Align the offset */
> +                       misalign = *offset % alignment;
> +                       if (misalign) {
> +                               uint64_t pad = alignment - misalign;
> +
> +                               if (pad > hole->size - size)
> +                                       continue;
> +
> +                               *offset += pad;
> +                       }
> +
> +                       simple_vma_hole_alloc(hole, *offset, size);
> +                       simple_vma_heap_validate(heap);
> +                       return true;
> +               }
> +       }
> +
> +       /* Failed to allocate */
> +       return false;
> +}
> +
> +static void intel_allocator_simple_get_address_range(struct intel_allocator *ial,
> +                                                    uint64_t *startp,
> +                                                    uint64_t *endp)
> +{
> +       struct intel_allocator_simple *ials = ial->priv;
> +
> +       if (startp)
> +               *startp = ials->start;
> +
> +       if (endp)
> +               *endp = ials->end;
> +}
> +
> +static bool simple_vma_heap_alloc_addr(struct intel_allocator_simple *ials,
> +                                      uint64_t offset, uint64_t size)
> +{
> +       struct simple_vma_heap *heap = &ials->heap;
> +       struct simple_vma_hole *hole, *tmp;
> +       /* Allocating something with a size of 0 is not valid. */
> +       igt_assert(size > 0);
> +
> +       /* It's possible for offset + size to wrap around if we touch the top of
> +        * the 64-bit address space, but we cannot go any higher than 2^64.
> +        */
> +       igt_assert(offset + size == 0 || offset + size > offset);
> +
> +       /* Find the hole if one exists. */
> +       simple_vma_foreach_hole_safe(hole, heap, tmp) {
> +               if (hole->offset > offset)
> +                       continue;
> +
> +       /* Holes are ordered high-to-low so the first hole we find with
> +        * hole->offset <= offset is our hole.  If it's not big enough to contain the
> +        * requested range, then the allocation fails.
> +        */

Weird indent.

> +               igt_assert(hole->offset <= offset);
> +               if (hole->size < offset - hole->offset + size)
> +                       return false;
> +
> +               simple_vma_hole_alloc(hole, offset, size);
> +               return true;
> +       }
> +
> +       /* We didn't find a suitable hole */
> +       return false;
> +}
> +
> +static uint64_t intel_allocator_simple_alloc(struct intel_allocator *ial,
> +                                            uint32_t handle, uint64_t size,
> +                                            uint64_t alignment)
> +{
> +       struct intel_allocator_record *rec;
> +       struct intel_allocator_simple *ials;
> +       uint64_t offset;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +       igt_assert(handle);
> +       alignment = alignment > 0 ? alignment : 1;
> +       rec = igt_map_find(&ials->objects, &handle);
> +       if (rec) {
> +               offset = rec->offset;
> +               igt_assert(rec->size == size);
> +       } else {
> +               igt_assert(simple_vma_heap_alloc(&ials->heap, &offset,
> +                                                size, alignment));
> +               rec = malloc(sizeof(*rec));
> +               rec->handle = handle;
> +               rec->offset = offset;
> +               rec->size = size;
> +
> +               igt_map_add(&ials->objects, &rec->handle, rec);
> +               ials->allocated_objects++;
> +               ials->allocated_size += size;
> +       }
> +
> +       return offset;
> +}
> +
> +static bool intel_allocator_simple_free(struct intel_allocator *ial, uint32_t handle)
> +{
> +       struct intel_allocator_record *rec = NULL;
> +       struct intel_allocator_simple *ials;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +
> +       rec = igt_map_del(&ials->objects, &handle);
> +       if (rec) {
> +               simple_vma_heap_free(&ials->heap, rec->offset, rec->size);
> +               ials->allocated_objects--;
> +               ials->allocated_size -= rec->size;
> +               free(rec);
> +
> +               return true;
> +       }
> +
> +       return false;
> +}
> +
> +static inline bool __same(const struct intel_allocator_record *rec,
> +                         uint32_t handle, uint64_t size, uint64_t offset)
> +{
> +       return rec->handle == handle && rec->size == size &&
> +                       DECANONICAL(rec->offset) == DECANONICAL(offset);
> +}
> +
> +static bool intel_allocator_simple_is_allocated(struct intel_allocator *ial,
> +                                               uint32_t handle, uint64_t size,
> +                                               uint64_t offset)
> +{
> +       struct intel_allocator_record *rec;
> +       struct intel_allocator_simple *ials;
> +       bool same = false;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +       igt_assert(handle);
> +
> +       rec = igt_map_find(&ials->objects, &handle);
> +       if (rec && __same(rec, handle, size, offset))
> +               same = true;
> +
> +       return same;
> +}
> +
> +static bool intel_allocator_simple_reserve(struct intel_allocator *ial,
> +                                          uint32_t handle,
> +                                          uint64_t start, uint64_t end)
> +{
> +       uint64_t size = end - start;

With canonical addresses, our size calculations aren't going to be
correct if start and end are on different sides of the 47-bit
boundary.  I'm not sure how to deal with that, TBH.  Most of the time,
I think you get saved by the fact that you're only really likely to
hit that if you have a > 128 TB range which never happens.  One simple
thing we could do is

igt_assert(end >> GEN8_GTT_ADDRESS_WIDTH == start >> GEN8_GTT_ADDRESS_WIDTH);

or similar.  Same goes for the 3-4 other cases below.

> +       struct intel_allocator_record *rec = NULL;
> +       struct intel_allocator_simple *ials;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +
> +       /* clear [63:48] bits to get rid of canonical form */
> +       start = DECANONICAL(start);
> +       end = DECANONICAL(end);
> +       igt_assert(end > start || end == 0);

With always reserving the top 4K, end = 0 should never happen.

> +
> +       if (simple_vma_heap_alloc_addr(ials, start, size)) {
> +               rec = malloc(sizeof(*rec));
> +               rec->handle = handle;
> +               rec->offset = start;
> +               rec->size = size;
> +
> +               igt_map_add(&ials->reserved, &rec->offset, rec);
> +
> +               ials->reserved_areas++;
> +               ials->reserved_size += rec->size;
> +               return true;
> +       }
> +
> +       igt_debug("Failed to reserve %llx + %llx\n", (long long)start, (long long)size);
> +       return false;
> +}
> +
> +static bool intel_allocator_simple_unreserve(struct intel_allocator *ial,
> +                                            uint32_t handle,
> +                                            uint64_t start, uint64_t end)
> +{
> +       uint64_t size = end - start;
> +       struct intel_allocator_record *rec = NULL;
> +       struct intel_allocator_simple *ials;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +
> +       /* clear [63:48] bits to get rid of canonical form */
> +       start = DECANONICAL(start);
> +       end = DECANONICAL(end);
> +
> +       igt_assert(end > start || end == 0);
> +
> +       rec = igt_map_find(&ials->reserved, &start);
> +
> +       if (!rec) {
> +               igt_debug("Only reserved blocks can be unreserved\n");
> +               return false;
> +       }
> +
> +       if (rec->size != size) {
> +               igt_debug("Only the whole block unreservation allowed\n");
> +               return false;
> +       }
> +
> +       if (rec->handle != handle) {
> +               igt_debug("Handle %u doesn't match reservation handle: %u\n",
> +                        rec->handle, handle);
> +               return false;
> +       }
> +
> +       igt_map_del(&ials->reserved, &start);
> +
> +       ials->reserved_areas--;
> +       ials->reserved_size -= rec->size;
> +       free(rec);
> +       simple_vma_heap_free(&ials->heap, start, size);
> +
> +       return true;
> +}
> +
> +static bool intel_allocator_simple_is_reserved(struct intel_allocator *ial,
> +                                              uint64_t start, uint64_t end)
> +{
> +       uint64_t size = end - start;
> +       struct intel_allocator_record *rec = NULL;
> +       struct intel_allocator_simple *ials;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +
> +       /* clear [63:48] bits to get rid of canonical form */
> +       start = DECANONICAL(start);
> +       end = DECANONICAL(end);
> +
> +       igt_assert(end > start || end == 0);
> +
> +       rec = igt_map_find(&ials->reserved, &start);
> +
> +       if (!rec)
> +               return false;
> +
> +       if (rec->offset == start && rec->size == size)
> +               return true;
> +
> +       return false;
> +}
> +
> +static bool equal_8bytes(const void *key1, const void *key2)
> +{
> +       const uint64_t *k1 = key1, *k2 = key2;
> +       return *k1 == *k2;
> +}
> +
> +static void intel_allocator_simple_destroy(struct intel_allocator *ial)
> +{
> +       struct intel_allocator_simple *ials;
> +       struct igt_map_entry *pos;
> +       struct igt_map *map;
> +       int i;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       simple_vma_heap_finish(&ials->heap);
> +
> +       map = &ials->objects;
> +       igt_map_for_each(map, i, pos)
> +               free(pos->value);
> +       igt_map_free(&ials->objects);
> +
> +       map = &ials->reserved;
> +       igt_map_for_each(map, i, pos)
> +               free(pos->value);
> +       igt_map_free(&ials->reserved);
> +
> +       free(ial->priv);
> +       free(ial);
> +}
> +
> +static bool intel_allocator_simple_is_empty(struct intel_allocator *ial)
> +{
> +       struct intel_allocator_simple *ials = ial->priv;
> +
> +       igt_debug("<ial: %p, fd: %d> objects: %" PRId64
> +                 ", reserved_areas: %" PRId64 "\n",
> +                 ial, ial->fd,
> +                 ials->allocated_objects, ials->reserved_areas);
> +
> +       return !ials->allocated_objects && !ials->reserved_areas;
> +}
> +
> +static void intel_allocator_simple_print(struct intel_allocator *ial, bool full)
> +{
> +       struct intel_allocator_simple *ials;
> +       struct simple_vma_hole *hole;
> +       struct simple_vma_heap *heap;
> +       struct igt_map_entry *pos;
> +       struct igt_map *map;
> +       uint64_t total_free = 0, allocated_size = 0, allocated_objects = 0;
> +       uint64_t reserved_size = 0, reserved_areas = 0;
> +       int i;
> +
> +       igt_assert(ial);
> +       ials = (struct intel_allocator_simple *) ial->priv;
> +       igt_assert(ials);
> +       heap = &ials->heap;
> +
> +       igt_info("intel_allocator_simple <ial: %p, fd: %d> on "
> +                "[0x%"PRIx64" : 0x%"PRIx64"]:\n", ial, ial->fd,
> +                ials->start, ials->end);
> +
> +       if (full) {
> +               igt_info("holes:\n");
> +               simple_vma_foreach_hole(hole, heap) {
> +                       igt_info("offset = %"PRIu64" (0x%"PRIx64", "
> +                                "size = %"PRIu64" (0x%"PRIx64")\n",
> +                                hole->offset, hole->offset, hole->size,
> +                                hole->size);
> +                       total_free += hole->size;
> +               }
> +               igt_assert(total_free <= ials->total_size);
> +               igt_info("total_free: %" PRIx64
> +                        ", total_size: %" PRIx64
> +                        ", allocated_size: %" PRIx64
> +                        ", reserved_size: %" PRIx64 "\n",
> +                        total_free, ials->total_size, ials->allocated_size,
> +                        ials->reserved_size);
> +               igt_assert(total_free ==
> +                          ials->total_size - ials->allocated_size - ials->reserved_size);
> +
> +               igt_info("objects:\n");
> +               map = &ials->objects;
> +               igt_map_for_each(map, i, pos) {
> +                       struct intel_allocator_record *rec = pos->value;
> +
> +                       igt_info("handle = %d, offset = %"PRIu64" "
> +                               "(0x%"PRIx64", size = %"PRIu64" (0x%"PRIx64")\n",
> +                                rec->handle, rec->offset, rec->offset,
> +                                rec->size, rec->size);
> +                       allocated_objects++;
> +                       allocated_size += rec->size;
> +               }
> +               igt_assert(ials->allocated_size == allocated_size);
> +               igt_assert(ials->allocated_objects == allocated_objects);
> +
> +               igt_info("reserved areas:\n");
> +               map = &ials->reserved;
> +               igt_map_for_each(map, i, pos) {
> +                       struct intel_allocator_record *rec = pos->value;
> +
> +                       igt_info("offset = %"PRIu64" (0x%"PRIx64", "
> +                                "size = %"PRIu64" (0x%"PRIx64")\n",
> +                                rec->offset, rec->offset,
> +                                rec->size, rec->size);
> +                       reserved_areas++;
> +                       reserved_size += rec->size;
> +               }
> +               igt_assert(ials->reserved_areas == reserved_areas);
> +               igt_assert(ials->reserved_size == reserved_size);
> +       } else {
> +               simple_vma_foreach_hole(hole, heap)
> +                       total_free += hole->size;
> +       }
> +
> +       igt_info("free space: %"PRIu64"B (0x%"PRIx64") (%.2f%% full)\n"
> +                "allocated objects: %"PRIu64", reserved areas: %"PRIu64"\n",
> +                total_free, total_free,
> +                ((double) (ials->total_size - total_free) /
> +                 (double) ials->total_size) * 100,
> +                ials->allocated_objects, ials->reserved_areas);
> +}
> +
> +static struct intel_allocator *
> +__intel_allocator_simple_create(int fd, uint64_t start, uint64_t end,
> +                               enum allocator_strategy strategy)
> +{
> +       struct intel_allocator *ial;
> +       struct intel_allocator_simple *ials;
> +
> +       igt_debug("Using simple allocator\n");
> +
> +       ial = calloc(1, sizeof(*ial));
> +       igt_assert(ial);
> +
> +       ial->fd = fd;
> +       ial->get_address_range = intel_allocator_simple_get_address_range;
> +       ial->alloc = intel_allocator_simple_alloc;
> +       ial->free = intel_allocator_simple_free;
> +       ial->is_allocated = intel_allocator_simple_is_allocated;
> +       ial->reserve = intel_allocator_simple_reserve;
> +       ial->unreserve = intel_allocator_simple_unreserve;
> +       ial->is_reserved = intel_allocator_simple_is_reserved;
> +       ial->destroy = intel_allocator_simple_destroy;
> +       ial->is_empty = intel_allocator_simple_is_empty;
> +       ial->print = intel_allocator_simple_print;
> +       ials = ial->priv = malloc(sizeof(struct intel_allocator_simple));
> +       igt_assert(ials);
> +
> +       igt_map_init(&ials->objects);
> +       /* Reserved addresses hashtable is indexed by an offset */
> +       __igt_map_init(&ials->reserved, equal_8bytes, NULL, 3);

We have this same problem with Mesa.  Maybe just make the hash map
take a uint64_t key instead of a void*.  Then it'll work for both
cases easily at the cost of a little extra memory on 32-bit platforms.

Looking a bit more, I guess it's not quite as bad as the Mesa case
because you have the uint64_t key in the object you're adding so you
don't have to malloc a bunch of uint64_t's just to use as keys.

> +
> +       ials->start = start;
> +       ials->end = end;
> +       ials->total_size = end - start;
> +       simple_vma_heap_init(&ials->heap, ials->start, ials->total_size,
> +                            strategy);
> +
> +       ials->allocated_size = 0;
> +       ials->allocated_objects = 0;
> +       ials->reserved_size = 0;
> +       ials->reserved_areas = 0;
> +
> +       return ial;
> +}
> +
> +struct intel_allocator *
> +intel_allocator_simple_create(int fd)
> +{
> +       uint64_t gtt_size = gem_aperture_size(fd);
> +
> +       if (!gem_uses_full_ppgtt(fd))
> +               gtt_size /= 2;
> +       else
> +               gtt_size -= RESERVED;
> +
> +       return __intel_allocator_simple_create(fd, 0, gtt_size,
> +                                              ALLOC_STRATEGY_HIGH_TO_LOW);
> +}
> +
> +struct intel_allocator *
> +intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
> +                                  enum allocator_strategy strategy)
> +{
> +       uint64_t gtt_size = gem_aperture_size(fd);
> +
> +       igt_assert(end <= gtt_size);
> +       if (!gem_uses_full_ppgtt(fd))
> +               gtt_size /= 2;
> +       igt_assert(end - start <= gtt_size);

Don't you want just `end <= gtt_size`?  When is something only going
to use the top half?

> +
> +       return __intel_allocator_simple_create(fd, start, end, strategy);
> +}
> --
> 2.26.0
>
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 19:13         ` Zbigniew Kempczyński
@ 2021-03-17 20:44           ` Grzegorzek, Dominik
  2021-03-18 13:26             ` Grzegorzek, Dominik
  0 siblings, 1 reply; 53+ messages in thread
From: Grzegorzek, Dominik @ 2021-03-17 20:44 UTC (permalink / raw)
  To: Kempczynski, Zbigniew, jason; +Cc: igt-dev

On Wed, 2021-03-17 at 20:13 +0100, Zbigniew Kempczyński wrote:
> On Wed, Mar 17, 2021 at 01:02:53PM -0500, Jason Ekstrand wrote:
> > On Wed, Mar 17, 2021 at 12:43 PM Zbigniew Kempczyński
> > <zbigniew.kempczynski@intel.com> wrote:
> > > On Wed, Mar 17, 2021 at 11:43:38AM -0500, Jason Ekstrand wrote:
> > > > Would it be better to pull the hash table implementation from
> > > > Mesa?
> > > > Or use https://cgit.freedesktop.org/~anholt/hash_table/ which
> > > > should
> > > > be identical, though it may be a bit out-of-date.  I've poured
> > > > enough
> > > > hours of my life into finding and fixing the bugs in the Mesa
> > > > hash
> > > > table that IGT rolling its own doesn't really fill me with
> > > > happy
> > > > feelings.
> > > > 
> > > > --Jason
> > > 
> > > I'm not going to play devil's advocate; I'll just ask Dominik how
> > > confident he is about that implementation. At the least we can
> > > provide some stress tests with multithreading scenarios.
> > > 
> > > He ported the kernel hashtable, so if he didn't introduce a
> > > non-obvious bug it should work.
> > 
> > If this is, indeed, a port of the kernel hash table, then it's
> > probably ok.  Not sure how that works out on licenses, but it
> > should
> > be correct.  If this has been freshly hand-rolled expressly for
> > this
> > purpose, then it makes me nervous.
> > 
> > > Regarding Eric's implementation - we would need to adapt it, at
> > > least renaming to igt_* and fixing the code which uses it. Should
> > > take a few days max.
> > > 
> > > If you take a look at the api_intel_allocator@two-level-inception
> > > test you'll see we're stress testing that hashtable heavily from
> > > many processes / threads and we see no problem with losing data
> > > coherency in the map.
> > 
> > I'm less worried about threads, so long as it's locked, as I am
> > about
> > weird insert/remove patterns.  Most of the bugs I've had to fix in
> > mesa were because certain patterns of insert/remove with the right
> > hashes would cause it to blow up.  If most of what you do is just
> > add
> > a bunch of stuff without a lot of removal, they can be unlikely and
> > hard to find.  That said, given that it's linked list based and not
> > re-hash based, a lot of those corner cases are less likely than
> > they
> > were with the Mesa implementation.  Again, my primary concern is
> > with
> > a NEW hash table impl. :-)
> 
> I'm not sure I've understood you correctly - it is re-hash based (see
> igt_map_extend()) and it grows when necessary. It works properly as
> long as you don't forget about locking during operations (not only
> add, but find too, because it is no fun when someone pulls the rug out
> from under your feet).
> 
> All multiprocess/multithread tests in api_intel_allocator heavily
> exercise insert/remove scenarios (opening/closing the allocator just
> inserts/removes a new handle in the handles hash table, so any loss of
> consistency would be found quickly, imo).
> 
> --
> Zbigniew

To be clear, this is not an exact port of the kernel's hash table: the
one in the kernel is statically sized, but ours was based on it. As you
both noticed, it is built on a linked list, which is an exact port from
the kernel. I kept igt_map as simple as possible, just to avoid
surprises. I also ran some Valgrind tests, as Chris proposed in a
previous iteration of the series, and the results were clean. This, and
the fact that it has already been exercised in a vast number of cases
while the series was developed, makes me confident in the
implementation.

~Dominik

> 
> > --Jason
> > 
> > > --
> > > Zbigniew
> > > > On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
> > > > <zbigniew.kempczynski@intel.com> wrote:
> > > > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > > 
> > > > > Double linked lists with a single pointer list head implementation,
> > > > > based on a similar one in the kernel.
> > > > > 
> > > > > Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > > ---
> > > > >  lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > > > >  lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
> > > > >  2 files changed, 120 insertions(+), 2 deletions(-)
> > > > > 
> > > > > diff --git a/lib/igt_list.c b/lib/igt_list.c
> > > > > index 37ae139c4..43200f9b3 100644
> > > > > --- a/lib/igt_list.c
> > > > > +++ b/lib/igt_list.c
> > > > > @@ -22,6 +22,7 @@
> > > > >   *
> > > > >   */
> > > > > 
> > > > > +#include "assert.h"
> > > > >  #include "igt_list.h"
> > > > > 
> > > > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
> > > > > @@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
> > > > >  {
> > > > >         return head->next == head;
> > > > >  }
> > > > > +
> > > > > +void igt_hlist_init(struct igt_hlist_node *h)
> > > > > +{
> > > > > +       h->next = NULL;
> > > > > +       h->pprev = NULL;
> > > > > +}
> > > > > +
> > > > > +int igt_hlist_unhashed(const struct igt_hlist_node *h)
> > > > > +{
> > > > > +       return !h->pprev;
> > > > > +}
> > > > > +
> > > > > +int igt_hlist_empty(const struct igt_hlist_head *h)
> > > > > +{
> > > > > +       return !h->first;
> > > > > +}
> > > > > +
> > > > > +static void __igt_hlist_del(struct igt_hlist_node *n)
> > > > > +{
> > > > > +       struct igt_hlist_node *next = n->next;
> > > > > +       struct igt_hlist_node **pprev = n->pprev;
> > > > > +
> > > > > +       *pprev = next;
> > > > > +       if (next)
> > > > > +               next->pprev = pprev;
> > > > > +}
> > > > > +
> > > > > +void igt_hlist_del(struct igt_hlist_node *n)
> > > > > +{
> > > > > +       __igt_hlist_del(n);
> > > > > +       n->next = NULL;
> > > > > +       n->pprev = NULL;
> > > > > +}
> > > > > +
> > > > > +void igt_hlist_del_init(struct igt_hlist_node *n)
> > > > > +{
> > > > > +       if (!igt_hlist_unhashed(n)) {
> > > > > +               __igt_hlist_del(n);
> > > > > +               igt_hlist_init(n);
> > > > > +       }
> > > > > +}
> > > > > +
> > > > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
> > > > > +{
> > > > > +       struct igt_hlist_node *first = h->first;
> > > > > +
> > > > > +       n->next = first;
> > > > > +       if (first)
> > > > > +               first->pprev = &n->next;
> > > > > +       h->first = n;
> > > > > +       n->pprev = &h->first;
> > > > > +}
> > > > > +
> > > > > +void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
> > > > > +{
> > > > > +       assert(next);
> > > > > +       n->pprev = next->pprev;
> > > > > +       n->next = next;
> > > > > +       next->pprev = &n->next;
> > > > > +       *(n->pprev) = n;
> > > > > +}
> > > > > +
> > > > > +void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
> > > > > +{
> > > > > +       n->next = prev->next;
> > > > > +       prev->next = n;
> > > > > +       n->pprev = &prev->next;
> > > > > +
> > > > > +       if (n->next)
> > > > > +               n->next->pprev  = &n->next;
> > > > > +}
> > > > > diff --git a/lib/igt_list.h b/lib/igt_list.h
> > > > > index cc93d7a0d..78e761e05 100644
> > > > > --- a/lib/igt_list.h
> > > > > +++ b/lib/igt_list.h
> > > > > @@ -40,6 +40,10 @@
> > > > >   * igt_list is a doubly-linked list where an instance of igt_list_head is a
> > > > >   * head sentinel and has to be initialized.
> > > > >   *
> > > > > + * igt_hlist is also a doubly-linked list, but with a single pointer list head.
> > > > > + * Mostly useful for hash tables where the two pointer list head is
> > > > > + * too wasteful. You lose the ability to access the tail in O(1).
> > > > > + *
> > > > >   * Example usage:
> > > > >   *
> > > > >   * |[<!-- language="C" -->
> > > > > @@ -71,6 +75,13 @@ struct igt_list_head {
> > > > >         struct igt_list_head *next;
> > > > >  };
> > > > > 
> > > > > +struct igt_hlist_head {
> > > > > +       struct igt_hlist_node *first;
> > > > > +};
> > > > > +
> > > > > +struct igt_hlist_node {
> > > > > +       struct igt_hlist_node *next, **pprev;
> > > > > +};
> > > > > 
> > > > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
> > > > >  void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
> > > > > @@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
> > > > >  int igt_list_length(const struct igt_list_head *head);
> > > > >  bool igt_list_empty(const struct igt_list_head *head);
> > > > > 
> > > > > +void igt_hlist_init(struct igt_hlist_node *h);
> > > > > +int igt_hlist_unhashed(const struct igt_hlist_node *h);
> > > > > +int igt_hlist_empty(const struct igt_hlist_head *h);
> > > > > +void igt_hlist_del(struct igt_hlist_node *n);
> > > > > +void igt_hlist_del_init(struct igt_hlist_node *n);
> > > > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
> > > > > +void igt_hlist_add_before(struct igt_hlist_node *n,
> > > > > +                         struct igt_hlist_node *next);
> > > > > +void igt_hlist_add_behind(struct igt_hlist_node *n,
> > > > > +                         struct igt_hlist_node *prev);
> > > > > +
> > > > >  #define igt_container_of(ptr, sample, member)                          \
> > > > >         (__typeof__(sample))((char *)(ptr) -                            \
> > > > >                                 offsetof(__typeof__(*sample), member))
> > > > > @@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > > >   * Safe against removal of the *current* list element. To achieve this it
> > > > >   * requires an extra helper variable `tmp` with the same type as `pos`.
> > > > >   */
> > > > > -#define igt_list_for_each_entry_safe(pos, tmp, head, member)                   \
> > > > > +
> > > > > +#define igt_list_for_each_entry_safe(pos, tmp, head, member)           \
> > > > >         for (pos = igt_container_of((head)->next, pos, member),         \
> > > > > -            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > > > +            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > > >              &pos->member != (head);                                    \
> > > > >              pos = tmp,                                                 \
> > > > >              tmp = igt_container_of((pos)->member.next, tmp, member))
> > > > > @@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > > >              &pos->member != (head);                                    \
> > > > >              pos = igt_container_of((pos)->member.prev, pos, member))
> > > > > 
> > > > > +#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)           \
> > > > > +       for (pos = igt_container_of((head)->prev, pos, member),         \
> > > > > +            tmp = igt_container_of((pos)->member.prev, tmp, member);   \
> > > > > +            &pos->member != (head);                                    \
> > > > > +            pos = tmp,                                                 \
> > > > > +            tmp = igt_container_of((pos)->member.prev, tmp, member))
> > > > > +
> > > > > +#define igt_hlist_entry_safe(ptr, sample, member) \
> > > > > +       ({ typeof(ptr) ____ptr = (ptr); \
> > > > > +          ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
> > > > > +       })
> > > > > +
> > > > > +#define igt_hlist_for_each_entry(pos, head, member)                    \
> > > > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > > > +            pos;                                                       \
> > > > > +            pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
> > > > > +
> > > > > +#define igt_hlist_for_each_entry_safe(pos, n, head, member)            \
> > > > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > > > +            pos && ({ n = pos->member.next; 1; });                     \
> > > > > +            pos = igt_hlist_entry_safe(n, pos, member))
> > > > > 
> > > > >  /* IGT custom helpers */
> > > > > 
> > > > > @@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > > >  #define igt_list_last_entry(head, type, member) \
> > > > >         igt_container_of((head)->prev, (type), member)
> > > > > 
> > > > > +#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
> > > > > +
> > > > >  #endif /* IGT_LIST_H */
> > > > > --
> > > > > 2.26.0
> > > > > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 12/35] lib/intel_bufops: Change size from 32->64 bit
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
@ 2021-03-17 21:33   ` Jason Ekstrand
  0 siblings, 0 replies; 53+ messages in thread
From: Jason Ekstrand @ 2021-03-17 21:33 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: IGT GPU Tools

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
<zbigniew.kempczynski@intel.com> wrote:
>
> 1. Buffer size from 32 -> 64 bit was changed to be consistent
>    with drm code.
> 2. Remember buffer creation size to avoid recalculation.
>
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  lib/intel_bufops.c        | 17 ++++++++---------
>  lib/intel_bufops.h        |  7 +++++--
>  tests/i915/api_intel_bb.c |  6 +++---
>  3 files changed, 16 insertions(+), 14 deletions(-)
>
> diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
> index a50035e40..eb5ac4dad 100644
> --- a/lib/intel_bufops.c
> +++ b/lib/intel_bufops.c
> @@ -711,7 +711,7 @@ static void __intel_buf_init(struct buf_ops *bops,
>                              uint32_t req_tiling, uint32_t compression)
>  {
>         uint32_t tiling = req_tiling;
> -       uint32_t size;
> +       uint64_t size;
>         uint32_t devid;
>         int tile_width;
>         int align_h = 1;
> @@ -776,6 +776,9 @@ static void __intel_buf_init(struct buf_ops *bops,
>                 size = buf->surface[0].stride * ALIGN(height, align_h);
>         }
>
> +       /* Store real bo size to avoid mistakes in calculating it again */
> +       buf->size = size;
> +
>         if (handle)
>                 buf->handle = handle;
>         else
> @@ -1001,8 +1004,8 @@ void intel_buf_flush_and_unmap(struct intel_buf *buf)
>  void intel_buf_print(const struct intel_buf *buf)
>  {
>         igt_info("[name: %s]\n", buf->name);
> -       igt_info("[%u]: w: %u, h: %u, stride: %u, size: %u, bo-size: %u, "
> -                "bpp: %u, tiling: %u, compress: %u\n",
> +       igt_info("[%u]: w: %u, h: %u, stride: %u, size: %" PRIx64
> +                ", bo-size: %" PRIx64 ", bpp: %u, tiling: %u, compress: %u\n",
>                  buf->handle, intel_buf_width(buf), intel_buf_height(buf),
>                  buf->surface[0].stride, buf->surface[0].size,
>                  intel_buf_bo_size(buf), buf->bpp,
> @@ -1208,13 +1211,9 @@ static void idempotency_selftest(struct buf_ops *bops, uint32_t tiling)
>         buf_ops_set_software_tiling(bops, tiling, false);
>  }
>
> -uint32_t intel_buf_bo_size(const struct intel_buf *buf)
> +uint64_t intel_buf_bo_size(const struct intel_buf *buf)
>  {
> -       int offset = CCS_OFFSET(buf) ?: buf->surface[0].size;
> -       int ccs_size =
> -               buf->compression ? CCS_SIZE(buf->bops->intel_gen, buf) : 0;
> -
> -       return offset + ccs_size;
> +       return buf->size;
>  }
>
>  static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency)
> diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
> index 8debe7f22..5619fc6fa 100644
> --- a/lib/intel_bufops.h
> +++ b/lib/intel_bufops.h
> @@ -9,10 +9,13 @@ struct buf_ops;
>
>  #define INTEL_BUF_INVALID_ADDRESS (-1ull)
>  #define INTEL_BUF_NAME_MAXSIZE 32
> +#define INVALID_ADDR(x) ((x) == INTEL_BUF_INVALID_ADDRESS)
> +
>  struct intel_buf {
>         struct buf_ops *bops;
>         bool is_owner;
>         uint32_t handle;
> +       uint64_t size;
>         uint32_t tiling;
>         uint32_t bpp;
>         uint32_t compression;
> @@ -23,7 +26,7 @@ struct intel_buf {
>         struct {
>                 uint32_t offset;
>                 uint32_t stride;
> -               uint32_t size;
> +               uint64_t size;
>         } surface[2];
>         struct {
>                 uint32_t offset;
> @@ -88,7 +91,7 @@ intel_buf_ccs_height(int gen, const struct intel_buf *buf)
>         return DIV_ROUND_UP(intel_buf_height(buf), 512) * 32;
>  }
>
> -uint32_t intel_buf_bo_size(const struct intel_buf *buf);
> +uint64_t intel_buf_bo_size(const struct intel_buf *buf);
>
>  struct buf_ops *buf_ops_create(int fd);
>  struct buf_ops *buf_ops_create_with_selftest(int fd);
> diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
> index cc1d1be6e..14bfeadb3 100644
> --- a/tests/i915/api_intel_bb.c
> +++ b/tests/i915/api_intel_bb.c
> @@ -123,9 +123,9 @@ static void print_buf(struct intel_buf *buf, const char *name)
>
>         ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
>                                         buf->surface[0].size, PROT_READ);
> -       igt_debug("[%s] Buf handle: %d, size: %d, v: 0x%02x, presumed_addr: %p\n",
> +       igt_debug("[%s] Buf handle: %d, size: %" PRIx64 ", v: 0x%02x, presumed_addr: %p\n",
>                   name, buf->handle, buf->surface[0].size, ptr[0],
> -                       from_user_pointer(buf->addr.offset));
> +                 from_user_pointer(buf->addr.offset));
>         munmap(ptr, buf->surface[0].size);
>  }
>
> @@ -677,7 +677,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
>         if (ret != Z_OK) {
>                 igt_warn("error compressing, ret: %d\n", ret);
>         } else {
> -               igt_info("compressed %u -> %lu\n",
> +               igt_info("compressed %" PRIx64 " -> %lu\n",
>                          buf->surface[0].size, outsize);
>
>                 igt_info("--- %s ---\n", name);
> --
> 2.26.0
>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
@ 2021-03-17 21:36   ` Jason Ekstrand
  2021-03-18  7:32     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 53+ messages in thread
From: Jason Ekstrand @ 2021-03-17 21:36 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
<zbigniew.kempczynski@intel.com> wrote:
>
> For some cases (fb with compression) fb size is bigger than calculated
> intel_buf what lead to execbuf failure when allocator is used
> along with EXEC_OBJECT_PINNED flag set for all objects.
>
> We need to create intel_buf with size equal to fb so new function
> in which we pass handle and size is required.
>
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  lib/intel_bufops.c | 33 ++++++++++++++++++++++++++++-----
>  lib/intel_bufops.h |  7 +++++++
>  2 files changed, 35 insertions(+), 5 deletions(-)
>
> diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
> index eb5ac4dad..d8eb64e3a 100644
> --- a/lib/intel_bufops.c
> +++ b/lib/intel_bufops.c
> @@ -708,7 +708,8 @@ static void __intel_buf_init(struct buf_ops *bops,
>                              uint32_t handle,
>                              struct intel_buf *buf,
>                              int width, int height, int bpp, int alignment,
> -                            uint32_t req_tiling, uint32_t compression)
> +                            uint32_t req_tiling, uint32_t compression,
> +                            uint64_t bo_size)
>  {
>         uint32_t tiling = req_tiling;
>         uint64_t size;
> @@ -758,7 +759,7 @@ static void __intel_buf_init(struct buf_ops *bops,
>                 buf->ccs[0].offset = buf->surface[0].stride * ALIGN(height, 32);
>                 buf->ccs[0].stride = aux_width;
>
> -               size = buf->ccs[0].offset + aux_width * aux_height;
> +               size = max(bo_size, buf->ccs[0].offset + aux_width * aux_height);
>         } else {
>                 if (tiling) {
>                         devid =  intel_get_drm_devid(bops->fd);
> @@ -773,7 +774,7 @@ static void __intel_buf_init(struct buf_ops *bops,
>                 buf->tiling = tiling;
>                 buf->bpp = bpp;
>
> -               size = buf->surface[0].stride * ALIGN(height, align_h);
> +               size = max(bo_size, buf->surface[0].stride * ALIGN(height, align_h));

Do we want to take a max() here or do something like this afterwards:

if (bo_size > 0) {
    igt_assert(bo_size >= size);
    buf->size = bo_size;
} else {
    buf->size = size;
}

>         }
>
>         /* Store real bo size to avoid mistakes in calculating it again */
> @@ -809,7 +810,7 @@ void intel_buf_init(struct buf_ops *bops,
>                     uint32_t tiling, uint32_t compression)
>  {
>         __intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
> -                        tiling, compression);
> +                        tiling, compression, 0);
>
>         intel_buf_set_ownership(buf, true);
>  }
> @@ -858,7 +859,7 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
>                                  uint32_t req_tiling, uint32_t compression)
>  {
>         __intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
> -                        req_tiling, compression);
> +                        req_tiling, compression, 0);
>  }
>
>  /**
> @@ -927,6 +928,28 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
>         return buf;
>  }
>
> +struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
> +                                                        uint32_t handle,
> +                                                        int width, int height,
> +                                                        int bpp, int alignment,
> +                                                        uint32_t req_tiling,
> +                                                        uint32_t compression,
> +                                                        uint64_t size)
> +{
> +       struct intel_buf *buf;
> +
> +       igt_assert(bops);
> +
> +       buf = calloc(1, sizeof(*buf));
> +       igt_assert(buf);
> +
> +       __intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
> +                        req_tiling, compression, size);
> +
> +       return buf;
> +}
> +
> +
>  /**
>   * intel_buf_destroy
>   * @buf: intel_buf
> diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
> index 5619fc6fa..54480bff6 100644
> --- a/lib/intel_bufops.h
> +++ b/lib/intel_bufops.h
> @@ -139,6 +139,13 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
>                                                 uint32_t req_tiling,
>                                                 uint32_t compression);
>
> +struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
> +                                                        uint32_t handle,
> +                                                        int width, int height,
> +                                                        int bpp, int alignment,
> +                                                        uint32_t req_tiling,
> +                                                        uint32_t compression,
> +                                                        uint64_t size);
>  void intel_buf_destroy(struct intel_buf *buf);
>
>  void *intel_buf_cpu_map(struct intel_buf *buf, bool write);
> --
> 2.26.0
>
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function
  2021-03-17 21:36   ` Jason Ekstrand
@ 2021-03-18  7:32     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-18  7:32 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 04:36:54PM -0500, Jason Ekstrand wrote:
> On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
> <zbigniew.kempczynski@intel.com> wrote:
> >
> > In some cases (fb with compression) the fb size is bigger than the
> > calculated intel_buf size, which leads to an execbuf failure when the
> > allocator is used along with the EXEC_OBJECT_PINNED flag set for all
> > objects.
> >
> > We need to create the intel_buf with a size equal to the fb, so a new
> > function in which we pass a handle and size is required.
> >
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  lib/intel_bufops.c | 33 ++++++++++++++++++++++++++++-----
> >  lib/intel_bufops.h |  7 +++++++
> >  2 files changed, 35 insertions(+), 5 deletions(-)
> >
> > diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
> > index eb5ac4dad..d8eb64e3a 100644
> > --- a/lib/intel_bufops.c
> > +++ b/lib/intel_bufops.c
> > @@ -708,7 +708,8 @@ static void __intel_buf_init(struct buf_ops *bops,
> >                              uint32_t handle,
> >                              struct intel_buf *buf,
> >                              int width, int height, int bpp, int alignment,
> > -                            uint32_t req_tiling, uint32_t compression)
> > +                            uint32_t req_tiling, uint32_t compression,
> > +                            uint64_t bo_size)
> >  {
> >         uint32_t tiling = req_tiling;
> >         uint64_t size;
> > @@ -758,7 +759,7 @@ static void __intel_buf_init(struct buf_ops *bops,
> >                 buf->ccs[0].offset = buf->surface[0].stride * ALIGN(height, 32);
> >                 buf->ccs[0].stride = aux_width;
> >
> > -               size = buf->ccs[0].offset + aux_width * aux_height;
> > +               size = max(bo_size, buf->ccs[0].offset + aux_width * aux_height);
> >         } else {
> >                 if (tiling) {
> >                         devid =  intel_get_drm_devid(bops->fd);
> > @@ -773,7 +774,7 @@ static void __intel_buf_init(struct buf_ops *bops,
> >                 buf->tiling = tiling;
> >                 buf->bpp = bpp;
> >
> > -               size = buf->surface[0].stride * ALIGN(height, align_h);
> > +               size = max(bo_size, buf->surface[0].stride * ALIGN(height, align_h));
> 
> Do we want to take a max() here or do something like this afterwards:
> 
> if (bo_size > 0) {
>     igt_assert(bo_size >= size);
>     buf->size = bo_size;
> } else {
>     buf->size = size;
> }

Agreed. If the caller really wants to pass bo_size, we expect them to at
least provide a valid size (consistent with width/height/bpp).

This is a cosmetic change which provides better input-data protection,
so I dare to keep Chris's r-b here. If you object, I'll remove it in the
next series.
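The agreed check can be sketched as a standalone helper (the function
name and shape are illustrative only, not the actual __intel_buf_init()
code):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the agreed size selection: bo_size == 0 means
 * "compute the size from the surface layout"; a non-zero bo_size is
 * trusted, but must not be smaller than the computed layout size. */
static uint64_t resolve_buf_size(uint64_t computed_size, uint64_t bo_size)
{
	if (bo_size > 0) {
		/* Caller knows better (e.g. fb with compression), but
		 * must not shrink below what the layout requires. */
		assert(bo_size >= computed_size);
		return bo_size;
	}
	return computed_size;
}
```

This keeps an undersized caller-provided size from silently truncating
the buffer; it fails loudly at init time instead.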

--
Zbigniew


> 
> >         }
> >
> >         /* Store real bo size to avoid mistakes in calculating it again */
> > @@ -809,7 +810,7 @@ void intel_buf_init(struct buf_ops *bops,
> >                     uint32_t tiling, uint32_t compression)
> >  {
> >         __intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
> > -                        tiling, compression);
> > +                        tiling, compression, 0);
> >
> >         intel_buf_set_ownership(buf, true);
> >  }
> > @@ -858,7 +859,7 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
> >                                  uint32_t req_tiling, uint32_t compression)
> >  {
> >         __intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
> > -                        req_tiling, compression);
> > +                        req_tiling, compression, 0);
> >  }
> >
> >  /**
> > @@ -927,6 +928,28 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
> >         return buf;
> >  }
> >
> > +struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
> > +                                                        uint32_t handle,
> > +                                                        int width, int height,
> > +                                                        int bpp, int alignment,
> > +                                                        uint32_t req_tiling,
> > +                                                        uint32_t compression,
> > +                                                        uint64_t size)
> > +{
> > +       struct intel_buf *buf;
> > +
> > +       igt_assert(bops);
> > +
> > +       buf = calloc(1, sizeof(*buf));
> > +       igt_assert(buf);
> > +
> > +       __intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
> > +                        req_tiling, compression, size);
> > +
> > +       return buf;
> > +}
> > +
> > +
> >  /**
> >   * intel_buf_destroy
> >   * @buf: intel_buf
> > diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
> > index 5619fc6fa..54480bff6 100644
> > --- a/lib/intel_bufops.h
> > +++ b/lib/intel_bufops.h
> > @@ -139,6 +139,13 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
> >                                                 uint32_t req_tiling,
> >                                                 uint32_t compression);
> >
> > +struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
> > +                                                        uint32_t handle,
> > +                                                        int width, int height,
> > +                                                        int bpp, int alignment,
> > +                                                        uint32_t req_tiling,
> > +                                                        uint32_t compression,
> > +                                                        uint64_t size);
> >  void intel_buf_destroy(struct intel_buf *buf);
> >
> >  void *intel_buf_cpu_map(struct intel_buf *buf, bool write);
> > --
> > 2.26.0
> >


* Re: [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid
  2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid Zbigniew Kempczyński
@ 2021-03-18  9:07   ` Petri Latvala
  2021-03-18 13:48     ` Zbigniew Kempczyński
  0 siblings, 1 reply; 53+ messages in thread
From: Petri Latvala @ 2021-03-18  9:07 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Wed, Mar 17, 2021 at 03:45:39PM +0100, Zbigniew Kempczyński wrote:
> Introduce variables which can decrease the number of getpid()/gettid()
> calls, especially for the allocator, which must be aware of the method
> of acquiring addresses.
> 
> When a child is spawned using igt_fork() we can control its
> initialization and prepare child_pid implicitly. Tracking child_tid
> requires our intervention in the code, doing something like this:
> 
> if (child_tid == -1)
> 	child_tid = gettid()
> 
> The variable was created for use in TLS, so each thread is created
> with the variable set to -1. This gives each thread its own "copy"
> and there's no risk of using another thread's tid. For each forked
> child we reassign -1 to child_tid to avoid using an already-set variable.
> 
> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Petri Latvala <petri.latvala@intel.com>
> Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
> ---
>  lib/igt_core.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/lib/igt_core.c b/lib/igt_core.c
> index f9dfaa0dd..2b4182f16 100644
> --- a/lib/igt_core.c
> +++ b/lib/igt_core.c
> @@ -306,6 +306,10 @@ int num_test_children;
>  int test_children_sz;
>  bool test_child;
>  
> +/* For allocator purposes */
> +pid_t child_pid  = -1;
> +__thread pid_t child_tid  = -1;
> +
>  enum {
>  	/*
>  	 * Let the first values be used by individual tests so options don't
> @@ -2302,6 +2306,8 @@ bool __igt_fork(void)
>  	case 0:
>  		test_child = true;
>  		pthread_mutex_init(&print_mutex, NULL);
> +		child_pid = getpid();
> +		child_tid = -1;
>  		exit_handler_count = 0;
>  		reset_helper_process_list();
>  		oom_adjust_for_doom();


What about igt_fork_helper?
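To illustrate the concern (a toy sketch, not IGT code): a pid cached
before fork() survives into the child, so every fork wrapper -
igt_fork_helper included - would need the same reset that __igt_fork()
does:

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Toy version of the cached-pid pattern; child_pid and cached_pid()
 * are illustrative names, not the actual IGT symbols. */
static pid_t child_pid = -1;

static pid_t cached_pid(void)
{
	if (child_pid == -1)
		child_pid = getpid();
	return child_pid;
}

/* Returns 1 if the forked child observes the parent's stale cached pid. */
static int child_sees_stale_pid(void)
{
	int status;
	pid_t pid;

	cached_pid();		/* parent caches its own pid */
	pid = fork();
	if (pid == 0)		/* child inherits the parent's cache */
		_exit(cached_pid() == getppid() ? 1 : 0);
	waitpid(pid, &status, 0);
	return WEXITSTATUS(status);
}
```

Any fork path that skips the reset leaves cached_pid() reporting the
parent's pid inside the child.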

-- 
Petri Latvala


* Re: [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator
  2021-03-17 19:38   ` Jason Ekstrand
@ 2021-03-18 10:40     ` Zbigniew Kempczyński
  0 siblings, 0 replies; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-18 10:40 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: IGT GPU Tools

On Wed, Mar 17, 2021 at 02:38:35PM -0500, Jason Ekstrand wrote:

<cut>

I've addressed all the weird indentation you noticed.
I've looked at that code so many times that a fresh
pair of eyes was definitely helpful.

> +
> > +static bool intel_allocator_simple_is_allocated(struct intel_allocator *ial,
> > +                                               uint32_t handle, uint64_t size,
> > +                                               uint64_t offset)
> > +{
> > +       struct intel_allocator_record *rec;
> > +       struct intel_allocator_simple *ials;
> > +       bool same = false;
> > +
> > +       igt_assert(ial);
> > +       ials = (struct intel_allocator_simple *) ial->priv;
> > +       igt_assert(ials);
> > +       igt_assert(handle);
> > +
> > +       rec = igt_map_find(&ials->objects, &handle);
> > +       if (rec && __same(rec, handle, size, offset))
> > +               same = true;
> > +
> > +       return same;
> > +}
> > +
> > +static bool intel_allocator_simple_reserve(struct intel_allocator *ial,
> > +                                          uint32_t handle,
> > +                                          uint64_t start, uint64_t end)
> > +{
> > +       uint64_t size = end - start;
> 
> With canonical addresses, our size calculations aren't going to be
> correct if start and end are on different sides of the 47-bit
> boundary.  I'm not sure how to deal with that, TBH.  Most of the time,
> I think you get saved by the fact that you're only really likely to
> hit that if you have a > 128 TB range which never happens.  One simple
> thing we could do is
> 
> igt_assert(end >> GEN8_GT_ADDRESS_WIDTH == start >> GEN8_GT_ADDRESS_WIDTH);
> 
> or similar.  Same goes for the 3-4 other cases below.

What would you say to:

static uint64_t get_size(uint64_t start, uint64_t end)
{
	end = end ? end : 1ull << GEN8_GTT_ADDRESS_WIDTH;

	return end - start;
}

then

	igt_assert(end > start || end == 0);
	size = get_size(start, end);
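As a standalone sketch (assuming GEN8_GTT_ADDRESS_WIDTH is 48), the
helper would behave like this:

```c
#include <assert.h>
#include <stdint.h>

#define GEN8_GTT_ADDRESS_WIDTH 48

/* end == 0 denotes "end of the 48-bit GTT", so [start, 0) spans up to
 * the top of the address space rather than yielding a negative size. */
static uint64_t get_size(uint64_t start, uint64_t end)
{
	end = end ? end : 1ull << GEN8_GTT_ADDRESS_WIDTH;

	return end - start;
}
```

Combined with the igt_assert(end > start || end == 0) above, this keeps
ranges that run up to the very last page (including it) representable.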

> 
> > +       struct intel_allocator_record *rec = NULL;
> > +       struct intel_allocator_simple *ials;
> > +
> > +       igt_assert(ial);
> > +       ials = (struct intel_allocator_simple *) ial->priv;
> > +       igt_assert(ials);
> > +
> > +       /* clear [63:48] bits to get rid of canonical form */
> > +       start = DECANONICAL(start);
> > +       end = DECANONICAL(end);
> > +       igt_assert(end > start || end == 0);
> 
> With always reserving the top 4K, end = 0 should never happen.

That's true, but it would prevent us from catching problems with the
last page - the GPU can hang when the 3D pipeline runs a batch on that
page. Take a look at:

https://patchwork.freedesktop.org/patch/424031/?series=82954&rev=26

Daniel requested removing that test because it hangs the GPU on
glk/kbl/skl, so we will likely want to check that again in other cases
in the future. So totally excluding the last page is not what we want,
imo. For "normal" usage I want to skip that page by default, but I
still want to have the possibility of using it in the "full" version.

--
Zbigniew

> 
> > +
> > +       if (simple_vma_heap_alloc_addr(ials, start, size)) {
> > +               rec = malloc(sizeof(*rec));
> > +               rec->handle = handle;
> > +               rec->offset = start;
> > +               rec->size = size;
> > +
> > +               igt_map_add(&ials->reserved, &rec->offset, rec);
> > +
> > +               ials->reserved_areas++;
> > +               ials->reserved_size += rec->size;
> > +               return true;
> > +       }
> > +
> > +       igt_debug("Failed to reserve %llx + %llx\n", (long long)start, (long long)size);
> > +       return false;
> > +}
> > +
> > +static bool intel_allocator_simple_unreserve(struct intel_allocator *ial,
> > +                                            uint32_t handle,
> > +                                            uint64_t start, uint64_t end)
> > +{
> > +       uint64_t size = end - start;
> > +       struct intel_allocator_record *rec = NULL;
> > +       struct intel_allocator_simple *ials;
> > +
> > +       igt_assert(ial);
> > +       ials = (struct intel_allocator_simple *) ial->priv;
> > +       igt_assert(ials);
> > +
> > +       /* clear [63:48] bits to get rid of canonical form */
> > +       start = DECANONICAL(start);
> > +       end = DECANONICAL(end);
> > +
> > +       igt_assert(end > start || end == 0);
> > +
> > +       rec = igt_map_find(&ials->reserved, &start);
> > +
> > +       if (!rec) {
> > +               igt_debug("Only reserved blocks can be unreserved\n");
> > +               return false;
> > +       }
> > +
> > +       if (rec->size != size) {
> > +               igt_debug("Only the whole block unreservation allowed\n");
> > +               return false;
> > +       }
> > +
> > +       if (rec->handle != handle) {
> > +               igt_debug("Handle %u doesn't match reservation handle: %u\n",
> > +                        rec->handle, handle);
> > +               return false;
> > +       }
> > +
> > +       igt_map_del(&ials->reserved, &start);
> > +
> > +       ials->reserved_areas--;
> > +       ials->reserved_size -= rec->size;
> > +       free(rec);
> > +       simple_vma_heap_free(&ials->heap, start, size);
> > +
> > +       return true;
> > +}
> > +
> > +static bool intel_allocator_simple_is_reserved(struct intel_allocator *ial,
> > +                                              uint64_t start, uint64_t end)
> > +{
> > +       uint64_t size = end - start;
> > +       struct intel_allocator_record *rec = NULL;
> > +       struct intel_allocator_simple *ials;
> > +
> > +       igt_assert(ial);
> > +       ials = (struct intel_allocator_simple *) ial->priv;
> > +       igt_assert(ials);
> > +
> > +       /* clear [63:48] bits to get rid of canonical form */
> > +       start = DECANONICAL(start);
> > +       end = DECANONICAL(end);
> > +
> > +       igt_assert(end > start || end == 0);
> > +
> > +       rec = igt_map_find(&ials->reserved, &start);
> > +
> > +       if (!rec)
> > +               return false;
> > +
> > +       if (rec->offset == start && rec->size == size)
> > +               return true;
> > +
> > +       return false;
> > +}
> > +
> > +static bool equal_8bytes(const void *key1, const void *key2)
> > +{
> > +       const uint64_t *k1 = key1, *k2 = key2;
> > +       return *k1 == *k2;
> > +}
> > +
> > +static void intel_allocator_simple_destroy(struct intel_allocator *ial)
> > +{
> > +       struct intel_allocator_simple *ials;
> > +       struct igt_map_entry *pos;
> > +       struct igt_map *map;
> > +       int i;
> > +
> > +       igt_assert(ial);
> > +       ials = (struct intel_allocator_simple *) ial->priv;
> > +       simple_vma_heap_finish(&ials->heap);
> > +
> > +       map = &ials->objects;
> > +       igt_map_for_each(map, i, pos)
> > +               free(pos->value);
> > +       igt_map_free(&ials->objects);
> > +
> > +       map = &ials->reserved;
> > +       igt_map_for_each(map, i, pos)
> > +               free(pos->value);
> > +       igt_map_free(&ials->reserved);
> > +
> > +       free(ial->priv);
> > +       free(ial);
> > +}
> > +
> > +static bool intel_allocator_simple_is_empty(struct intel_allocator *ial)
> > +{
> > +       struct intel_allocator_simple *ials = ial->priv;
> > +
> > +       igt_debug("<ial: %p, fd: %d> objects: %" PRId64
> > +                 ", reserved_areas: %" PRId64 "\n",
> > +                 ial, ial->fd,
> > +                 ials->allocated_objects, ials->reserved_areas);
> > +
> > +       return !ials->allocated_objects && !ials->reserved_areas;
> > +}
> > +
> > +static void intel_allocator_simple_print(struct intel_allocator *ial, bool full)
> > +{
> > +       struct intel_allocator_simple *ials;
> > +       struct simple_vma_hole *hole;
> > +       struct simple_vma_heap *heap;
> > +       struct igt_map_entry *pos;
> > +       struct igt_map *map;
> > +       uint64_t total_free = 0, allocated_size = 0, allocated_objects = 0;
> > +       uint64_t reserved_size = 0, reserved_areas = 0;
> > +       int i;
> > +
> > +       igt_assert(ial);
> > +       ials = (struct intel_allocator_simple *) ial->priv;
> > +       igt_assert(ials);
> > +       heap = &ials->heap;
> > +
> > +       igt_info("intel_allocator_simple <ial: %p, fd: %d> on "
> > +                "[0x%"PRIx64" : 0x%"PRIx64"]:\n", ial, ial->fd,
> > +                ials->start, ials->end);
> > +
> > +       if (full) {
> > +               igt_info("holes:\n");
> > +               simple_vma_foreach_hole(hole, heap) {
> > +                       igt_info("offset = %"PRIu64" (0x%"PRIx64", "
> > +                                "size = %"PRIu64" (0x%"PRIx64")\n",
> > +                                hole->offset, hole->offset, hole->size,
> > +                                hole->size);
> > +                       total_free += hole->size;
> > +               }
> > +               igt_assert(total_free <= ials->total_size);
> > +               igt_info("total_free: %" PRIx64
> > +                        ", total_size: %" PRIx64
> > +                        ", allocated_size: %" PRIx64
> > +                        ", reserved_size: %" PRIx64 "\n",
> > +                        total_free, ials->total_size, ials->allocated_size,
> > +                        ials->reserved_size);
> > +               igt_assert(total_free ==
> > +                          ials->total_size - ials->allocated_size - ials->reserved_size);
> > +
> > +               igt_info("objects:\n");
> > +               map = &ials->objects;
> > +               igt_map_for_each(map, i, pos) {
> > +                       struct intel_allocator_record *rec = pos->value;
> > +
> > +                       igt_info("handle = %d, offset = %"PRIu64" "
> > +                               "(0x%"PRIx64", size = %"PRIu64" (0x%"PRIx64")\n",
> > +                                rec->handle, rec->offset, rec->offset,
> > +                                rec->size, rec->size);
> > +                       allocated_objects++;
> > +                       allocated_size += rec->size;
> > +               }
> > +               igt_assert(ials->allocated_size == allocated_size);
> > +               igt_assert(ials->allocated_objects == allocated_objects);
> > +
> > +               igt_info("reserved areas:\n");
> > +               map = &ials->reserved;
> > +               igt_map_for_each(map, i, pos) {
> > +                       struct intel_allocator_record *rec = pos->value;
> > +
> > +                       igt_info("offset = %"PRIu64" (0x%"PRIx64", "
> > +                                "size = %"PRIu64" (0x%"PRIx64")\n",
> > +                                rec->offset, rec->offset,
> > +                                rec->size, rec->size);
> > +                       reserved_areas++;
> > +                       reserved_size += rec->size;
> > +               }
> > +               igt_assert(ials->reserved_areas == reserved_areas);
> > +               igt_assert(ials->reserved_size == reserved_size);
> > +       } else {
> > +               simple_vma_foreach_hole(hole, heap)
> > +                       total_free += hole->size;
> > +       }
> > +
> > +       igt_info("free space: %"PRIu64"B (0x%"PRIx64") (%.2f%% full)\n"
> > +                "allocated objects: %"PRIu64", reserved areas: %"PRIu64"\n",
> > +                total_free, total_free,
> > +                ((double) (ials->total_size - total_free) /
> > +                 (double) ials->total_size) * 100,
> > +                ials->allocated_objects, ials->reserved_areas);
> > +}
> > +
> > +static struct intel_allocator *
> > +__intel_allocator_simple_create(int fd, uint64_t start, uint64_t end,
> > +                               enum allocator_strategy strategy)
> > +{
> > +       struct intel_allocator *ial;
> > +       struct intel_allocator_simple *ials;
> > +
> > +       igt_debug("Using simple allocator\n");
> > +
> > +       ial = calloc(1, sizeof(*ial));
> > +       igt_assert(ial);
> > +
> > +       ial->fd = fd;
> > +       ial->get_address_range = intel_allocator_simple_get_address_range;
> > +       ial->alloc = intel_allocator_simple_alloc;
> > +       ial->free = intel_allocator_simple_free;
> > +       ial->is_allocated = intel_allocator_simple_is_allocated;
> > +       ial->reserve = intel_allocator_simple_reserve;
> > +       ial->unreserve = intel_allocator_simple_unreserve;
> > +       ial->is_reserved = intel_allocator_simple_is_reserved;
> > +       ial->destroy = intel_allocator_simple_destroy;
> > +       ial->is_empty = intel_allocator_simple_is_empty;
> > +       ial->print = intel_allocator_simple_print;
> > +       ials = ial->priv = malloc(sizeof(struct intel_allocator_simple));
> > +       igt_assert(ials);
> > +
> > +       igt_map_init(&ials->objects);
> > +       /* Reserved addresses hashtable is indexed by an offset */
> > +       __igt_map_init(&ials->reserved, equal_8bytes, NULL, 3);
> 
> We have this same problem with Mesa.  Maybe just make the hash map
> take a uint64_t key instead of a void*.  Then it'll work for both
> cases easily at the cost of a little extra memory on 32-bit platforms.
> 
> Looking a bit more, I guess it's not quite as bad as the Mesa case
> because you have the uint64_t key in the object you're adding so you
> don't have to malloc a bunch of uint64_t's just to use as keys.
> 
> > +
> > +       ials->start = start;
> > +       ials->end = end;
> > +       ials->total_size = end - start;
> > +       simple_vma_heap_init(&ials->heap, ials->start, ials->total_size,
> > +                            strategy);
> > +
> > +       ials->allocated_size = 0;
> > +       ials->allocated_objects = 0;
> > +       ials->reserved_size = 0;
> > +       ials->reserved_areas = 0;
> > +
> > +       return ial;
> > +}
> > +
> > +struct intel_allocator *
> > +intel_allocator_simple_create(int fd)
> > +{
> > +       uint64_t gtt_size = gem_aperture_size(fd);
> > +
> > +       if (!gem_uses_full_ppgtt(fd))
> > +               gtt_size /= 2;
> > +       else
> > +               gtt_size -= RESERVED;
> > +
> > +       return __intel_allocator_simple_create(fd, 0, gtt_size,
> > +                                              ALLOC_STRATEGY_HIGH_TO_LOW);
> > +}
> > +
> > +struct intel_allocator *
> > +intel_allocator_simple_create_full(int fd, uint64_t start, uint64_t end,
> > +                                  enum allocator_strategy strategy)
> > +{
> > +       uint64_t gtt_size = gem_aperture_size(fd);
> > +
> > +       igt_assert(end <= gtt_size);
> > +       if (!gem_uses_full_ppgtt(fd))
> > +               gtt_size /= 2;
> > +       igt_assert(end - start <= gtt_size);
> 
> Don't you want just `end <= gtt_size`?  When is something only going
> to use the top half?
> 
> > +
> > +       return __intel_allocator_simple_create(fd, start, end, strategy);
> > +}
> > --
> > 2.26.0
> >


* Re: [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation.
  2021-03-17 20:44           ` Grzegorzek, Dominik
@ 2021-03-18 13:26             ` Grzegorzek, Dominik
  0 siblings, 0 replies; 53+ messages in thread
From: Grzegorzek, Dominik @ 2021-03-18 13:26 UTC (permalink / raw)
  To: Kempczynski, Zbigniew, jason; +Cc: igt-dev

On Wed, 2021-03-17 at 21:44 +0100, Dominik Grzegorzek wrote:
> On Wed, 2021-03-17 at 20:13 +0100, Zbigniew Kempczyński wrote:
> > On Wed, Mar 17, 2021 at 01:02:53PM -0500, Jason Ekstrand wrote:
> > > On Wed, Mar 17, 2021 at 12:43 PM Zbigniew Kempczyński
> > > <zbigniew.kempczynski@intel.com> wrote:
> > > > On Wed, Mar 17, 2021 at 11:43:38AM -0500, Jason Ekstrand wrote:
> > > > > Would it be better to pull the hash table implementation from
> > > > > Mesa?
> > > > > Or use https://cgit.freedesktop.org/~anholt/hash_table/ which
> > > > > should
> > > > > be identical, though it may be a bit out-of-date.  I've
> > > > > poured
> > > > > enough
> > > > > hours of my life into finding and fixing the bugs in the Mesa
> > > > > hash
> > > > > table that IGT rolling its own doesn't really fill me with
> > > > > happy
> > > > > feelings.
> > > > > 
> > > > > --Jason
> > > > 
> > > > I'm not going to play devil's advocate; I'll just ask Dominik
> > > > how confident he is about that implementation. At the very
> > > > least we can provide some stress tests with multithreading
> > > > scenarios.
> > > > 
> > > > He ported the kernel hashtable, so if he didn't introduce a
> > > > non-obvious
> > > > bug it should work.
> > > 
> > > If this is, indeed, a port of the kernel hash table, then it's
> > > probably ok.  Not sure how that works out on licenses, but it
> > > should
> > > be correct.  If this has been freshly hand-rolled expressly for
> > > this
> > > purpose, then it makes me nervous.
> > > 
> > > > Regarding Eric's implementation - we would need to adapt it, at
> > > > least renaming to igt_* and fixing the code which uses it.
> > > > Should take a few days max.
> > > > 
> > > > If you take a look at the
> > > > api_intel_allocator@two-level-inception test,
> > > > you'll see we stress that hashtable heavily from many
> > > > processes / threads and we see no problem with losing data
> > > > coherency
> > > > in the map.
> > > 
> > > I'm less worried about threads, so long as it's locked, as I am
> > > about
> > > weird insert/remove patterns.  Most of the bugs I've had to fix
> > > in
> > > mesa were because certain patterns of insert/remove with the
> > > right
> > > hashes would cause it to blow up.  If most of what you do is just
> > > add
> > > a bunch of stuff without a lot of removal, they can be unlikely
> > > and
> > > hard to find.  That said, given that it's linked list based and
> > > not
> > > re-hash based, a lot of those corner cases are less likely than
> > > they
> > > were with the Mesa implementation.  Again, my primary concern is
> > > with
> > > a NEW hash table impl. :-)
> > 
> > I'm not sure I understood you correctly - it is re-hash based (see
> > igt_map_extend()), and it grows if necessary. It works properly as
> > long as you don't forget about locking during operations (not only
> > add but find too, because it is no fun when someone pulls the rug
> > from under your feet).
> > 
> > All multiprocess/multithread tests in api_intel_allocator heavily
> > exercise insert/remove scenarios (opening/closing the allocator just
> > gives you a new handle inserted into/removed from the handles hash
> > table, so a lack of consistency would be found quickly, imo).
> > 
> > --
> > Zbigniew
> 
> To be clear, this is not an exact copy of the kernel's hash table
> implementation - the one in the kernel is statically sized - but it
> was based on it, and the underlying linked list is an exact kernel
> port. I kept igt_map as simple as possible, just to avoid surprises.
> I also ran some Valgrind tests, as Chris proposed in a previous
> iteration of the series, and the results were clean. This, and the
> fact that it has already been exercised in a vast number of cases
> while the series was being developed, makes me feel confident about
> the implementation.
> 
> ~Dominik
 After consideration I will take the implementation Jason sent.
 That will be much cleaner. Just omit igt_map in the review.
 
 ~Dominik

> 
> > > --Jason
> > > 
> > > > --
> > > > Zbigniew
> > > > > On Wed, Mar 17, 2021 at 9:46 AM Zbigniew Kempczyński
> > > > > <zbigniew.kempczynski@intel.com> wrote:
> > > > > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > > > 
> > > > > > Double linked lists with a single pointer list head
> > > > > > implementation,
> > > > > > based on similar in the kernel.
> > > > > > 
> > > > > > Signed-off-by: Dominik Grzegorzek <
> > > > > > dominik.grzegorzek@intel.com>
> > > > > > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > > > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > > > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > > > ---
> > > > > >  lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > > > > >  lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
> > > > > >  2 files changed, 120 insertions(+), 2 deletions(-)
> > > > > > 
> > > > > > diff --git a/lib/igt_list.c b/lib/igt_list.c
> > > > > > index 37ae139c4..43200f9b3 100644
> > > > > > --- a/lib/igt_list.c
> > > > > > +++ b/lib/igt_list.c
> > > > > > @@ -22,6 +22,7 @@
> > > > > >   *
> > > > > >   */
> > > > > > 
> > > > > > +#include "assert.h"
> > > > > >  #include "igt_list.h"
> > > > > > 
> > > > > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
> > > > > > @@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
> > > > > >  {
> > > > > >         return head->next == head;
> > > > > >  }
> > > > > > +
> > > > > > +void igt_hlist_init(struct igt_hlist_node *h)
> > > > > > +{
> > > > > > +       h->next = NULL;
> > > > > > +       h->pprev = NULL;
> > > > > > +}
> > > > > > +
> > > > > > +int igt_hlist_unhashed(const struct igt_hlist_node *h)
> > > > > > +{
> > > > > > +       return !h->pprev;
> > > > > > +}
> > > > > > +
> > > > > > +int igt_hlist_empty(const struct igt_hlist_head *h)
> > > > > > +{
> > > > > > +       return !h->first;
> > > > > > +}
> > > > > > +
> > > > > > +static void __igt_hlist_del(struct igt_hlist_node *n)
> > > > > > +{
> > > > > > +       struct igt_hlist_node *next = n->next;
> > > > > > +       struct igt_hlist_node **pprev = n->pprev;
> > > > > > +
> > > > > > +       *pprev = next;
> > > > > > +       if (next)
> > > > > > +               next->pprev = pprev;
> > > > > > +}
> > > > > > +
> > > > > > +void igt_hlist_del(struct igt_hlist_node *n)
> > > > > > +{
> > > > > > +       __igt_hlist_del(n);
> > > > > > +       n->next = NULL;
> > > > > > +       n->pprev = NULL;
> > > > > > +}
> > > > > > +
> > > > > > +void igt_hlist_del_init(struct igt_hlist_node *n)
> > > > > > +{
> > > > > > +       if (!igt_hlist_unhashed(n)) {
> > > > > > +               __igt_hlist_del(n);
> > > > > > +               igt_hlist_init(n);
> > > > > > +       }
> > > > > > +}
> > > > > > +
> > > > > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
> > > > > > +{
> > > > > > +       struct igt_hlist_node *first = h->first;
> > > > > > +
> > > > > > +       n->next = first;
> > > > > > +       if (first)
> > > > > > +               first->pprev = &n->next;
> > > > > > +       h->first = n;
> > > > > > +       n->pprev = &h->first;
> > > > > > +}
> > > > > > +
> > > > > > +void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
> > > > > > +{
> > > > > > +       assert(next);
> > > > > > +       n->pprev = next->pprev;
> > > > > > +       n->next = next;
> > > > > > +       next->pprev = &n->next;
> > > > > > +       *(n->pprev) = n;
> > > > > > +}
> > > > > > +
> > > > > > +void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
> > > > > > +{
> > > > > > +       n->next = prev->next;
> > > > > > +       prev->next = n;
> > > > > > +       n->pprev = &prev->next;
> > > > > > +
> > > > > > +       if (n->next)
> > > > > > +               n->next->pprev  = &n->next;
> > > > > > +}
> > > > > > diff --git a/lib/igt_list.h b/lib/igt_list.h
> > > > > > index cc93d7a0d..78e761e05 100644
> > > > > > --- a/lib/igt_list.h
> > > > > > +++ b/lib/igt_list.h
> > > > > > @@ -40,6 +40,10 @@
> > > > > >   * igt_list is a doubly-linked list where an instance of igt_list_head is a
> > > > > >   * head sentinel and has to be initialized.
> > > > > >   *
> > > > > > + * igt_hlist is also a doubly-linked list, but with a single pointer list head.
> > > > > > + * Mostly useful for hash tables where the two pointer list head is
> > > > > > + * too wasteful. You lose the ability to access the tail in O(1).
> > > > > > + *
> > > > > >   * Example usage:
> > > > > >   *
> > > > > >   * |[<!-- language="C" -->
> > > > > > @@ -71,6 +75,13 @@ struct igt_list_head {
> > > > > >         struct igt_list_head *next;
> > > > > >  };
> > > > > > 
> > > > > > +struct igt_hlist_head {
> > > > > > +       struct igt_hlist_node *first;
> > > > > > +};
> > > > > > +
> > > > > > +struct igt_hlist_node {
> > > > > > +       struct igt_hlist_node *next, **pprev;
> > > > > > +};
> > > > > > 
> > > > > >  void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
> > > > > >  void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
> > > > > > @@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
> > > > > >  int igt_list_length(const struct igt_list_head *head);
> > > > > >  bool igt_list_empty(const struct igt_list_head *head);
> > > > > > 
> > > > > > +void igt_hlist_init(struct igt_hlist_node *h);
> > > > > > +int igt_hlist_unhashed(const struct igt_hlist_node *h);
> > > > > > +int igt_hlist_empty(const struct igt_hlist_head *h);
> > > > > > +void igt_hlist_del(struct igt_hlist_node *n);
> > > > > > +void igt_hlist_del_init(struct igt_hlist_node *n);
> > > > > > +void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
> > > > > > +void igt_hlist_add_before(struct igt_hlist_node *n,
> > > > > > +                         struct igt_hlist_node *next);
> > > > > > +void igt_hlist_add_behind(struct igt_hlist_node *n,
> > > > > > +                         struct igt_hlist_node *prev);
> > > > > > +
> > > > > >  #define igt_container_of(ptr, sample, member)                          \
> > > > > >         (__typeof__(sample))((char *)(ptr) -                            \
> > > > > >                                 offsetof(__typeof__(*sample), member))
> > > > > > @@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > > > >   * Safe against removal of the *current* list element. To achieve this it
> > > > > >   * requires an extra helper variable `tmp` with the same type as `pos`.
> > > > > >   */
> > > > > > -#define igt_list_for_each_entry_safe(pos, tmp, head, member)                   \
> > > > > > +
> > > > > > +#define igt_list_for_each_entry_safe(pos, tmp, head, member)           \
> > > > > >         for (pos = igt_container_of((head)->next, pos, member),         \
> > > > > > -            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > > > > +            tmp = igt_container_of((pos)->member.next, tmp, member);   \
> > > > > >              &pos->member != (head);                                    \
> > > > > >              pos = tmp,                                                 \
> > > > > >              tmp = igt_container_of((pos)->member.next, tmp, member))
> > > > > > @@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > > > >              &pos->member != (head);                                    \
> > > > > >              pos = igt_container_of((pos)->member.prev, pos, member))
> > > > > > 
> > > > > > +#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)           \
> > > > > > +       for (pos = igt_container_of((head)->prev, pos, member),         \
> > > > > > +            tmp = igt_container_of((pos)->member.prev, tmp, member);   \
> > > > > > +            &pos->member != (head);                                    \
> > > > > > +            pos = tmp,                                                 \
> > > > > > +            tmp = igt_container_of((pos)->member.prev, tmp, member))
> > > > > > +
> > > > > > +#define igt_hlist_entry_safe(ptr, sample, member) \
> > > > > > +       ({ typeof(ptr) ____ptr = (ptr); \
> > > > > > +          ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
> > > > > > +       })
> > > > > > +
> > > > > > +#define igt_hlist_for_each_entry(pos, head, member)                    \
> > > > > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > > > > +            pos;                                                       \
> > > > > > +            pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
> > > > > > +
> > > > > > +#define igt_hlist_for_each_entry_safe(pos, n, head, member)            \
> > > > > > +       for (pos = igt_hlist_entry_safe((head)->first, pos, member);    \
> > > > > > +            pos && ({ n = pos->member.next; 1; });                     \
> > > > > > +            pos = igt_hlist_entry_safe(n, pos, member))
> > > > > > 
> > > > > >  /* IGT custom helpers */
> > > > > > 
> > > > > > @@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
> > > > > >  #define igt_list_last_entry(head, type, member) \
> > > > > >         igt_container_of((head)->prev, (type), member)
> > > > > > 
> > > > > > +#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
> > > > > > +
> > > > > >  #endif /* IGT_LIST_H */
> > > > > > --
> > > > > > 2.26.0
> > > > > > 
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid
  2021-03-18  9:07   ` Petri Latvala
@ 2021-03-18 13:48     ` Zbigniew Kempczyński
  2021-03-18 15:17       ` Petri Latvala
  0 siblings, 1 reply; 53+ messages in thread
From: Zbigniew Kempczyński @ 2021-03-18 13:48 UTC (permalink / raw)
  To: Petri Latvala; +Cc: igt-dev

On Thu, Mar 18, 2021 at 11:07:46AM +0200, Petri Latvala wrote:
> On Wed, Mar 17, 2021 at 03:45:39PM +0100, Zbigniew Kempczyński wrote:
> > Introduce variables which can decrease the number of getpid()/gettid()
> > calls, especially for the allocator, which must be aware of the
> > method of acquiring addresses.
> > 
> > When a child is spawned using igt_fork() we can control its
> > initialization and set child_pid implicitly. Tracking child_tid
> > requires our intervention in the code, doing something like this:
> > 
> > if (child_tid == -1)
> > 	child_tid = gettid()
> > 
> > The variable is placed in TLS, so each thread starts with it set to
> > -1. This gives each thread its own "copy" and there is no risk of
> > using another thread's tid. For each forked child we reassign -1 to
> > child_tid to avoid using an already-set value.
> > 
> > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Petri Latvala <petri.latvala@intel.com>
> > Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
> > ---
> >  lib/igt_core.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/lib/igt_core.c b/lib/igt_core.c
> > index f9dfaa0dd..2b4182f16 100644
> > --- a/lib/igt_core.c
> > +++ b/lib/igt_core.c
> > @@ -306,6 +306,10 @@ int num_test_children;
> >  int test_children_sz;
> >  bool test_child;
> >  
> > +/* For allocator purposes */
> > +pid_t child_pid  = -1;
> > +__thread pid_t child_tid  = -1;
> > +
> >  enum {
> >  	/*
> >  	 * Let the first values be used by individual tests so options don't
> > @@ -2302,6 +2306,8 @@ bool __igt_fork(void)
> >  	case 0:
> >  		test_child = true;
> >  		pthread_mutex_init(&print_mutex, NULL);
> > +		child_pid = getpid();
> > +		child_tid = -1;
> >  		exit_handler_count = 0;
> >  		reset_helper_process_list();
> >  		oom_adjust_for_doom();
> 
> 
> What about igt_fork_helper?
> 
> -- 
> Petri Latvala

I'm not sure I understood your concerns. Do you mean
that helpers spawned by igt_fork_helper() should also
be allowed to use the allocator infrastructure?

--
Zbigniew

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid
  2021-03-18 13:48     ` Zbigniew Kempczyński
@ 2021-03-18 15:17       ` Petri Latvala
  0 siblings, 0 replies; 53+ messages in thread
From: Petri Latvala @ 2021-03-18 15:17 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev

On Thu, Mar 18, 2021 at 02:48:36PM +0100, Zbigniew Kempczyński wrote:
> On Thu, Mar 18, 2021 at 11:07:46AM +0200, Petri Latvala wrote:
> > On Wed, Mar 17, 2021 at 03:45:39PM +0100, Zbigniew Kempczyński wrote:
> > > Introduce variables which can decrease the number of getpid()/gettid()
> > > calls, especially for the allocator, which must be aware of the
> > > method of acquiring addresses.
> > > 
> > > When a child is spawned using igt_fork() we can control its
> > > initialization and set child_pid implicitly. Tracking child_tid
> > > requires our intervention in the code, doing something like this:
> > > 
> > > if (child_tid == -1)
> > > 	child_tid = gettid()
> > > 
> > > The variable is placed in TLS, so each thread starts with it set to
> > > -1. This gives each thread its own "copy" and there is no risk of
> > > using another thread's tid. For each forked child we reassign -1 to
> > > child_tid to avoid using an already-set value.
> > > 
> > > Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
> > > Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > Cc: Petri Latvala <petri.latvala@intel.com>
> > > Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
> > > ---
> > >  lib/igt_core.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > > 
> > > diff --git a/lib/igt_core.c b/lib/igt_core.c
> > > index f9dfaa0dd..2b4182f16 100644
> > > --- a/lib/igt_core.c
> > > +++ b/lib/igt_core.c
> > > @@ -306,6 +306,10 @@ int num_test_children;
> > >  int test_children_sz;
> > >  bool test_child;
> > >  
> > > +/* For allocator purposes */
> > > +pid_t child_pid  = -1;
> > > +__thread pid_t child_tid  = -1;
> > > +
> > >  enum {
> > >  	/*
> > >  	 * Let the first values be used by individual tests so options don't
> > > @@ -2302,6 +2306,8 @@ bool __igt_fork(void)
> > >  	case 0:
> > >  		test_child = true;
> > >  		pthread_mutex_init(&print_mutex, NULL);
> > > +		child_pid = getpid();
> > > +		child_tid = -1;
> > >  		exit_handler_count = 0;
> > >  		reset_helper_process_list();
> > >  		oom_adjust_for_doom();
> > 
> > 
> > What about igt_fork_helper?
> > 
> > -- 
> > Petri Latvala
> 
> I'm not sure I understood your concerns. Do you mean
> that helpers spawned by igt_fork_helper() should also
> be allowed to use the allocator infrastructure?

Not sure if they need to, but maintaining their pid/tid tracking might
be useful for other things. I might have a big patch series coming that
could use it...

Either way it's a discrepancy, not a real concern at this point.


-- 
Petri Latvala

^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2021-03-18 15:16 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-17 14:45 [igt-dev] [PATCH i-g-t v26 00/35] Introduce IGT allocator Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 01/35] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
2021-03-17 16:40   ` Jason Ekstrand
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 02/35] lib/igt_list: igt_hlist implementation Zbigniew Kempczyński
2021-03-17 16:43   ` Jason Ekstrand
2021-03-17 17:43     ` Zbigniew Kempczyński
2021-03-17 18:02       ` Jason Ekstrand
2021-03-17 19:13         ` Zbigniew Kempczyński
2021-03-17 20:44           ` Grzegorzek, Dominik
2021-03-18 13:26             ` Grzegorzek, Dominik
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 03/35] lib/igt_map: Introduce igt_map Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 04/35] lib/igt_core: Track child process pid and tid Zbigniew Kempczyński
2021-03-18  9:07   ` Petri Latvala
2021-03-18 13:48     ` Zbigniew Kempczyński
2021-03-18 15:17       ` Petri Latvala
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 05/35] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
2021-03-17 19:38   ` Jason Ekstrand
2021-03-18 10:40     ` Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 06/35] lib/intel_allocator_reloc: Add reloc allocator Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 07/35] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 08/35] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 11/35] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
2021-03-17 21:33   ` Jason Ekstrand
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
2021-03-17 21:36   ` Jason Ekstrand
2021-03-18  7:32     ` Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 18/35] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 19/35] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 20/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 21/35] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 22/35] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 23/35] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
2021-03-17 14:45 ` [igt-dev] [PATCH i-g-t v26 24/35] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 25/35] tests/api_intel_bb: Check switching vm in intel-bb Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 26/35] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 27/35] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 28/35] tests/api_intel_allocator: Verify child can use its standalone allocator Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 30/35] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 34/35] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
2021-03-17 14:46 ` [igt-dev] [PATCH i-g-t v26 35/35] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
2021-03-17 16:03 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev29) Patchwork
2021-03-17 17:47 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
