* [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

This series introduces the intel-allocator inside IGT.

v2: add comment/commit msg for allocator core
v3: enforce relocs in gem_linear_blits for ppgtt sizes <= 32b
v4: use addresses only from ppgtt range in check-canonical
v5: use relocations as default up to gen12 to avoid losing
    coverage.
v6: migrate memory check in gem_ppgtt to avoid exec serialization
v7: keep decanonical address within intel-bb, remove check-canonical
    test
v8: adopt delta-check to use intel-bb with allocator to fully control
    addresses
v9: fix intel_allocator_is_reserved() prototype (Andrzej)
v10: separate allocator initialization for multiprocess execution
    to compile and run against address sanitizer
v11: rebase on top of intel-bb and aux pgtable changes
v12: rebase, fixing api-intel-bb
v13: rebase
v14: gem_softpin: fix handle leak, add allocator tests
v15: gem_softpin: add allocator-basic and allocator-basic-reserve to BAT
     add reserve/unreserve to work with full ppgtt without bias/reservation
     from the end of vm
v16: addressing review comments - simplify code, add comments (Chris)
v17: adding SPDX license for all new files introduced in the series
v18: fix intel_bb_add/remove intel_buf list handling from O(n)->O(1) (Chris)
v19: - add igt_list_del_init()
     - migrate gem_has_relocations() check to gem_submission
     - add simple tests (intel-bb and pure execbuf) with allocator usage
       as copy-paste examples
v20: - add possibility to define vm range in simple allocator (default
       just excludes last-page)
     - add test which could reveal render issue on last-page
     - addressing review comments
     - reorder patches to avoid compilation break
     The series doesn't cover fd/vm_id (it still assumes we use the
     allocator with a context).

Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>

Dominik Grzegorzek (5):
  lib/igt_list: igt_hlist implementation.
  lib/igt_map: Introduce igt_map
  lib/intel_allocator_simple: Add simple allocator
  tests/api_intel_allocator: Simple allocator test suite
  tests/gem_linear_blits: Use intel allocator

Zbigniew Kempczyński (30):
  lib/gem_submission: Add gem_has_relocations() check
  lib/igt_list: Add igt_list_del_init()
  lib/igt_core: Track child process pid and tid
  lib/intel_allocator_random: Add random allocator
  lib/intel_allocator: Add intel_allocator core
  lib/intel_allocator: Try to stop smoothly instead of deinit
  lib/intel_allocator_msgchannel: Scale to 4k of parallel clients
  lib/intel_allocator: Separate allocator multiprocess start
  lib/intel_bufops: Change size from 32->64 bit
  lib/intel_bufops: Add init with handle and size function
  lib/intel_batchbuffer: Integrate intel_bb with allocator
  lib/intel_batchbuffer: Use relocations in intel-bb up to gen12
  lib/intel_batchbuffer: Create bb with strategy / vm ranges
  lib/intel_batchbuffer: Add tracking intel_buf to intel_bb
  lib/igt_fb: Initialize intel_buf with same size as fb
  tests/api_intel_bb: Modify test to verify intel_bb with allocator
  tests/api_intel_bb: Add subtest to check render batch on the last page
  tests/api_intel_bb: Add compressed->compressed copy
  tests/api_intel_bb: Add purge-bb test
  tests/api_intel_bb: Remove check-canonical test
  tests/api_intel_bb: Add simple intel-bb which uses allocator
  tests/api_intel_bb: Use allocator in delta-check test
  tests/api_intel_allocator: Prepare to run with sanitizer
  tests/api_intel_allocator: Add execbuf with allocator example
  tests/gem_softpin: Verify allocator and execbuf pair work together
  tests/gem|kms: Remove intel_bb from fixture
  tests/gem_mmap_offset: Use intel_buf wrapper code instead of direct
  tests/gem_ppgtt: Adopt test to use intel_bb with allocator
  tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator
  tests/perf.c: Remove buffer from batch

 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |    2 +
 lib/Makefile.sources                          |    8 +
 lib/i915/gem_submission.c                     |   30 +
 lib/i915/gem_submission.h                     |    1 +
 lib/igt_core.c                                |   20 +
 lib/igt_fb.c                                  |   10 +-
 lib/igt_list.c                                |   78 ++
 lib/igt_list.h                                |   51 +-
 lib/igt_map.c                                 |  131 +++
 lib/igt_map.h                                 |  104 ++
 lib/intel_allocator.c                         | 1042 +++++++++++++++++
 lib/intel_allocator.h                         |  163 +++
 lib/intel_allocator_msgchannel.c              |  195 +++
 lib/intel_allocator_msgchannel.h              |  149 +++
 lib/intel_allocator_random.c                  |  204 ++++
 lib/intel_allocator_simple.c                  |  748 ++++++++++++
 lib/intel_aux_pgtable.c                       |   26 +-
 lib/intel_batchbuffer.c                       |  648 +++++++---
 lib/intel_batchbuffer.h                       |   34 +-
 lib/intel_bufops.c                            |   63 +-
 lib/intel_bufops.h                            |   20 +-
 lib/media_spin.c                              |    2 -
 lib/meson.build                               |    5 +
 tests/i915/api_intel_allocator.c              |  632 ++++++++++
 tests/i915/api_intel_bb.c                     |  698 ++++++++---
 tests/i915/gem_caching.c                      |   14 +-
 tests/i915/gem_linear_blits.c                 |  117 +-
 tests/i915/gem_mmap_offset.c                  |    4 +-
 tests/i915/gem_partial_pwrite_pread.c         |   40 +-
 tests/i915/gem_ppgtt.c                        |    6 +
 tests/i915/gem_render_copy.c                  |   31 +-
 tests/i915/gem_render_copy_redux.c            |   24 +-
 tests/i915/gem_softpin.c                      |  194 +++
 tests/i915/perf.c                             |    9 +
 tests/intel-ci/fast-feedback.testlist         |    2 +
 tests/kms_big_fb.c                            |   12 +-
 tests/meson.build                             |    1 +
 37 files changed, 5045 insertions(+), 473 deletions(-)
 create mode 100644 lib/igt_map.c
 create mode 100644 lib/igt_map.h
 create mode 100644 lib/intel_allocator.c
 create mode 100644 lib/intel_allocator.h
 create mode 100644 lib/intel_allocator_msgchannel.c
 create mode 100644 lib/intel_allocator_msgchannel.h
 create mode 100644 lib/intel_allocator_random.c
 create mode 100644 lib/intel_allocator_simple.c
 create mode 100644 tests/i915/api_intel_allocator.c

-- 
2.26.0


* [igt-dev] [PATCH i-g-t 01/35] lib/gem_submission: Add gem_has_relocations() check
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

Add a check which probes whether the kernel supports relocations
for the i915 device.
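
A minimal usage sketch (hypothetical subtest, not part of this patch):
tests can gate relocation-dependent paths on the new helper:

	igt_subtest("basic-reloc") {
		igt_require(gem_has_relocations(i915));
		/* ... exercise execbuf with relocations ... */
	}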

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
---
 lib/i915/gem_submission.c | 30 ++++++++++++++++++++++++++++++
 lib/i915/gem_submission.h |  1 +
 2 files changed, 31 insertions(+)

diff --git a/lib/i915/gem_submission.c b/lib/i915/gem_submission.c
index 320340a5d..051f9d046 100644
--- a/lib/i915/gem_submission.c
+++ b/lib/i915/gem_submission.c
@@ -398,3 +398,33 @@ unsigned int gem_submission_measure(int i915, unsigned int engine)
 
 	return size;
 }
+
+/**
+ * gem_has_relocations:
+ * @i915: opened i915 drm file descriptor
+ *
+ * Feature test helper to query whether the kernel supports
+ * relocations for this i915 device.
+ *
+ * Returns: true if we can use relocations, otherwise false
+ */
+
+bool gem_has_relocations(int i915)
+{
+	struct drm_i915_gem_relocation_entry reloc = {};
+	struct drm_i915_gem_exec_object2 obj = {
+		.handle = gem_create(i915, 4096),
+		.relocs_ptr = to_user_pointer(&reloc),
+		.relocation_count = 1,
+	};
+	struct drm_i915_gem_execbuffer2 execbuf = {
+		.buffers_ptr = to_user_pointer(&obj),
+		.buffer_count = 1,
+	};
+	bool has_relocs;
+
+	has_relocs = __gem_execbuf(i915, &execbuf) == -ENOENT;
+	gem_close(i915, obj.handle);
+
+	return has_relocs;
+}
diff --git a/lib/i915/gem_submission.h b/lib/i915/gem_submission.h
index 773e7b516..0faba6a3e 100644
--- a/lib/i915/gem_submission.h
+++ b/lib/i915/gem_submission.h
@@ -49,5 +49,6 @@ void gem_require_blitter(int i915);
 unsigned int gem_submission_measure(int i915, unsigned int engine);
 
 void gem_test_engine(int fd, unsigned int engine);
+bool gem_has_relocations(int fd);
 
 #endif /* GEM_SUBMISSION_H */
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 02/35] lib/igt_list: Add igt_list_del_init()
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

Add a helper function to delete and reinitialize a list element.
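
A short sketch (hypothetical caller, not part of this patch): after
igt_list_del_init() the node points at itself, so it can be tested with
igt_list_empty() and deleted again, unlike after plain igt_list_del(),
which leaves NULL pointers behind:

	igt_list_del_init(&obj->link);
	igt_assert(igt_list_empty(&obj->link));
	/* A second deletion is now harmless. */
	igt_list_del_init(&obj->link);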

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_list.c | 6 ++++++
 lib/igt_list.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/lib/igt_list.c b/lib/igt_list.c
index 5e30b19b6..37ae139c4 100644
--- a/lib/igt_list.c
+++ b/lib/igt_list.c
@@ -46,6 +46,12 @@ void igt_list_del(struct igt_list_head *elem)
 	elem->prev = NULL;
 }
 
+void igt_list_del_init(struct igt_list_head *elem)
+{
+	igt_list_del(elem);
+	IGT_INIT_LIST_HEAD(elem);
+}
+
 void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list)
 {
 	igt_list_del(elem);
diff --git a/lib/igt_list.h b/lib/igt_list.h
index dbf5f802c..cc93d7a0d 100644
--- a/lib/igt_list.h
+++ b/lib/igt_list.h
@@ -75,6 +75,7 @@ struct igt_list_head {
 void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
 void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
 void igt_list_del(struct igt_list_head *elem);
+void igt_list_del_init(struct igt_list_head *elem);
 void igt_list_move(struct igt_list_head *elem, struct igt_list_head *list);
 void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
 int igt_list_length(const struct igt_list_head *head);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 03/35] lib/igt_list: igt_hlist implementation.
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

A doubly-linked list with a single-pointer list head, based on the
similar implementation in the kernel.
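
A short sketch (hypothetical hash-bucket usage, not part of this patch):

	struct object {
		uint32_t key;
		struct igt_hlist_node node;
	};

	struct igt_hlist_head bucket;
	struct object obj = { .key = 1 }, *pos;

	IGT_INIT_HLIST_HEAD(&bucket);
	igt_hlist_add_head(&obj.node, &bucket);
	igt_hlist_for_each_entry(pos, &bucket, node)
		igt_info("key: %u\n", pos->key);
	igt_hlist_del_init(&obj.node);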

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_list.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/igt_list.h | 50 +++++++++++++++++++++++++++++++++--
 2 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/lib/igt_list.c b/lib/igt_list.c
index 37ae139c4..43200f9b3 100644
--- a/lib/igt_list.c
+++ b/lib/igt_list.c
@@ -22,6 +22,7 @@
  *
  */
 
+#include "assert.h"
 #include "igt_list.h"
 
 void IGT_INIT_LIST_HEAD(struct igt_list_head *list)
@@ -81,3 +82,74 @@ bool igt_list_empty(const struct igt_list_head *head)
 {
 	return head->next == head;
 }
+
+void igt_hlist_init(struct igt_hlist_node *h)
+{
+	h->next = NULL;
+	h->pprev = NULL;
+}
+
+int igt_hlist_unhashed(const struct igt_hlist_node *h)
+{
+	return !h->pprev;
+}
+
+int igt_hlist_empty(const struct igt_hlist_head *h)
+{
+	return !h->first;
+}
+
+static void __igt_hlist_del(struct igt_hlist_node *n)
+{
+	struct igt_hlist_node *next = n->next;
+	struct igt_hlist_node **pprev = n->pprev;
+
+	*pprev = next;
+	if (next)
+		next->pprev = pprev;
+}
+
+void igt_hlist_del(struct igt_hlist_node *n)
+{
+	__igt_hlist_del(n);
+	n->next = NULL;
+	n->pprev = NULL;
+}
+
+void igt_hlist_del_init(struct igt_hlist_node *n)
+{
+	if (!igt_hlist_unhashed(n)) {
+		__igt_hlist_del(n);
+		igt_hlist_init(n);
+	}
+}
+
+void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h)
+{
+	struct igt_hlist_node *first = h->first;
+
+	n->next = first;
+	if (first)
+		first->pprev = &n->next;
+	h->first = n;
+	n->pprev = &h->first;
+}
+
+void igt_hlist_add_before(struct igt_hlist_node *n, struct igt_hlist_node *next)
+{
+	assert(next);
+	n->pprev = next->pprev;
+	n->next = next;
+	next->pprev = &n->next;
+	*(n->pprev) = n;
+}
+
+void igt_hlist_add_behind(struct igt_hlist_node *n, struct igt_hlist_node *prev)
+{
+	n->next = prev->next;
+	prev->next = n;
+	n->pprev = &prev->next;
+
+	if (n->next)
+		n->next->pprev  = &n->next;
+}
diff --git a/lib/igt_list.h b/lib/igt_list.h
index cc93d7a0d..78e761e05 100644
--- a/lib/igt_list.h
+++ b/lib/igt_list.h
@@ -40,6 +40,10 @@
  * igt_list is a doubly-linked list where an instance of igt_list_head is a
  * head sentinel and has to be initialized.
  *
+ * igt_hlist is also a doubly-linked list, but with a single-pointer list
+ * head. Mostly useful for hash tables, where a two-pointer list head is
+ * too wasteful. You lose the ability to access the tail in O(1).
+ *
  * Example usage:
  *
  * |[<!-- language="C" -->
@@ -71,6 +75,13 @@ struct igt_list_head {
 	struct igt_list_head *next;
 };
 
+struct igt_hlist_head {
+	struct igt_hlist_node *first;
+};
+
+struct igt_hlist_node {
+	struct igt_hlist_node *next, **pprev;
+};
 
 void IGT_INIT_LIST_HEAD(struct igt_list_head *head);
 void igt_list_add(struct igt_list_head *elem, struct igt_list_head *head);
@@ -81,6 +92,17 @@ void igt_list_move_tail(struct igt_list_head *elem, struct igt_list_head *list);
 int igt_list_length(const struct igt_list_head *head);
 bool igt_list_empty(const struct igt_list_head *head);
 
+void igt_hlist_init(struct igt_hlist_node *h);
+int igt_hlist_unhashed(const struct igt_hlist_node *h);
+int igt_hlist_empty(const struct igt_hlist_head *h);
+void igt_hlist_del(struct igt_hlist_node *n);
+void igt_hlist_del_init(struct igt_hlist_node *n);
+void igt_hlist_add_head(struct igt_hlist_node *n, struct igt_hlist_head *h);
+void igt_hlist_add_before(struct igt_hlist_node *n,
+			  struct igt_hlist_node *next);
+void igt_hlist_add_behind(struct igt_hlist_node *n,
+			  struct igt_hlist_node *prev);
+
 #define igt_container_of(ptr, sample, member)				\
 	(__typeof__(sample))((char *)(ptr) -				\
 				offsetof(__typeof__(*sample), member))
@@ -96,9 +118,10 @@ bool igt_list_empty(const struct igt_list_head *head);
  * Safe against removal of the *current* list element. To achieve this it
  * requires an extra helper variable `tmp` with the same type as `pos`.
  */
-#define igt_list_for_each_entry_safe(pos, tmp, head, member)			\
+
+#define igt_list_for_each_entry_safe(pos, tmp, head, member)		\
 	for (pos = igt_container_of((head)->next, pos, member),		\
-	     tmp = igt_container_of((pos)->member.next, tmp, member); 	\
+	     tmp = igt_container_of((pos)->member.next, tmp, member);	\
 	     &pos->member != (head);					\
 	     pos = tmp,							\
 	     tmp = igt_container_of((pos)->member.next, tmp, member))
@@ -108,6 +131,27 @@ bool igt_list_empty(const struct igt_list_head *head);
 	     &pos->member != (head);					\
 	     pos = igt_container_of((pos)->member.prev, pos, member))
 
+#define igt_list_for_each_entry_safe_reverse(pos, tmp, head, member)		\
+	for (pos = igt_container_of((head)->prev, pos, member),		\
+	     tmp = igt_container_of((pos)->member.prev, tmp, member);	\
+	     &pos->member != (head);					\
+	     pos = tmp,							\
+	     tmp = igt_container_of((pos)->member.prev, tmp, member))
+
+#define igt_hlist_entry_safe(ptr, sample, member) \
+	({ typeof(ptr) ____ptr = (ptr); \
+	   ____ptr ? igt_container_of(____ptr, sample, member) : NULL; \
+	})
+
+#define igt_hlist_for_each_entry(pos, head, member)			\
+	for (pos = igt_hlist_entry_safe((head)->first, pos, member);	\
+	     pos;							\
+	     pos = igt_hlist_entry_safe((pos)->member.next, pos, member))
+
+#define igt_hlist_for_each_entry_safe(pos, n, head, member)		\
+	for (pos = igt_hlist_entry_safe((head)->first, pos, member);	\
+	     pos && ({ n = pos->member.next; 1; });			\
+	     pos = igt_hlist_entry_safe(n, pos, member))
 
 /* IGT custom helpers */
 
@@ -127,4 +171,6 @@ bool igt_list_empty(const struct igt_list_head *head);
 #define igt_list_last_entry(head, type, member) \
 	igt_container_of((head)->prev, (type), member)
 
+#define IGT_INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
+
 #endif /* IGT_LIST_H */
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 04/35] lib/igt_map: Introduce igt_map
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Implementation of a generic, non-thread-safe, dynamically sized hash map.
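
A short sketch (hypothetical 8-byte-key usage, not part of this patch):
keys wider than the default four bytes need a custom equal callback
passed to __igt_map_init():

	static bool equal_u64(const void *k1, const void *k2)
	{
		return *(const uint64_t *)k1 == *(const uint64_t *)k2;
	}

	struct igt_map map;

	/* NULL hash_fn keeps the default (hashes the first 4 bytes
	 * of the key); 3 initial capacity bits. */
	__igt_map_init(&map, equal_u64, NULL, 3);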

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |   1 +
 lib/Makefile.sources                          |   2 +
 lib/igt_map.c                                 | 131 ++++++++++++++++++
 lib/igt_map.h                                 | 104 ++++++++++++++
 lib/meson.build                               |   1 +
 5 files changed, 239 insertions(+)
 create mode 100644 lib/igt_map.c
 create mode 100644 lib/igt_map.h

diff --git a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
index 9c9aa8f1d..bf5ac5428 100644
--- a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
+++ b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
@@ -33,6 +33,7 @@
     <xi:include href="xml/igt_kmod.xml"/>
     <xi:include href="xml/igt_kms.xml"/>
     <xi:include href="xml/igt_list.xml"/>
+    <xi:include href="xml/igt_map.xml"/>
     <xi:include href="xml/igt_pm.xml"/>
     <xi:include href="xml/igt_primes.xml"/>
     <xi:include href="xml/igt_rand.xml"/>
diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index 4f6389f8a..84fd7b49c 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -48,6 +48,8 @@ lib_source_list =	 	\
 	igt_infoframe.h		\
 	igt_list.c		\
 	igt_list.h		\
+	igt_map.c		\
+	igt_map.h		\
 	igt_matrix.c		\
 	igt_matrix.h		\
 	igt_params.c		\
diff --git a/lib/igt_map.c b/lib/igt_map.c
new file mode 100644
index 000000000..ca58efbd9
--- /dev/null
+++ b/lib/igt_map.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt_map.h"
+#include "igt.h"
+#include "igt_x86.h"
+
+#define MIN_CAPACITY_BITS 2
+
+static bool igt_map_filled(struct igt_map *map)
+{
+	return IGT_MAP_CAPACITY(map) * 4 / 5 <= map->size;
+}
+
+static void igt_map_extend(struct igt_map *map)
+{
+	struct igt_hlist_head *new_heads;
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	uint32_t new_bits = map->capacity_bits + 1;
+	int i;
+
+	new_heads = calloc(1ULL << new_bits,
+			   sizeof(struct igt_hlist_head));
+	igt_assert(new_heads);
+
+	igt_map_for_each_safe(map, i, tmp, pos)
+		igt_hlist_add_head(&pos->link, &new_heads[map->hash_fn(pos->key, new_bits)]);
+
+	free(map->heads);
+	map->capacity_bits++;
+
+	map->heads = new_heads;
+}
+
+static bool equal_4bytes(const void *key1, const void *key2)
+{
+	const uint32_t *k1 = key1, *k2 = key2;
+	return *k1 == *k2;
+}
+
+/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
+#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL
+
+static inline uint64_t hash_64_4bytes(const void *val, unsigned int bits)
+{
+	uint64_t hash = *(uint32_t *)val;
+
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+	/* High bits are more random, so use them. */
+	return hash >> (64 - bits);
+}
+
+void __igt_map_init(struct igt_map *map, igt_map_equal_fn eq_fn,
+		 igt_map_hash_fn hash_fn, uint32_t initial_bits)
+{
+	map->equal_fn = eq_fn == NULL ? equal_4bytes : eq_fn;
+	map->hash_fn = hash_fn == NULL ? hash_64_4bytes : hash_fn;
+	map->capacity_bits = initial_bits > 0 ? initial_bits
+					      : MIN_CAPACITY_BITS;
+	map->heads = calloc(IGT_MAP_CAPACITY(map),
+			    sizeof(struct igt_hlist_head));
+
+	igt_assert(map->heads);
+	map->size = 0;
+}
+
+void igt_map_add(struct igt_map *map, const void *key, void *value)
+{
+	struct igt_map_entry *entry;
+
+	if (igt_map_filled(map))
+		igt_map_extend(map);
+
+	entry = malloc(sizeof(struct igt_map_entry));
+	entry->value = value;
+	entry->key = key;
+	igt_hlist_add_head(&entry->link,
+			   &map->heads[map->hash_fn(key, map->capacity_bits)]);
+	map->size++;
+}
+
+void *igt_map_del(struct igt_map *map, const void *key)
+{
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	void *val = NULL;
+
+	igt_map_for_each_possible_safe(map, pos, tmp, key) {
+		if (map->equal_fn(pos->key, key)) {
+			igt_hlist_del(&pos->link);
+			val = pos->value;
+			free(pos);
+		}
+	}
+	return val;
+}
+
+void *igt_map_find(struct igt_map *map, const void *key)
+{
+	struct igt_map_entry *pos = NULL;
+
+	igt_map_for_each_possible(map, pos, key)
+		if (map->equal_fn(pos->key, key))
+			break;
+
+	return pos ? pos->value : NULL;
+}
+
+void igt_map_free(struct igt_map *map)
+{
+	struct igt_map_entry *pos;
+	struct igt_hlist_node *tmp;
+	int i;
+
+	igt_map_for_each_safe(map, i, tmp, pos) {
+		igt_hlist_del(&pos->link);
+		free(pos);
+	}
+
+	free(map->heads);
+}
+
+bool igt_map_empty(struct igt_map *map)
+{
+	return !map->size;
+}
diff --git a/lib/igt_map.h b/lib/igt_map.h
new file mode 100644
index 000000000..7924c8911
--- /dev/null
+++ b/lib/igt_map.h
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef IGT_MAP_H
+#define IGT_MAP_H
+
+#include <stdint.h>
+#include "igt_list.h"
+
+/**
+ * SECTION:igt_map
+ * @short_description: a dynamically sized hashmap implementation
+ * @title: IGT Map
+ * @include: igt_map.h
+ *
+ * igt_map is a dynamically sized, non-thread-safe implementation of a hashmap.
+ * This map grows exponentially when it's over 80% filled. The structure allows
+ * indexing records by any key through the hash and equal functions.
+ * By default, hashmap compares and hashes the first four bytes of a key.
+ *
+ * Example usage:
+ *
+ * |[<!-- language="C" -->
+ * struct igt_map *map;
+ *
+ * struct record {
+ * 	int foo;
+ * 	uint32_t unique_identifier;
+ * };
+ *
+ * struct record r1, r2, *r_ptr;
+ *
+ * map = malloc(sizeof(*map));
+ * igt_map_init(map); // initialize the map with default parameters
+ * igt_map_add(map, &r1.unique_identifier, &r1);
+ * igt_map_add(map, &r2.unique_identifier, &r2);
+ *
+ * struct igt_map_entry *pos; int i;
+ * igt_map_for_each(map, i, pos) {
+ * 	r_ptr = pos->value;
+ * 	printf("key: %u, foo: %d\n", *(uint32_t*) pos->key, r_ptr->foo);
+ * }
+ *
+ * uint32_t key = r1.unique_identifier;
+ * r_ptr = igt_map_find(map, &key); // get r1
+ *
+ * r_ptr = igt_map_del(map, &r2.unique_identifier);
+ * if (r_ptr)
+ * 	printf("record with key %u deleted\n", r_ptr->unique_identifier);
+ *
+ * igt_map_free(map);
+ * free(map);
+ * ]|
+ */
+
+typedef bool (*igt_map_equal_fn)(const void *key1, const void *key2);
+typedef uint64_t (*igt_map_hash_fn)(const void *val, unsigned int bits);
+struct igt_map {
+	uint32_t size;
+	uint32_t capacity_bits;
+	igt_map_equal_fn equal_fn;
+	igt_map_hash_fn hash_fn;
+	struct igt_hlist_head *heads;
+};
+
+struct igt_map_entry {
+	const void *key;
+	void *value;
+	struct igt_hlist_node link;
+};
+
+void __igt_map_init(struct igt_map *map, igt_map_equal_fn eq_fn,
+		  igt_map_hash_fn hash_fn, uint32_t initial_bits);
+void igt_map_add(struct igt_map *map, const void *key, void *value);
+void *igt_map_del(struct igt_map *map, const void *key);
+void *igt_map_find(struct igt_map *map, const void *key);
+void igt_map_free(struct igt_map *map);
+bool igt_map_empty(struct igt_map *map);
+
+#define igt_map_init(map) __igt_map_init(map, NULL, NULL, 8)
+
+#define IGT_MAP_CAPACITY(map) (1ULL << map->capacity_bits)
+
+#define igt_map_for_each_safe(map, bkt, tmp, obj) \
+	for ((bkt) = 0, obj = NULL; obj == NULL && (bkt) < IGT_MAP_CAPACITY(map); \
+	(bkt)++)\
+	igt_hlist_for_each_entry_safe(obj, tmp, &map->heads[bkt], link)
+
+#define igt_map_for_each(map, bkt, obj) \
+	for ((bkt) = 0, obj = NULL; obj == NULL && (bkt) < IGT_MAP_CAPACITY(map); \
+	(bkt)++)\
+	igt_hlist_for_each_entry(obj, &map->heads[bkt], link)
+
+#define igt_map_for_each_possible(map, obj, key) \
+	igt_hlist_for_each_entry(obj, \
+		&map->heads[map->hash_fn(key, map->capacity_bits)], link)
+
+#define igt_map_for_each_possible_safe(map, obj, tmp, key) \
+	igt_hlist_for_each_entry_safe(obj, tmp, \
+		&map->heads[map->hash_fn(key, map->capacity_bits)], link)
+
+#endif /* IGT_MAP_H */
diff --git a/lib/meson.build b/lib/meson.build
index 02ecef53e..c2e176447 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -61,6 +61,7 @@ lib_sources = [
 	'igt_core.c',
 	'igt_draw.c',
 	'igt_list.c',
+	'igt_map.c',
 	'igt_pm.c',
 	'igt_dummyload.c',
 	'uwildmat/uwildmat.c',
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 05/35] lib/igt_core: Track child process pid and tid
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

Introduce variables which decrease the number of getpid()/gettid()
calls, especially for the allocator, which must be aware of how
addresses are acquired.

When a child is spawned using igt_fork() we control its initialization
and can set child_pid implicitly. Tracking child_tid requires our
intervention in the code, doing something like this:

if (child_tid == -1)
	child_tid = gettid();

The variable is declared thread-local (TLS), so each new thread starts
with it set to -1. This gives each thread its own "copy" and removes
the risk of using another thread's tid. For each forked child we reset
child_tid to -1 to avoid inheriting an already-set value.
Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
---
 lib/igt_core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/igt_core.c b/lib/igt_core.c
index f9dfaa0dd..2b4182f16 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -306,6 +306,10 @@ int num_test_children;
 int test_children_sz;
 bool test_child;
 
+/* For allocator purposes */
+pid_t child_pid  = -1;
+__thread pid_t child_tid  = -1;
+
 enum {
 	/*
 	 * Let the first values be used by individual tests so options don't
@@ -2302,6 +2306,8 @@ bool __igt_fork(void)
 	case 0:
 		test_child = true;
 		pthread_mutex_init(&print_mutex, NULL);
+		child_pid = getpid();
+		child_tid = -1;
 		exit_handler_count = 0;
 		reset_helper_process_list();
 		oom_adjust_for_doom();
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 06/35] lib/intel_allocator_simple: Add simple allocator
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Simple allocator borrowed from Mesa and adapted for IGT use.

By default we prefer allocating from the top of the vm address space
(so we can catch addressing issues proactively). When
intel_allocator_simple_create() is used we exclude the last page, as
HW tends to hang on the render engine when the full 3D pipeline is
executed from the last page. For more control over the vm range the
user can specify it with intel_allocator_simple_create_full()
(respecting the gtt size).
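
A short sketch (hypothetical fd/ctx values, not part of this patch)
showing both constructors; the full variant pins the heap to an
explicit range and strategy:

	struct intel_allocator *ial;

	/* Default: full vm minus the last page, high-to-low. */
	ial = intel_allocator_simple_create(fd, ctx);

	/* Explicit range below 4 GiB, allocating low-to-high. */
	ial = intel_allocator_simple_create_full(fd, ctx,
						 0x100000, 1ull << 32,
						 ALLOC_STRATEGY_LOW_TO_HIGH);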

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_simple.c | 748 +++++++++++++++++++++++++++++++++++
 1 file changed, 748 insertions(+)
 create mode 100644 lib/intel_allocator_simple.c

diff --git a/lib/intel_allocator_simple.c b/lib/intel_allocator_simple.c
new file mode 100644
index 000000000..6e02d7c47
--- /dev/null
+++ b/lib/intel_allocator_simple.c
@@ -0,0 +1,748 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "intel_allocator.h"
+#include "intel_bufops.h"
+#include "igt_map.h"
+
+/*
+ * We limit the allocator space to avoid a hang when a batch is
+ * pinned in the last page.
+ */
+#define RESERVED 4096
+
+/* Avoid compilation warning */
+struct intel_allocator *intel_allocator_simple_create(int fd, uint32_t ctx);
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy);
+
+struct simple_vma_heap {
+	struct igt_list_head holes;
+
+	/* If true, simple_vma_heap_alloc will prefer high addresses
+	 *
+	 * Default is true.
+	 */
+	bool alloc_high;
+};
+
+struct simple_vma_hole {
+	struct igt_list_head link;
+	uint64_t offset;
+	uint64_t size;
+};
+
+struct intel_allocator_simple {
+	struct igt_map objects;
+	struct igt_map reserved;
+	struct simple_vma_heap heap;
+
+	uint64_t start;
+	uint64_t end;
+
+	/* statistics */
+	uint64_t total_size;
+	uint64_t allocated_size;
+	uint64_t allocated_objects;
+	uint64_t reserved_size;
+	uint64_t reserved_areas;
+};
+
+struct intel_allocator_record {
+	uint32_t handle;
+	uint64_t offset;
+	uint64_t size;
+};
+
+#define simple_vma_foreach_hole(_hole, _heap) \
+	igt_list_for_each_entry(_hole, &(_heap)->holes, link)
+
+#define simple_vma_foreach_hole_safe(_hole, _heap, _tmp) \
+	igt_list_for_each_entry_safe(_hole, _tmp,  &(_heap)->holes, link)
+
+#define simple_vma_foreach_hole_safe_rev(_hole, _heap, _tmp) \
+	igt_list_for_each_entry_safe_reverse(_hole, _tmp,  &(_heap)->holes, link)
+
+#define GEN8_GTT_ADDRESS_WIDTH 48
+#define DECANONICAL(offset) ((offset) & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
+
+static void simple_vma_heap_validate(struct simple_vma_heap *heap)
+{
+	uint64_t prev_offset = 0;
+	struct simple_vma_hole *hole;
+
+	simple_vma_foreach_hole(hole, heap) {
+		igt_assert(hole->size > 0);
+
+		if (&hole->link == heap->holes.next) {
+		/* This must be the top-most hole.  Assert that,
+		 * if it overflows, it overflows to 0, i.e. 2^64.
+		 */
+			igt_assert(hole->size + hole->offset == 0 ||
+				   hole->size + hole->offset > hole->offset);
+		} else {
+		/* This is not the top-most hole so it must not overflow and,
+		 * in fact, must be strictly lower than the top-most hole.  If
+		 * hole->size + hole->offset == prev_offset, then we failed to
+		 * join holes during a simple_vma_heap_free.
+		 */
+			igt_assert(hole->size + hole->offset > hole->offset &&
+				   hole->size + hole->offset < prev_offset);
+		}
+		prev_offset = hole->offset;
+	}
+}
+
+
+static void simple_vma_heap_free(struct simple_vma_heap *heap,
+				 uint64_t offset, uint64_t size)
+{
+	struct simple_vma_hole *high_hole = NULL, *low_hole = NULL, *hole;
+	bool high_adjacent, low_adjacent;
+
+	/* Freeing something with a size of 0 is not valid. */
+	igt_assert(size > 0);
+
+	/* It's possible for offset + size to wrap around if we touch the top of
+	 * the 64-bit address space, but we cannot go any higher than 2^64.
+	 */
+	igt_assert(offset + size == 0 || offset + size > offset);
+
+	simple_vma_heap_validate(heap);
+
+	/* Find immediately higher and lower holes if they exist. */
+	simple_vma_foreach_hole(hole, heap) {
+		if (hole->offset <= offset) {
+			low_hole = hole;
+			break;
+		}
+		high_hole = hole;
+	}
+
+	if (high_hole)
+		igt_assert(offset + size <= high_hole->offset);
+	high_adjacent = high_hole && offset + size == high_hole->offset;
+
+	if (low_hole) {
+		igt_assert(low_hole->offset + low_hole->size > low_hole->offset);
+		igt_assert(low_hole->offset + low_hole->size <= offset);
+	}
+	low_adjacent = low_hole && low_hole->offset + low_hole->size == offset;
+
+	if (low_adjacent && high_adjacent) {
+		/* Merge the two holes */
+		low_hole->size += size + high_hole->size;
+		igt_list_del(&high_hole->link);
+		free(high_hole);
+	} else if (low_adjacent) {
+		/* Merge into the low hole */
+		low_hole->size += size;
+	} else if (high_adjacent) {
+		/* Merge into the high hole */
+		high_hole->offset = offset;
+		high_hole->size += size;
+	} else {
+		/* Neither hole is adjacent; make a new one */
+		hole = calloc(1, sizeof(*hole));
+		igt_assert(hole);
+
+		hole->offset = offset;
+		hole->size = size;
+		/* Add it after the high hole so we maintain high-to-low
+		 *  ordering
+		 */
+		if (high_hole)
+			igt_list_add(&hole->link, &high_hole->link);
+		else
+			igt_list_add(&hole->link, &heap->holes);
+	}
+
+	simple_vma_heap_validate(heap);
+}
+
+static void simple_vma_heap_init(struct simple_vma_heap *heap,
+				 uint64_t start, uint64_t size,
+				 enum allocator_strategy strategy)
+{
+	IGT_INIT_LIST_HEAD(&heap->holes);
+	simple_vma_heap_free(heap, start, size);
+
+	switch (strategy) {
+	case ALLOC_STRATEGY_LOW_TO_HIGH:
+		heap->alloc_high = false;
+		break;
+	case ALLOC_STRATEGY_HIGH_TO_LOW:
+	default:
+		heap->alloc_high = true;
+	}
+}
+
+static void simple_vma_heap_finish(struct simple_vma_heap *heap)
+{
+	struct simple_vma_hole *hole, *tmp;
+
+	simple_vma_foreach_hole_safe(hole, heap, tmp)
+		free(hole);
+}
+
+static void simple_vma_hole_alloc(struct simple_vma_hole *hole,
+				  uint64_t offset, uint64_t size)
+{
+	struct simple_vma_hole *high_hole;
+	uint64_t waste;
+
+	igt_assert(hole->offset <= offset);
+	igt_assert(hole->size >= offset - hole->offset + size);
+
+	if (offset == hole->offset && size == hole->size) {
+		/* Just get rid of the hole. */
+		igt_list_del(&hole->link);
+		free(hole);
+		return;
+	}
+
+	igt_assert(offset - hole->offset <= hole->size - size);
+	waste = (hole->size - size) - (offset - hole->offset);
+	if (waste == 0) {
+		/* We allocated at the top.  Shrink the hole down. */
+		hole->size -= size;
+		return;
+	}
+
+	if (offset == hole->offset) {
+		/* We allocated at the bottom. Shrink the hole up. */
+		hole->offset += size;
+		hole->size -= size;
+		return;
+	}
+
+   /* We allocated in the middle.  We need to split the old hole into two
+    * holes, one high and one low.
+    */
+	high_hole = calloc(1, sizeof(*hole));
+	igt_assert(high_hole);
+
+	high_hole->offset = offset + size;
+	high_hole->size = waste;
+
+   /* Adjust the hole to be the amount of space left at the bottom of the
+    * original hole.
+    */
+	hole->size = offset - hole->offset;
+
+   /* Place the new hole before the old hole so that the list is in order
+    * from high to low.
+    */
+	igt_list_add_tail(&high_hole->link, &hole->link);
+}
+
+static bool simple_vma_heap_alloc(struct simple_vma_heap *heap,
+				  uint64_t *offset, uint64_t size,
+				  uint64_t alignment)
+{
+	struct simple_vma_hole *hole, *tmp;
+	uint64_t misalign;
+
+	/* The caller is expected to reject zero-size allocations */
+	igt_assert(size > 0);
+	igt_assert(alignment > 0);
+
+	simple_vma_heap_validate(heap);
+
+	if (heap->alloc_high) {
+		simple_vma_foreach_hole_safe(hole, heap, tmp) {
+			if (size > hole->size)
+				continue;
+
+	/* Compute the offset as the highest address where a chunk of the
+	 * given size can be without going over the top of the hole.
+	 *
+	 * This calculation is known to not overflow because we know that
+	 * hole->size + hole->offset can only overflow to 0 and size > 0.
+	 */
+			*offset = (hole->size - size) + hole->offset;
+
+	  /* Align the offset.  We align down and not up because we
+	   * are allocating from the top of the hole and not the
+	   * bottom.
+	   */
+			*offset = (*offset / alignment) * alignment;
+
+			if (*offset < hole->offset)
+				continue;
+
+			simple_vma_hole_alloc(hole, *offset, size);
+			simple_vma_heap_validate(heap);
+			return true;
+		}
+	} else {
+		simple_vma_foreach_hole_safe_rev(hole, heap, tmp) {
+			if (size > hole->size)
+				continue;
+
+			*offset = hole->offset;
+
+			/* Align the offset */
+			misalign = *offset % alignment;
+			if (misalign) {
+				uint64_t pad = alignment - misalign;
+
+				if (pad > hole->size - size)
+					continue;
+
+				*offset += pad;
+			}
+
+			simple_vma_hole_alloc(hole, *offset, size);
+			simple_vma_heap_validate(heap);
+			return true;
+		}
+	}
+
+	/* Failed to allocate */
+	return false;
+}
+
+static void intel_allocator_simple_get_address_range(struct intel_allocator *ial,
+						     uint64_t *startp,
+						     uint64_t *endp)
+{
+	struct intel_allocator_simple *ials = ial->priv;
+
+	if (startp)
+		*startp = ials->start;
+
+	if (endp)
+		*endp = ials->end;
+}
+
+static bool simple_vma_heap_alloc_addr(struct intel_allocator_simple *ials,
+				       uint64_t offset, uint64_t size)
+{
+	struct simple_vma_heap *heap = &ials->heap;
+	struct simple_vma_hole *hole, *tmp;
+	/* Allocating something with a size of 0 is not valid. */
+	igt_assert(size > 0);
+
+	/* It's possible for offset + size to wrap around if we touch the top of
+	 * the 64-bit address space, but we cannot go any higher than 2^64.
+	 */
+	igt_assert(offset + size == 0 || offset + size > offset);
+
+	/* Find the hole if one exists. */
+	simple_vma_foreach_hole_safe(hole, heap, tmp) {
+		if (hole->offset > offset)
+			continue;
+
+	/* Holes are ordered high-to-low so the first hole we find with
+	 * hole->offset <= offset is our hole.  If it's not big enough to
+	 * contain the requested range, then the allocation fails.
+	 */
+		igt_assert(hole->offset <= offset);
+		if (hole->size < offset - hole->offset + size)
+			return false;
+
+		simple_vma_hole_alloc(hole, offset, size);
+		return true;
+	}
+
+	/* We didn't find a suitable hole */
+	return false;
+}
+
+static uint64_t intel_allocator_simple_alloc(struct intel_allocator *ial,
+					     uint32_t handle, uint64_t size,
+					     uint64_t alignment)
+{
+	struct intel_allocator_record *rec;
+	struct intel_allocator_simple *ials;
+	uint64_t offset;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	igt_assert(handle);
+	alignment = alignment > 0 ? alignment : 1;
+	rec = igt_map_find(&ials->objects, &handle);
+	if (rec) {
+		offset = rec->offset;
+		igt_assert(rec->size == size);
+	} else {
+		igt_assert(simple_vma_heap_alloc(&ials->heap, &offset,
+						 size, alignment));
+		rec = malloc(sizeof(*rec));
+		rec->handle = handle;
+		rec->offset = offset;
+		rec->size = size;
+
+		igt_map_add(&ials->objects, &rec->handle, rec);
+		ials->allocated_objects++;
+		ials->allocated_size += size;
+	}
+
+	return offset;
+}
+
+static bool intel_allocator_simple_free(struct intel_allocator *ial, uint32_t handle)
+{
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	rec = igt_map_del(&ials->objects, &handle);
+	if (rec) {
+		simple_vma_heap_free(&ials->heap, rec->offset, rec->size);
+		ials->allocated_objects--;
+		ials->allocated_size -= rec->size;
+		free(rec);
+
+		return true;
+	}
+
+	return false;
+}
+
+static inline bool __same(const struct intel_allocator_record *rec,
+			  uint32_t handle, uint64_t size, uint64_t offset)
+{
+	return rec->handle == handle && rec->size == size &&
+			DECANONICAL(rec->offset) == DECANONICAL(offset);
+}
+
+static bool intel_allocator_simple_is_allocated(struct intel_allocator *ial,
+						uint32_t handle, uint64_t size,
+						uint64_t offset)
+{
+	struct intel_allocator_record *rec;
+	struct intel_allocator_simple *ials;
+	bool same = false;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	igt_assert(handle);
+
+	rec = igt_map_find(&ials->objects, &handle);
+	if (rec && __same(rec, handle, size, offset))
+		same = true;
+
+	return same;
+}
+
+static bool intel_allocator_simple_reserve(struct intel_allocator *ial,
+					   uint32_t handle,
+					   uint64_t start, uint64_t end)
+{
+	uint64_t size = end - start;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+	igt_assert(end > start || end == 0);
+
+	if (simple_vma_heap_alloc_addr(ials, start, size)) {
+		rec = malloc(sizeof(*rec));
+		rec->handle = handle;
+		rec->offset = start;
+		rec->size = size;
+
+		igt_map_add(&ials->reserved, &rec->offset, rec);
+
+		ials->reserved_areas++;
+		ials->reserved_size += rec->size;
+		return true;
+	}
+
+	igt_debug("Failed to reserve %llx + %llx\n", (long long)start, (long long)size);
+	return false;
+}
+
+static bool intel_allocator_simple_unreserve(struct intel_allocator *ial,
+					     uint32_t handle,
+					     uint64_t start, uint64_t end)
+{
+	uint64_t size = end - start;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+
+	igt_assert(end > start || end == 0);
+
+	rec = igt_map_find(&ials->reserved, &start);
+
+	if (!rec) {
+		igt_warn("Only reserved blocks can be unreserved\n");
+		return false;
+	}
+
+	if (rec->size != size) {
+		igt_warn("Only the whole block unreservation allowed\n");
+		return false;
+	}
+
+	if (rec->handle != handle) {
+		igt_warn("Handle %u doesn't match reservation handle: %u\n",
+			 rec->handle, handle);
+		return false;
+	}
+
+	igt_map_del(&ials->reserved, &start);
+
+	ials->reserved_areas--;
+	ials->reserved_size -= rec->size;
+	free(rec);
+	simple_vma_heap_free(&ials->heap, start, size);
+
+	return true;
+}
+
+static bool intel_allocator_simple_is_reserved(struct intel_allocator *ial,
+					       uint64_t start, uint64_t end)
+{
+	uint64_t size = end - start;
+	struct intel_allocator_record *rec = NULL;
+	struct intel_allocator_simple *ials;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+
+	/* clear [63:48] bits to get rid of canonical form */
+	start = DECANONICAL(start);
+	end = DECANONICAL(end);
+
+	igt_assert(end > start || end == 0);
+
+	rec = igt_map_find(&ials->reserved, &start);
+
+	if (!rec)
+		return false;
+
+	if (rec->offset == start && rec->size == size)
+		return true;
+
+	return false;
+}
+
+static bool equal_8bytes(const void *key1, const void *key2)
+{
+	const uint64_t *k1 = key1, *k2 = key2;
+	return *k1 == *k2;
+}
+
+static void intel_allocator_simple_destroy(struct intel_allocator *ial)
+{
+	struct intel_allocator_simple *ials;
+	struct igt_map_entry *pos;
+	struct igt_map *map;
+	int i;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	simple_vma_heap_finish(&ials->heap);
+
+	map = &ials->objects;
+	igt_map_for_each(map, i, pos)
+		free(pos->value);
+	igt_map_free(&ials->objects);
+
+	map = &ials->reserved;
+	igt_map_for_each(map, i, pos)
+		free(pos->value);
+	igt_map_free(&ials->reserved);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_simple_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_simple *ials = ial->priv;
+
+	igt_debug("<fd: %d, ctx: %u> objects: %" PRId64
+		  ", reserved_areas: %" PRId64 "\n",
+		  ial->fd, ial->ctx,
+		  ials->allocated_objects, ials->reserved_areas);
+
+	return !ials->allocated_objects && !ials->reserved_areas;
+}
+
+static void intel_allocator_simple_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_simple *ials;
+	struct simple_vma_hole *hole;
+	struct simple_vma_heap *heap;
+	struct igt_map_entry *pos;
+	struct igt_map *map;
+	uint64_t total_free = 0, allocated_size = 0, allocated_objects = 0;
+	uint64_t reserved_size = 0, reserved_areas = 0;
+	int i;
+
+	igt_assert(ial);
+	ials = (struct intel_allocator_simple *) ial->priv;
+	igt_assert(ials);
+	heap = &ials->heap;
+
+	igt_info("intel_allocator_simple <fd:%d ctx:%d> on "
+		 "[0x%"PRIx64" : 0x%"PRIx64"]:\n", ial->fd, ial->ctx,
+		 ials->start, ials->end);
+
+	if (full) {
+		igt_info("holes:\n");
+		simple_vma_foreach_hole(hole, heap) {
+			igt_info("offset = %"PRIu64" (0x%"PRIx64", "
+				 "size = %"PRIu64" (0x%"PRIx64")\n",
+				 hole->offset, hole->offset, hole->size,
+				 hole->size);
+			total_free += hole->size;
+		}
+		igt_assert(total_free <= ials->total_size);
+		igt_info("total_free: %" PRIx64
+			 ", total_size: %" PRIx64
+			 ", allocated_size: %" PRIx64
+			 ", reserved_size: %" PRIx64 "\n",
+			 total_free, ials->total_size, ials->allocated_size,
+			 ials->reserved_size);
+		igt_assert(total_free ==
+			   ials->total_size - ials->allocated_size - ials->reserved_size);
+
+		igt_info("objects:\n");
+		map = &ials->objects;
+		igt_map_for_each(map, i, pos) {
+			struct intel_allocator_record *rec = pos->value;
+
+			igt_info("handle = %d, offset = %"PRIu64" "
+				"(0x%"PRIx64", size = %"PRIu64" (0x%"PRIx64")\n",
+				 rec->handle, rec->offset, rec->offset,
+				 rec->size, rec->size);
+			allocated_objects++;
+			allocated_size += rec->size;
+		}
+		igt_assert(ials->allocated_size == allocated_size);
+		igt_assert(ials->allocated_objects == allocated_objects);
+
+		igt_info("reserved areas:\n");
+		map = &ials->reserved;
+		igt_map_for_each(map, i, pos) {
+			struct intel_allocator_record *rec = pos->value;
+
+			igt_info("offset = %"PRIu64" (0x%"PRIx64", "
+				 "size = %"PRIu64" (0x%"PRIx64")\n",
+				 rec->offset, rec->offset,
+				 rec->size, rec->size);
+			reserved_areas++;
+			reserved_size += rec->size;
+		}
+		igt_assert(ials->reserved_areas == reserved_areas);
+		igt_assert(ials->reserved_size == reserved_size);
+	} else {
+		simple_vma_foreach_hole(hole, heap)
+			total_free += hole->size;
+	}
+
+	igt_info("free space: %"PRIu64"B (0x%"PRIx64") (%.2f%% full)\n"
+		 "allocated objects: %"PRIu64", reserved areas: %"PRIu64"\n",
+		 total_free, total_free,
+		 ((double) (ials->total_size - total_free) /
+		  (double) ials->total_size) * 100,
+		 ials->allocated_objects, ials->reserved_areas);
+}
+
+static struct intel_allocator *
+__intel_allocator_simple_create(int fd, uint32_t ctx,
+				uint64_t start, uint64_t end,
+				enum allocator_strategy strategy)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_simple *ials;
+
+	igt_debug("Using simple allocator <fd: %d, ctx: %u>\n", fd, ctx);
+
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->ctx = ctx;
+	ial->get_address_range = intel_allocator_simple_get_address_range;
+	ial->alloc = intel_allocator_simple_alloc;
+	ial->free = intel_allocator_simple_free;
+	ial->is_allocated = intel_allocator_simple_is_allocated;
+	ial->reserve = intel_allocator_simple_reserve;
+	ial->unreserve = intel_allocator_simple_unreserve;
+	ial->is_reserved = intel_allocator_simple_is_reserved;
+	ial->destroy = intel_allocator_simple_destroy;
+	ial->is_empty = intel_allocator_simple_is_empty;
+	ial->print = intel_allocator_simple_print;
+	ials = ial->priv = malloc(sizeof(struct intel_allocator_simple));
+	igt_assert(ials);
+
+	igt_map_init(&ials->objects);
+	/* Reserved addresses hashtable is indexed by an offset */
+	__igt_map_init(&ials->reserved, equal_8bytes, NULL, 3);
+
+	ials->start = start;
+	ials->end = end;
+	ials->total_size = end - start;
+	simple_vma_heap_init(&ials->heap, ials->start, ials->total_size,
+			     strategy);
+
+	ials->allocated_size = 0;
+	ials->allocated_objects = 0;
+	ials->reserved_size = 0;
+	ials->reserved_areas = 0;
+
+	return ial;
+}
+
+struct intel_allocator *
+intel_allocator_simple_create(int fd, uint32_t ctx)
+{
+	uint64_t gtt_size = gem_aperture_size(fd);
+
+	if (!gem_uses_full_ppgtt(fd))
+		gtt_size /= 2;
+	else
+		gtt_size -= RESERVED;
+
+	return __intel_allocator_simple_create(fd, ctx, 0, gtt_size,
+					       ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy)
+{
+	uint64_t gtt_size = gem_aperture_size(fd);
+
+	igt_assert(end <= gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		gtt_size /= 2;
+	igt_assert(end - start <= gtt_size);
+
+	return __intel_allocator_simple_create(fd, ctx, start, end, strategy);
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 07/35] lib/intel_allocator_random: Add random allocator
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC
  To: igt-dev

Sometimes we want to experiment with addresses, so randomizing them
can help us a little.
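
A short sketch (hypothetical, relying on the allocator-core API added
later in this series): a test picks this backend by allocator type when
opening an allocator for a context:

	uint64_t ahnd, offset;

	ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_RANDOM);
	offset = intel_allocator_alloc(ahnd, handle, size, alignment);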

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_random.c | 204 +++++++++++++++++++++++++++++++++++
 1 file changed, 204 insertions(+)
 create mode 100644 lib/intel_allocator_random.c

diff --git a/lib/intel_allocator_random.c b/lib/intel_allocator_random.c
new file mode 100644
index 000000000..15b930af1
--- /dev/null
+++ b/lib/intel_allocator_random.c
@@ -0,0 +1,204 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include "igt.h"
+#include "igt_x86.h"
+#include "igt_rand.h"
+#include "intel_allocator.h"
+
+struct intel_allocator *intel_allocator_random_create(int fd, uint32_t ctx);
+
+struct intel_allocator_random {
+	uint64_t bias;
+	uint32_t prng;
+	uint64_t gtt_size;
+	uint64_t start;
+	uint64_t end;
+
+	/* statistics */
+	uint64_t allocated_objects;
+};
+
+#define GEN8_HIGH_ADDRESS_BIT 47
+static uint64_t gen8_canonical_addr(uint64_t address)
+{
+	int shift = 63 - GEN8_HIGH_ADDRESS_BIT;
+
+	return (int64_t)(address << shift) >> shift;
+}
+
+static uint64_t get_bias(int fd)
+{
+	(void) fd;
+
+	return 256 << 10;
+}
+
+static void intel_allocator_random_get_address_range(struct intel_allocator *ial,
+						     uint64_t *startp,
+						     uint64_t *endp)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	if (startp)
+		*startp = ialr->start;
+
+	if (endp)
+		*endp = ialr->end;
+}
+
+static uint64_t intel_allocator_random_alloc(struct intel_allocator *ial,
+					     uint32_t handle, uint64_t size,
+					     uint64_t alignment)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+	uint64_t offset;
+
+	(void) handle;
+
+	/* randomize the address, we try to avoid relocations */
+	offset = hars_petruska_f54_1_random64(&ialr->prng);
+	offset += ialr->bias; /* Keep the low 256k clear, for negative deltas */
+	offset &= ialr->gtt_size - 1;
+	offset &= ~(alignment - 1);
+	offset = gen8_canonical_addr(offset);
+
+	ialr->allocated_objects++;
+
+	return offset;
+}
+
+static bool intel_allocator_random_free(struct intel_allocator *ial,
+					uint32_t handle)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	(void) handle;
+
+	ialr->allocated_objects--;
+
+	return false;
+}
+
+static bool intel_allocator_random_is_allocated(struct intel_allocator *ial,
+						uint32_t handle, uint64_t size,
+						uint64_t offset)
+{
+	(void) ial;
+	(void) handle;
+	(void) size;
+	(void) offset;
+
+	return false;
+}
+
+static void intel_allocator_random_destroy(struct intel_allocator *ial)
+{
+	igt_assert(ial);
+
+	free(ial->priv);
+	free(ial);
+}
+
+static bool intel_allocator_random_reserve(struct intel_allocator *ial,
+					   uint32_t handle,
+					   uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_random_unreserve(struct intel_allocator *ial,
+					     uint32_t handle,
+					     uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) handle;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static bool intel_allocator_random_is_reserved(struct intel_allocator *ial,
+					       uint64_t start, uint64_t end)
+{
+	(void) ial;
+	(void) start;
+	(void) end;
+
+	return false;
+}
+
+static void intel_allocator_random_print(struct intel_allocator *ial, bool full)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	(void) full;
+
+	igt_info("<fd: %d, ctx: %u> allocated objects: %" PRId64 "\n",
+		 ial->fd, ial->ctx, ialr->allocated_objects);
+}
+
+static bool intel_allocator_random_is_empty(struct intel_allocator *ial)
+{
+	struct intel_allocator_random *ialr = ial->priv;
+
+	return !ialr->allocated_objects;
+}
+
+struct intel_allocator *intel_allocator_random_create(int fd, uint32_t ctx)
+{
+	struct intel_allocator *ial;
+	struct intel_allocator_random *ialr;
+
+	igt_debug("Using random allocator\n");
+	ial = calloc(1, sizeof(*ial));
+	igt_assert(ial);
+
+	ial->fd = fd;
+	ial->ctx = ctx;
+	ial->get_address_range = intel_allocator_random_get_address_range;
+	ial->alloc = intel_allocator_random_alloc;
+	ial->free = intel_allocator_random_free;
+	ial->is_allocated = intel_allocator_random_is_allocated;
+	ial->reserve = intel_allocator_random_reserve;
+	ial->unreserve = intel_allocator_random_unreserve;
+	ial->is_reserved = intel_allocator_random_is_reserved;
+	ial->destroy = intel_allocator_random_destroy;
+	ial->print = intel_allocator_random_print;
+	ial->is_empty = intel_allocator_random_is_empty;
+
+	ialr = ial->priv = calloc(1, sizeof(*ialr));
+	igt_assert(ial->priv);
+	ialr->prng = (uint32_t) to_user_pointer(ial);
+	ialr->gtt_size = gem_aperture_size(fd);
+	igt_debug("Gtt size: %" PRIu64 "\n", ialr->gtt_size);
+	if (!gem_uses_full_ppgtt(fd))
+		ialr->gtt_size /= 2;
+
+	if ((ialr->gtt_size - 1) >> 32) {
+		/*
+		 * We're not aware of bo sizes, so limiting to 46 bits makes
+		 * sure we won't produce addresses with bit 47 set
+		 * (we use 32-bit sizes now, so we still fit within the
+		 * 47-bit address space).
+		 */
+		if (ialr->gtt_size & (3ull << 47))
+			ialr->gtt_size = (1ull << 46);
+	}
+	ialr->bias = get_bias(fd);
+	ialr->start = ialr->bias;
+	ialr->end = ialr->gtt_size;
+
+	ialr->allocated_objects = 0;
+
+	return ial;
+}
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 08/35] lib/intel_allocator: Add intel_allocator core
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (6 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 07/35] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
                   ` (28 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

For discrete gens we have to stop using relocations when batch
buffers are submitted to the GPU. Cards which have ppgtt can use softpin
and establish addresses on their own.
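
In execbuf terms that means passing an allocator-provided offset with
the object marked as pinned, roughly like this (a fragment only; handle,
size and ahnd come from the surrounding test):

	struct drm_i915_gem_exec_object2 obj = {
		.handle = handle,
		.offset = intel_allocator_alloc(ahnd, handle, size, 4096),
		/* the address is ours and must not be moved by the kernel */
		.flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS,
	};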

We added a simple allocator (taken from Mesa; works on lists) and a
random allocator to exercise batches with different addresses. All of
that works for a single VM (context), so we have to add an additional
layer (intel_allocator) to support multiprocessing / multithreading.

For the main IGT process (also for threads created in it) intel_allocator
resolves addresses "locally", just by mutexing access to the global
allocator data (the allocators map). When fork() is in use, children cannot
establish addresses on their own and have to contact the thread spawned
within the main IGT process. Currently a SysV IPC message queue was
chosen as the communication channel between children and the allocator
thread. A child calls the same functions as the main IGT process; only the
communication path is taken instead of acquiring addresses locally.
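
For forking tests the expected pattern is then (an illustrative sketch;
the names match the API added in this patch):

	intel_allocator_multiprocess_start();

	igt_fork(child, 8) {
		/* in a child this goes over the SysV message queue */
		uint64_t ahnd = intel_allocator_open(fd, 0,
						     INTEL_ALLOCATOR_SIMPLE);
		uint64_t offset = intel_allocator_alloc(ahnd, handle,
							4096, 4096);
		...
		intel_allocator_close(ahnd);
	}
	igt_waitchildren();

	intel_allocator_multiprocess_stop();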

v2:

Add intel_allocator_open_full() to allow user pass vm range.
Add strategy: NONE, LOW_TO_HIGH, HIGH_TO_LOW passed to allocator backend.
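
E.g. (a hypothetical vm range, sketch only):

	ahnd = intel_allocator_open_full(fd, ctx, 0x10000, 1ull << 32,
					 INTEL_ALLOCATOR_SIMPLE,
					 ALLOC_STRATEGY_LOW_TO_HIGH);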

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
---
 .../igt-gpu-tools/igt-gpu-tools-docs.xml      |   1 +
 lib/Makefile.sources                          |   6 +
 lib/igt_core.c                                |  14 +
 lib/intel_allocator.c                         | 995 ++++++++++++++++++
 lib/intel_allocator.h                         | 161 +++
 lib/intel_allocator_msgchannel.c              | 187 ++++
 lib/intel_allocator_msgchannel.h              | 149 +++
 lib/meson.build                               |   4 +
 8 files changed, 1517 insertions(+)
 create mode 100644 lib/intel_allocator.c
 create mode 100644 lib/intel_allocator.h
 create mode 100644 lib/intel_allocator_msgchannel.c
 create mode 100644 lib/intel_allocator_msgchannel.h

diff --git a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
index bf5ac5428..192d1df7a 100644
--- a/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
+++ b/docs/reference/igt-gpu-tools/igt-gpu-tools-docs.xml
@@ -43,6 +43,7 @@
     <xi:include href="xml/igt_vc4.xml"/>
     <xi:include href="xml/igt_vgem.xml"/>
     <xi:include href="xml/igt_x86.xml"/>
+    <xi:include href="xml/intel_allocator.xml"/>
     <xi:include href="xml/intel_batchbuffer.xml"/>
     <xi:include href="xml/intel_bufops.xml"/>
     <xi:include href="xml/intel_chipset.xml"/>
diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index 84fd7b49c..d11876cce 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -121,6 +121,12 @@ lib_source_list =	 	\
 	surfaceformat.h		\
 	sw_sync.c		\
 	sw_sync.h		\
+	intel_allocator.c	\
+	intel_allocator.h	\
+	intel_allocator_random.c	\
+	intel_allocator_simple.c	\
+	intel_allocator_msgchannel.c	\
+	intel_allocator_msgchannel.h	\
 	intel_aux_pgtable.c	\
 	intel_reg_map.c		\
 	intel_iosf.c		\
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 2b4182f16..6597acfaa 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -58,6 +58,7 @@
 #include <glib.h>
 
 #include "drmtest.h"
+#include "intel_allocator.h"
 #include "intel_chipset.h"
 #include "intel_io.h"
 #include "igt_debugfs.h"
@@ -1412,6 +1413,19 @@ __noreturn static void exit_subtest(const char *result)
 	}
 	num_test_children = 0;
 
+	/*
+	 * When a test completes - mostly in the fail state - it can leave
+	 * allocated objects behind. The allocator is no exception, as it is
+	 * a global IGT entity: when a test allocates some ranges and then
+	 * fails, free/close will likely never be called (guarding against
+	 * potential failures and cleaning up before assertions is not
+	 * common in IGT).
+	 *
+	 * We therefore call intel_allocator_init() to prepare the allocator
+	 * infrastructure from scratch for each test. Init also removes
+	 * remnants of any previous allocator run.
+	 */
+	intel_allocator_init();
+
 	if (!in_dynamic_subtest)
 		_igt_dynamic_tests_executed = -1;
 
diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
new file mode 100644
index 000000000..4d053c381
--- /dev/null
+++ b/lib/intel_allocator.c
@@ -0,0 +1,995 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <signal.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include "igt.h"
+#include "igt_map.h"
+#include "intel_allocator.h"
+#include "intel_allocator_msgchannel.h"
+
+//#define ALLOCDBG
+#ifdef ALLOCDBG
+#define alloc_info igt_info
+#define alloc_debug igt_debug
+static const char *reqtype_str[] = {
+	[REQ_STOP]		= "stop",
+	[REQ_OPEN]		= "open",
+	[REQ_CLOSE]		= "close",
+	[REQ_ADDRESS_RANGE]	= "address range",
+	[REQ_ALLOC]		= "alloc",
+	[REQ_FREE]		= "free",
+	[REQ_IS_ALLOCATED]	= "is allocated",
+	[REQ_RESERVE]		= "reserve",
+	[REQ_UNRESERVE]		= "unreserve",
+	[REQ_RESERVE_IF_NOT_ALLOCATED] = "reserve-ina",
+	[REQ_IS_RESERVED]	= "is reserved",
+};
+static inline const char *reqstr(enum reqtype request_type)
+{
+	igt_assert(request_type >= REQ_STOP && request_type <= REQ_IS_RESERVED);
+	return reqtype_str[request_type];
+}
+#else
+#define alloc_info(...) {}
+#define alloc_debug(...) {}
+#endif
+
+struct intel_allocator *intel_allocator_random_create(int fd, uint32_t ctx);
+struct intel_allocator *intel_allocator_simple_create(int fd, uint32_t ctx);
+struct intel_allocator *
+intel_allocator_simple_create_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   enum allocator_strategy strategy);
+
+static struct igt_map *allocators_map;
+static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
+static bool multiprocess;
+static pthread_t allocator_thread;
+
+static bool warn_if_not_empty;
+
+/* For allocator purposes we need to track pid/tid */
+static pid_t allocator_pid = -1;
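+/* child_pid stays -1 in the main IGT process; child_tid is cached per
+ * thread on the first intel_allocator_open() call */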
+extern pid_t child_pid;
+extern __thread pid_t child_tid;
+
+static struct msg_channel *channel;
+
+static int send_alloc_stop(struct msg_channel *msgchan)
+{
+	struct alloc_req req = {0};
+
+	req.request_type = REQ_STOP;
+
+	return msgchan->send_req(msgchan, &req);
+}
+
+static int send_req(struct msg_channel *msgchan, pid_t tid,
+		    struct alloc_req *request)
+{
+	request->tid = tid;
+	return msgchan->send_req(msgchan, request);
+}
+
+static int recv_req(struct msg_channel *msgchan, struct alloc_req *request)
+{
+	return msgchan->recv_req(msgchan, request);
+}
+
+static int send_resp(struct msg_channel *msgchan,
+		     pid_t tid, struct alloc_resp *response)
+{
+	response->tid = tid;
+	return msgchan->send_resp(msgchan, response);
+}
+
+static int recv_resp(struct msg_channel *msgchan,
+		     pid_t tid, struct alloc_resp *response)
+{
+	response->tid = tid;
+	return msgchan->recv_resp(msgchan, response);
+}
+
+static struct intel_allocator *intel_allocator_create(int fd, uint32_t ctx,
+						      uint64_t start, uint64_t end,
+						      uint8_t allocator_type,
+						      uint8_t allocator_strategy)
+{
+	struct intel_allocator *ial = NULL;
+
+	switch (allocator_type) {
+	/*
+	 * A few words of explanation are required here.
+	 *
+	 * INTEL_ALLOCATOR_NONE allows keeping the information in the code
+	 * (intel-bb is an example) that we're not using the IGT allocator
+	 * itself and likely rely on relocations.
+	 * So trying to create a NONE allocator doesn't make sense and the
+	 * assertion below catches such invalid usage.
+	 */
+	case INTEL_ALLOCATOR_NONE:
+		igt_assert_f(allocator_type != INTEL_ALLOCATOR_NONE,
+			     "We cannot use NONE allocator\n");
+		break;
+	case INTEL_ALLOCATOR_RANDOM:
+		ial = intel_allocator_random_create(fd, ctx);
+		break;
+	case INTEL_ALLOCATOR_SIMPLE:
+		if (!start && !end)
+			ial = intel_allocator_simple_create(fd, ctx);
+		else
+			ial = intel_allocator_simple_create_full(fd, ctx,
+								 start, end,
+								 allocator_strategy);
+		break;
+	default:
+		igt_assert_f(ial, "Allocator type %d not implemented\n",
+			     allocator_type);
+		break;
+	}
+
+	ial->type = allocator_type;
+	ial->strategy = allocator_strategy;
+	atomic_fetch_add(&ial->refcount, 1);
+	pthread_mutex_init(&ial->mutex, NULL);
+
+	igt_map_add(allocators_map, ial, ial);
+
+	return ial;
+}
+
+static void intel_allocator_destroy(struct intel_allocator *ial)
+{
+	alloc_info("Destroying allocator (empty: %d)\n",
+		   ial->is_empty(ial));
+
+	ial->destroy(ial);
+}
+
+static struct intel_allocator *__allocator_get(int fd, uint32_t ctx)
+{
+	struct intel_allocator *ial, ials = { .fd = fd, .ctx = ctx };
+	int refcount;
+
+	ial = igt_map_find(allocators_map, &ials);
+	if (!ial)
+		goto out_get;
+
+	refcount = atomic_fetch_add(&ial->refcount, 1);
+	igt_assert(refcount > 0);
+
+out_get:
+
+	return ial;
+}
+
+static bool __allocator_put(struct intel_allocator *ial)
+{
+	struct intel_allocator ials = { .fd = ial->fd, .ctx = ial->ctx };
+	bool released = false;
+	int refcount;
+
+	ial = igt_map_find(allocators_map, &ials);
+	igt_assert(ial);
+
+	refcount = atomic_fetch_sub(&ial->refcount, 1);
+	alloc_debug("Refcount: %d\n", refcount);
+	igt_assert(refcount >= 1);
+	if (refcount == 1) {
+		igt_map_del(allocators_map, ial);
+
+		if (!ial->is_empty(ial) && warn_if_not_empty)
+			igt_warn("Allocator not clear before destroy!\n");
+
+		released = true;
+	}
+
+	return released;
+}
+
+static struct intel_allocator *allocator_open(int fd, uint32_t ctx,
+					      uint64_t start, uint64_t end,
+					      uint8_t allocator_type,
+					      uint8_t allocator_strategy)
+{
+	struct intel_allocator *ial;
+
+	pthread_mutex_lock(&map_mutex);
+	ial = __allocator_get(fd, ctx);
+	if (!ial) {
+		alloc_debug("Allocator fd: %d, ctx: %u, <0x%llx : 0x%llx> "
+			    "not found, creating one\n",
+			    fd, ctx, (long long) start, (long long) end);
+		ial = intel_allocator_create(fd, ctx, start, end,
+					     allocator_type, allocator_strategy);
+	}
+	pthread_mutex_unlock(&map_mutex);
+
+	igt_assert_f(ial->type == allocator_type,
+		     "Allocator type must be the same for fd/ctx\n");
+	igt_assert_f(ial->strategy == allocator_strategy,
+		     "Allocator strategy must be the same for fd/ctx\n");
+
+	return ial;
+}
+
+static bool allocator_close(uint64_t allocator_handle)
+{
+	struct intel_allocator *ial = from_user_pointer(allocator_handle);
+	bool released, is_empty = false;
+
+	igt_assert(ial);
+
+	pthread_mutex_lock(&map_mutex);
+
+	released = __allocator_put(ial);
+	if (released) {
+		is_empty = ial->is_empty(ial);
+		intel_allocator_destroy(ial);
+	}
+
+	pthread_mutex_unlock(&map_mutex);
+
+	return is_empty;
+}
+
+static int send_req_recv_resp(struct msg_channel *msgchan,
+			      struct alloc_req *request,
+			      struct alloc_resp *response)
+{
+	int ret;
+
+	ret = send_req(msgchan, child_tid, request);
+	if (ret < 0) {
+		igt_warn("Error sending request [type: %d]: err = %d [%s]\n",
+			 request->request_type, errno, strerror(errno));
+
+		return ret;
+	}
+
+	ret = recv_resp(msgchan, child_tid, response);
+	if (ret < 0)
+		igt_warn("Error receiving response [type: %d]: err = %d [%s]\n",
+			 request->request_type, errno, strerror(errno));
+
+	/*
+	 * The main assumption is that we receive a message whose size is
+	 * > 0. If this is fulfilled we return 0 as success.
+	 */
+	if (ret > 0)
+		ret = 0;
+
+	return ret;
+}
+
+static int handle_request(struct alloc_req *req, struct alloc_resp *resp)
+{
+	bool same_process = child_pid == -1;
+	int ret;
+
+	memset(resp, 0, sizeof(*resp));
+
+	if (same_process) {
+		struct intel_allocator *ial;
+		uint64_t start, end, size;
+		bool allocated, reserved, unreserved;
+
+		/* The mutex only guards an allocator instance, not stop/open/close */
+		if (req->request_type > REQ_CLOSE) {
+			ial = from_user_pointer(req->allocator_handle);
+			igt_assert(ial);
+
+			pthread_mutex_lock(&ial->mutex);
+		}
+
+		switch (req->request_type) {
+		case REQ_STOP:
+			alloc_info("<stop>\n");
+			break;
+
+		case REQ_OPEN:
+			ial = allocator_open(req->open.fd, req->open.ctx,
+					     req->open.start, req->open.end,
+					     req->open.allocator_type,
+					     req->open.allocator_strategy);
+			igt_assert(ial);
+
+			resp->response_type = RESP_OPEN;
+			resp->open.allocator_handle = to_user_pointer(ial);
+			alloc_info("<open> [tid: %ld] fd: %d, ctx: %u, alloc_type: %u, "
+				   "ahnd: %p, refcnt: %d\n",
+				   (long) req->tid, req->open.fd, req->open.ctx,
+				   req->open.allocator_type, ial,
+				   atomic_load(&ial->refcount));
+			break;
+
+		case REQ_CLOSE:
+			ial = from_user_pointer(req->allocator_handle);
+			igt_assert(ial);
+
+			resp->response_type = RESP_CLOSE;
+			ret = atomic_load(&ial->refcount);
+			resp->close.is_empty = allocator_close(req->allocator_handle);
+			alloc_info("<close> [tid: %ld] ahnd: %p, is_empty: %d, refcnt: %d\n",
+				   (long) req->tid, ial, resp->close.is_empty, ret);
+			break;
+
+		case REQ_ADDRESS_RANGE:
+			resp->response_type = RESP_ADDRESS_RANGE;
+			ial->get_address_range(ial, &start, &end);
+			resp->address_range.start = start;
+			resp->address_range.end = end;
+			alloc_info("<address range> [tid: %ld] "
+				   "start: 0x%" PRIx64 ", end: 0x%" PRIx64 "\n",
+				   (long) req->tid, start, end);
+			break;
+
+		case REQ_ALLOC:
+			resp->response_type = RESP_ALLOC;
+			resp->alloc.offset = ial->alloc(ial,
+							req->alloc.handle,
+							req->alloc.size,
+							req->alloc.alignment);
+			alloc_info("<alloc> [tid: %ld] handle: %u, "
+				   "size: 0x%" PRIx64 ", offset: 0x%" PRIx64
+				   ", alignment: 0x%" PRIx64 "\n",
+				   (long) req->tid,
+				   req->alloc.handle, req->alloc.size,
+				   resp->alloc.offset, req->alloc.alignment);
+			break;
+
+		case REQ_FREE:
+			resp->response_type = RESP_FREE;
+			resp->free.freed = ial->free(ial, req->free.handle);
+			alloc_info("<free> [tid: %ld] handle: %u, freed: %d\n",
+				   (long) req->tid, req->free.handle,
+				   resp->free.freed);
+			break;
+
+		case REQ_IS_ALLOCATED:
+			resp->response_type = RESP_IS_ALLOCATED;
+			allocated = ial->is_allocated(ial,
+						      req->is_allocated.handle,
+						      req->is_allocated.size,
+						      req->is_allocated.offset);
+			resp->is_allocated.allocated = allocated;
+			alloc_info("<is allocated> [tid: %ld] offset: 0x%" PRIx64
+				   ", allocated: %d\n", (long) req->tid,
+				   req->is_allocated.offset, allocated);
+			break;
+
+		case REQ_RESERVE:
+			resp->response_type = RESP_RESERVE;
+			reserved = ial->reserve(ial,
+						req->reserve.handle,
+						req->reserve.start,
+						req->reserve.end);
+			resp->reserve.reserved = reserved;
+			alloc_info("<reserve> [tid: %ld] handle: %u, "
+				   "start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", reserved: %d\n",
+				   (long) req->tid, req->reserve.handle,
+				   req->reserve.start, req->reserve.end, reserved);
+			break;
+
+		case REQ_UNRESERVE:
+			resp->response_type = RESP_UNRESERVE;
+			unreserved = ial->unreserve(ial,
+						    req->unreserve.handle,
+						    req->unreserve.start,
+						    req->unreserve.end);
+			resp->unreserve.unreserved = unreserved;
+			alloc_info("<unreserve> [tid: %ld] handle: %u, "
+				   "start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", unreserved: %d\n",
+				   (long) req->tid, req->unreserve.handle,
+				   req->unreserve.start, req->unreserve.end,
+				   unreserved);
+			break;
+
+		case REQ_IS_RESERVED:
+			resp->response_type = RESP_IS_RESERVED;
+			reserved = ial->is_reserved(ial,
+						    req->is_reserved.start,
+						    req->is_reserved.end);
+			resp->is_reserved.reserved = reserved;
+			alloc_info("<is reserved> [tid: %ld] "
+				   "start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", reserved: %d\n",
+				   (long) req->tid, req->is_reserved.start,
+				   req->is_reserved.end, reserved);
+			break;
+
+		case REQ_RESERVE_IF_NOT_ALLOCATED:
+			resp->response_type = RESP_RESERVE_IF_NOT_ALLOCATED;
+			size = req->reserve.end - req->reserve.start;
+
+			allocated = ial->is_allocated(ial, req->reserve.handle,
+						      size, req->reserve.start);
+			if (allocated) {
+				resp->reserve_if_not_allocated.allocated = allocated;
+				alloc_info("<reserve if not allocated> [tid: %ld] "
+					   "handle: %u, size: 0x%lx, "
+					   "start: 0x%" PRIx64 ", end: 0x%" PRIx64
+					   ", allocated: %d, reserved: %d\n",
+					   (long) req->tid, req->reserve.handle,
+					   (long) size, req->reserve.start,
+					   req->reserve.end, allocated, false);
+				break;
+			}
+
+			reserved = ial->reserve(ial,
+						req->reserve.handle,
+						req->reserve.start,
+						req->reserve.end);
+			resp->reserve_if_not_allocated.reserved = reserved;
+			alloc_info("<reserve if not allocated> [tid: %ld] "
+				   "handle: %u, start: 0x%" PRIx64 ", end: 0x%" PRIx64
+				   ", allocated: %d, reserved: %d\n",
+				   (long) req->tid, req->reserve.handle,
+				   req->reserve.start, req->reserve.end,
+				   false, reserved);
+			break;
+
+		}
+
+		if (req->request_type > REQ_CLOSE)
+			pthread_mutex_unlock(&ial->mutex);
+
+		return 0;
+	}
+
+	ret = send_req_recv_resp(channel, req, resp);
+
+	if (ret < 0)
+		exit(0);
+
+	return ret;
+}
+
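+/*
+ * Send the signal to the whole process group, temporarily ignoring it
+ * ourselves so that only the children are affected.
+ */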
+static void kill_children(int sig)
+{
+	signal(sig, SIG_IGN);
+	kill(-getpgrp(), sig);
+	signal(sig, SIG_DFL);
+}
+
+static void *allocator_thread_loop(void *data)
+{
+	struct alloc_req req;
+	struct alloc_resp resp;
+	int ret;
+	(void) data;
+
+	alloc_info("Allocator pid: %ld, tid: %ld\n",
+		   (long) allocator_pid, (long) gettid());
+	alloc_info("Entering allocator loop\n");
+
+	while (1) {
+		ret = recv_req(channel, &req);
+
+		if (ret == -1) {
+			igt_warn("Error receiving request in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+			kill_children(SIGINT);
+			return (void *) -1;
+		}
+
+		/* Fake message to stop the thread */
+		if (req.request_type == REQ_STOP) {
+			alloc_info("<stop request>\n");
+			break;
+		}
+
+		ret = handle_request(&req, &resp);
+		if (ret) {
+			igt_warn("Error handling request in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+			break;
+		}
+
+		ret = send_resp(channel, req.tid, &resp);
+		if (ret) {
+			igt_warn("Error sending response in thread, ret = %d [%s]\n",
+				 ret, strerror(errno));
+
+			kill_children(SIGINT);
+			return (void *) -1;
+		}
+	}
+
+	return NULL;
+}
+
+/**
+ * intel_allocator_multiprocess_start:
+ *
+ * Function turns on intel_allocator multiprocess mode, which means all
+ * allocations from children processes are performed in a separate thread
+ * within the main igt process. Children are aware of the situation and use
+ * an interprocess communication channel to send/receive messages
+ * (open, close, alloc, free, ...) to/from the allocator thread.
+ *
+ * Must be used when you want to use the allocator in non single-process
+ * code. All allocations in threads spawned in the main igt process are
+ * handled by mutexing, not by sending/receiving messages to/from the
+ * allocator thread.
+ *
+ * Note. This destroys all previously created allocators and their content.
+ */
+void intel_allocator_multiprocess_start(void)
+{
+	alloc_info("allocator multiprocess start\n");
+
+	intel_allocator_init();
+
+	multiprocess = true;
+	channel->init(channel);
+
+	pthread_create(&allocator_thread, NULL,
+		       allocator_thread_loop, NULL);
+}
+
+/**
+ * intel_allocator_multiprocess_stop:
+ *
+ * Function turns off intel_allocator multiprocess mode, which means
+ * stopping the allocator thread and deinitializing its data.
+ */
+void intel_allocator_multiprocess_stop(void)
+{
+	alloc_info("allocator multiprocess stop\n");
+
+	if (multiprocess) {
+		send_alloc_stop(channel);
+		/* Deinit, this should stop all blocked syscalls, if any */
+		channel->deinit(channel);
+		pthread_join(allocator_thread, NULL);
+		/* But we cannot be sure the children are not stuck */
+		kill_children(SIGINT);
+		igt_waitchildren_timeout(5, "Stopping children");
+		multiprocess = false;
+	}
+}
+
+/**
+ * intel_allocator_open_full:
+ * @fd: i915 descriptor
+ * @ctx: context
+ * @start: address of the beginning
+ * @end: address of the end
+ * @allocator_type: one of INTEL_ALLOCATOR_* define
+ * @strategy: passed to the allocator to define the strategy (like order
+ * of allocation, see notes below).
+ *
+ * Function opens an allocator instance within the <@start, @end) vm range
+ * for the given @fd and @ctx and returns its handle. If an allocator for
+ * such a pair doesn't exist, it is created with refcount = 1.
+ * Parallel opens return the same handle, bumping its refcount.
+ *
+ * Returns: unique handle to the currently opened allocator.
+ *
+ * Notes:
+ * Strategy is generally used internally by the underlying allocator:
+ *
+ * For SIMPLE allocator:
+ * - ALLOC_STRATEGY_HIGH_TO_LOW means topmost addresses are allocated first,
+ * - ALLOC_STRATEGY_LOW_TO_HIGH opposite, allocation starts from lowest
+ *   addresses.
+ *
+ * For RANDOM allocator:
+ * - no strategy is currently implemented.
+ */
+uint64_t intel_allocator_open_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   uint8_t allocator_type,
+				   enum allocator_strategy strategy)
+{
+	struct alloc_req req = { .request_type = REQ_OPEN,
+				 .open.fd = fd,
+				 .open.ctx = ctx,
+				 .open.start = start,
+				 .open.end = end,
+				 .open.allocator_type = allocator_type,
+				 .open.allocator_strategy = strategy };
+	struct alloc_resp resp;
+
+	/* Get child_tid only once at open() */
+	if (child_tid == -1)
+		child_tid = gettid();
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.open.allocator_handle);
+	igt_assert(resp.response_type == RESP_OPEN);
+
+	return resp.open.allocator_handle;
+}
+
+/**
+ * intel_allocator_open:
+ * @fd: i915 descriptor
+ * @ctx: context
+ * @allocator_type: one of INTEL_ALLOCATOR_* define
+ *
+ * Function opens an allocator instance for the given @fd and @ctx and
+ * returns its handle. If an allocator for such a pair doesn't exist, it is
+ * created with refcount = 1. Parallel opens return the same handle, bumping
+ * its refcount.
+ *
+ * Returns: unique handle to the currently opened allocator.
+ *
+ * Notes: we pass ALLOC_STRATEGY_HIGH_TO_LOW as the default; playing with
+ * higher addresses makes it easier to find addressing issues (like passing
+ * non-canonical offsets, which won't be caught unless bit 47 is set).
+ */
+uint64_t intel_allocator_open(int fd, uint32_t ctx, uint8_t allocator_type)
+{
+	return intel_allocator_open_full(fd, ctx, 0, 0, allocator_type,
+					 ALLOC_STRATEGY_HIGH_TO_LOW);
+}
+
+/**
+ * intel_allocator_close:
+ * @allocator_handle: handle to the allocator that will be closed
+ *
+ * Function decreases an allocator refcount for the given @handle.
+ * When refcount reaches zero allocator is closed (destroyed) and all
+ * allocated / reserved areas are freed.
+ *
+ * Returns: true if closed allocator was empty, false otherwise.
+ */
+bool intel_allocator_close(uint64_t allocator_handle)
+{
+	struct alloc_req req = { .request_type = REQ_CLOSE,
+				 .allocator_handle = allocator_handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_CLOSE);
+
+	return resp.close.is_empty;
+}
+
+/**
+ * intel_allocator_get_address_range:
+ * @allocator_handle: handle to an allocator
+ * @startp: pointer to the variable where function writes starting offset
+ * @endp: pointer to the variable where function writes ending offset
+ *
+ * Function fills @startp, @endp with respectively, starting and ending offset
+ * of the allocator working virtual address space range.
+ *
+ * Note. Allocator working ranges can differ depending on the device or
+ * the allocator type, so before reserving a specific offset it is good
+ * practice to ensure the address lies within the accepted range.
+ */
+void intel_allocator_get_address_range(uint64_t allocator_handle,
+				       uint64_t *startp, uint64_t *endp)
+{
+	struct alloc_req req = { .request_type = REQ_ADDRESS_RANGE,
+				 .allocator_handle = allocator_handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_ADDRESS_RANGE);
+
+	if (startp)
+		*startp = resp.address_range.start;
+
+	if (endp)
+		*endp = resp.address_range.end;
+}
+
+/**
+ * intel_allocator_alloc:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @alignment: determines object alignment
+ *
+ * Function finds and returns the most suitable offset with given @alignment
+ * for an object with @size identified by the @handle.
+ *
+ * Returns: currently assigned address for a given object. If the object
+ * was already allocated, the same address is returned.
+ */
+uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t alignment)
+{
+	struct alloc_req req = { .request_type = REQ_ALLOC,
+				 .allocator_handle = allocator_handle,
+				 .alloc.handle = handle,
+				 .alloc.size = size,
+				 .alloc.alignment = alignment };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_ALLOC);
+
+	return resp.alloc.offset;
+}
+
+/**
+ * intel_allocator_free:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object to be freed
+ *
+ * Function frees the object identified by @handle in the allocator,
+ * which makes its offset allocable again.
+ *
+ * Note. Reserved objects can only be freed by an #intel_allocator_unreserve
+ * function.
+ *
+ * Returns: true if the object was successfully freed, otherwise false.
+ */
+bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle)
+{
+	struct alloc_req req = { .request_type = REQ_FREE,
+				 .allocator_handle = allocator_handle,
+				 .free.handle = handle };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_FREE);
+
+	return resp.free.freed;
+}
+
+/**
+ * intel_allocator_is_allocated:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function checks whether the object identified by the @handle and @size
+ * is allocated at the @offset.
+ *
+ * Returns: true if the object is currently allocated at the @offset,
+ * otherwise false.
+ */
+bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
+				  uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_IS_ALLOCATED,
+				 .allocator_handle = allocator_handle,
+				 .is_allocated.handle = handle,
+				 .is_allocated.size = size,
+				 .is_allocated.offset = offset };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_IS_ALLOCATED);
+
+	return resp.is_allocated.allocated;
+}
+
+/**
+ * intel_allocator_reserve:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function reserves space that starts at the @offset and has @size.
+ * Optionally we can pass @handle to mark that space is for a specific
+ * object, otherwise pass -1.
+ *
+ * Note. Reserved space is identified by offset and size, not a handle.
+ * So an object can have multiple reserved spaces with its handle.
+ *
+ * Returns: true if space is successfully reserved, otherwise false.
+ */
+bool intel_allocator_reserve(uint64_t allocator_handle, uint32_t handle,
+			     uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_RESERVE,
+				 .allocator_handle = allocator_handle,
+				 .reserve.handle = handle,
+				 .reserve.start = offset,
+				 .reserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_RESERVE);
+
+	return resp.reserve.reserved;
+}
+
+/**
+ * intel_allocator_unreserve:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function unreserves the space identified by @offset, @size and @handle.
+ *
+ * Note. @handle, @size and @offset have to match those used in the
+ * reservation, i.e. a check with the same offset but a smaller size will fail.
+ *
+ * Returns: true if the space is successfully unreserved, otherwise false.
+ */
+bool intel_allocator_unreserve(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_UNRESERVE,
+				 .allocator_handle = allocator_handle,
+				 .unreserve.handle = handle,
+				 .unreserve.start = offset,
+				 .unreserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_UNRESERVE);
+
+	return resp.unreserve.unreserved;
+}
+
+/**
+ * intel_allocator_is_reserved:
+ * @allocator_handle: handle to an allocator
+ * @size: size of an object
+ * @offset: address of an object
+ *
+ * Function checks whether space starting at the @offset and @size is
+ * currently under reservation.
+ *
+ * Note. @size and @offset have to match those used in the reservation,
+ * i.e. a check with the same offset but a smaller size will fail.
+ *
+ * Returns: true if the space is reserved, otherwise false.
+ */
+bool intel_allocator_is_reserved(uint64_t allocator_handle,
+				 uint64_t size, uint64_t offset)
+{
+	struct alloc_req req = { .request_type = REQ_IS_RESERVED,
+				 .allocator_handle = allocator_handle,
+				 .is_reserved.start = offset,
+				 .is_reserved.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_IS_RESERVED);
+
+	return resp.is_reserved.reserved;
+}
+
+/**
+ * intel_allocator_reserve_if_not_allocated:
+ * @allocator_handle: handle to an allocator
+ * @handle: handle to an object
+ * @size: size of an object
+ * @offset: address of an object
+ * @is_allocatedp: if not NULL function writes there object allocation status
+ * (true/false)
+ *
+ * Function checks whether the object identified by @handle and @size
+ * is allocated at @offset and writes the result to @is_allocatedp.
+ * If it is not, the space is reserved at the given @offset.
+ *
+ * Returns: true if the space for an object was reserved, otherwise false.
+ */
+bool intel_allocator_reserve_if_not_allocated(uint64_t allocator_handle,
+					      uint32_t handle,
+					      uint64_t size, uint64_t offset,
+					      bool *is_allocatedp)
+{
+	struct alloc_req req = { .request_type = REQ_RESERVE_IF_NOT_ALLOCATED,
+				 .allocator_handle = allocator_handle,
+				 .reserve.handle = handle,
+				 .reserve.start = offset,
+				 .reserve.end = offset + size };
+	struct alloc_resp resp;
+
+	igt_assert(handle_request(&req, &resp) == 0);
+	igt_assert(resp.response_type == RESP_RESERVE_IF_NOT_ALLOCATED);
+
+	if (is_allocatedp)
+		*is_allocatedp = resp.reserve_if_not_allocated.allocated;
+
+	return resp.reserve_if_not_allocated.reserved;
+}
+
+/**
+ * intel_allocator_print:
+ * @allocator_handle: handle to an allocator
+ *
+ * Function prints statistics and content of the allocator.
+ * Mainly for debugging purposes.
+ *
+ * Note. Printing is possible only in the main process.
+ **/
+void intel_allocator_print(uint64_t allocator_handle)
+{
+	bool same_process;
+
+	igt_assert(allocator_handle);
+
+	same_process = child_pid == -1;
+
+	if (!multiprocess || same_process) {
+		struct intel_allocator *ial = from_user_pointer(allocator_handle);
+		pthread_mutex_lock(&map_mutex);
+		ial->print(ial, true);
+		pthread_mutex_unlock(&map_mutex);
+	} else {
+		igt_warn("Print stats is in main process only\n");
+	}
+}
+
+static bool equal_allocators(const void *key1, const void *key2)
+{
+	const struct intel_allocator *a1 = key1, *a2 = key2;
+
+	alloc_debug("a1: <fd: %d, ctx: %u>, a2 <fd: %d, ctx: %u>\n",
+		   a1->fd, a1->ctx, a2->fd, a2->ctx);
+
+	return a1->fd == a2->fd && a1->ctx == a2->ctx;
+}
+
+/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
+#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001UL
+
+static inline uint64_t hash_allocators(const void *val, unsigned int bits)
+{
+	uint64_t hash = ((struct intel_allocator *) val)->fd;
+
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+	return hash >> (64 - bits);
+}
+
+static void __free_allocators(void)
+{
+	struct igt_map_entry *pos;
+	struct intel_allocator *ial;
+	int i;
+
+	if (allocators_map) {
+		igt_map_for_each(allocators_map, i, pos) {
+			ial = pos->value;
+			ial->destroy(ial);
+		}
+	}
+
+	igt_map_free(allocators_map);
+}
+
+/**
+ * intel_allocator_init:
+ *
+ * Function initializes the allocator infrastructure. A subsequent call
+ * overrides the current infrastructure and destroys any existing
+ * allocators. It is called in igt_constructor.
+ **/
+void intel_allocator_init(void)
+{
+	alloc_info("Prepare an allocator infrastructure\n");
+
+	allocator_pid = getpid();
+	alloc_info("Allocator pid: %ld\n", (long) allocator_pid);
+
+	if (allocators_map) {
+		__free_allocators();
+		free(allocators_map);
+	}
+
+	allocators_map = calloc(1, sizeof(*allocators_map));
+	igt_assert(allocators_map);
+
+	__igt_map_init(allocators_map, equal_allocators, hash_allocators, 8);
+
+	channel = intel_allocator_get_msgchannel(CHANNEL_SYSVIPC_MSGQUEUE);
+}
+
+igt_constructor {
+	intel_allocator_init();
+}
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
new file mode 100644
index 000000000..d26816c99
--- /dev/null
+++ b/lib/intel_allocator.h
@@ -0,0 +1,161 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __INTEL_ALLOCATOR_H__
+#define __INTEL_ALLOCATOR_H__
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <pthread.h>
+#include <stdatomic.h>
+
+/**
+ * SECTION:intel_allocator
+ * @short_description: igt implementation of allocator
+ * @title: Intel allocator
+ * @include: intel_allocator.h
+ *
+ * # Intel allocator
+ *
+ * Since the GPU driver has abandoned relocations for newer generations,
+ * we are facing the need to manage addresses in userspace. The Intel
+ * allocator supplies out-of-the-box mechanisms providing correct virtual
+ * addresses. Specifically, intel_allocator is a multi-threading
+ * infrastructure wrapping a proper single-threaded allocator, which can be
+ * one of the following:
+ *
+ *  * INTEL_ALLOCATOR_SIMPLE - ported from Mesa, a list-based, simple allocator
+ *  * INTEL_ALLOCATOR_RANDOM - a stateless allocator that provides random addresses
+ *  (sometime in the future the list can grow)
+ *
+ * Usage example:
+ *
+ * |[<!-- language="c" -->
+ * struct object {
+ * 	uint32_t handle;
+ * 	uint64_t offset;
+ * 	uint64_t size;
+ * };
+ *
+ * struct object obj1, obj2;
+ * uint64_t ahnd, startp, endp;
+ * int fd = -1;
+ *
+ * fd = drm_open_driver(DRIVER_INTEL);
+ * ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+ *
+ * obj1.handle = gem_create(fd, 4096);
+ * obj2.handle = gem_create(fd, 4096);
+ *
+ * // Reserve a hole for an object at a given address.
+ * // In this example, the first possible address.
+ * intel_allocator_get_address_range(ahnd, &startp, &endp);
+ * obj1.offset = startp;
+ * igt_assert(intel_allocator_reserve(ahnd, obj1.handle, 4096, startp));
+ *
+ * // Get the most suitable offset for the object. Preferred way.
+ * obj2.offset = intel_allocator_alloc(ahnd, obj2.handle, 4096, 1 << 13);
+ *
+ *  ...
+ *
+ * // Reserved addresses can only be freed by unreserve.
+ * intel_allocator_unreserve(ahnd, obj1.handle, 4096, obj1.offset);
+ * intel_allocator_free(ahnd, obj2.handle);
+ *
+ * gem_close(fd, obj1.handle);
+ * gem_close(fd, obj2.handle);
+ * ]|
+ *
+ */
+
+enum allocator_strategy {
+	ALLOC_STRATEGY_NONE,
+	ALLOC_STRATEGY_LOW_TO_HIGH,
+	ALLOC_STRATEGY_HIGH_TO_LOW
+};
+
+struct intel_allocator {
+	int fd;
+	uint32_t ctx;
+	uint8_t type;
+	enum allocator_strategy strategy;
+	_Atomic(int32_t) refcount;
+	pthread_mutex_t mutex;
+
+	/* allocator's private structure */
+	void *priv;
+
+	void (*get_address_range)(struct intel_allocator *ial,
+				  uint64_t *startp, uint64_t *endp);
+	uint64_t (*alloc)(struct intel_allocator *ial, uint32_t handle,
+			  uint64_t size, uint64_t alignment);
+	bool (*is_allocated) (struct intel_allocator *ial, uint32_t handle,
+			      uint64_t size, uint64_t offset);
+	bool (*reserve)(struct intel_allocator *ial,
+			uint32_t handle, uint64_t start, uint64_t end);
+	bool (*unreserve)(struct intel_allocator *ial,
+			  uint32_t handle, uint64_t start, uint64_t end);
+	bool (*is_reserved) (struct intel_allocator *ial,
+			     uint64_t start, uint64_t end);
+	bool (*free)(struct intel_allocator *ial, uint32_t handle);
+
+	void (*destroy)(struct intel_allocator *ial);
+
+	bool (*is_empty)(struct intel_allocator *ial);
+
+	void (*print)(struct intel_allocator *ial, bool full);
+};
+
+void intel_allocator_init(void);
+void intel_allocator_multiprocess_start(void);
+void intel_allocator_multiprocess_stop(void);
+
+uint64_t intel_allocator_open(int fd, uint32_t ctx, uint8_t allocator_type);
+uint64_t intel_allocator_open_full(int fd, uint32_t ctx,
+				   uint64_t start, uint64_t end,
+				   uint8_t allocator_type,
+				   enum allocator_strategy strategy);
+bool intel_allocator_close(uint64_t allocator_handle);
+void intel_allocator_get_address_range(uint64_t allocator_handle,
+				       uint64_t *startp, uint64_t *endp);
+uint64_t intel_allocator_alloc(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t alignment);
+bool intel_allocator_free(uint64_t allocator_handle, uint32_t handle);
+bool intel_allocator_is_allocated(uint64_t allocator_handle, uint32_t handle,
+				  uint64_t size, uint64_t offset);
+bool intel_allocator_reserve(uint64_t allocator_handle, uint32_t handle,
+			     uint64_t size, uint64_t offset);
+bool intel_allocator_unreserve(uint64_t allocator_handle, uint32_t handle,
+			       uint64_t size, uint64_t offset);
+bool intel_allocator_is_reserved(uint64_t allocator_handle,
+				 uint64_t size, uint64_t offset);
+bool intel_allocator_reserve_if_not_allocated(uint64_t allocator_handle,
+					      uint32_t handle,
+					      uint64_t size, uint64_t offset,
+					      bool *is_allocatedp);
+
+void intel_allocator_print(uint64_t allocator_handle);
+
+#define INTEL_ALLOCATOR_NONE   0
+#define INTEL_ALLOCATOR_RANDOM 1
+#define INTEL_ALLOCATOR_SIMPLE 2
+
+#define GEN8_GTT_ADDRESS_WIDTH 48
+
+static inline uint64_t sign_extend64(uint64_t x, int high)
+{
+	int shift = 63 - high;
+
+	return (int64_t)(x << shift) >> shift;
+}
+
+static inline uint64_t CANONICAL(uint64_t offset)
+{
+	return sign_extend64(offset, GEN8_GTT_ADDRESS_WIDTH - 1);
+}
+
+#define DECANONICAL(offset) (offset & ((1ull << GEN8_GTT_ADDRESS_WIDTH) - 1))
+
+#endif
diff --git a/lib/intel_allocator_msgchannel.c b/lib/intel_allocator_msgchannel.c
new file mode 100644
index 000000000..74a0f9286
--- /dev/null
+++ b/lib/intel_allocator_msgchannel.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include "igt.h"
+#include "intel_allocator_msgchannel.h"
+
+extern __thread pid_t child_tid;
+
+/* ----- SYSVIPC MSGQUEUE ----- */
+
+#define FTOK_IGT_ALLOCATOR_KEY "/tmp/igt.allocator.key"
+#define FTOK_IGT_ALLOCATOR_PROJID 2020
+
+#define ALLOCATOR_REQUEST 1
+
+struct msgqueue_data {
+	key_t key;
+	int queue;
+};
+
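+/*
+ * mtype routes messages within the single queue: requests are sent with
+ * mtype == ALLOCATOR_REQUEST, responses with mtype == client tid, so each
+ * client receives only the response addressed to it.
+ */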
+struct msgqueue_buf {
+       long mtype;
+       union {
+	       struct alloc_req request;
+	       struct alloc_resp response;
+       } data;
+};
+
+static void msgqueue_init(struct msg_channel *channel)
+{
+	struct msgqueue_data *msgdata;
+	struct msqid_ds qstat;
+	key_t key;
+	int fd, queue;
+
+	igt_debug("Init msgqueue\n");
+
+	/* Create ftok key only if not exists */
+	fd = open(FTOK_IGT_ALLOCATOR_KEY, O_CREAT | O_EXCL | O_WRONLY, 0600);
+	igt_assert(fd >= 0 || errno == EEXIST);
+	if (fd >= 0)
+		close(fd);
+
+	key = ftok(FTOK_IGT_ALLOCATOR_KEY, FTOK_IGT_ALLOCATOR_PROJID);
+	igt_assert(key != -1);
+	igt_debug("Queue key: %x\n", (int) key);
+
+	queue = msgget(key, 0);
+	if (queue != -1) {
+		igt_assert(msgctl(queue, IPC_STAT, &qstat) == 0);
+		igt_debug("old messages: %lu\n", qstat.msg_qnum);
+		igt_assert(msgctl(queue, IPC_RMID, NULL) == 0);
+	}
+
+	queue = msgget(key, IPC_CREAT);
+	igt_debug("msg queue: %d\n", queue);
+
+	msgdata = calloc(1, sizeof(*msgdata));
+	igt_assert(msgdata);
+	msgdata->key = key;
+	msgdata->queue = queue;
+	channel->priv = msgdata;
+}
+
+static void msgqueue_deinit(struct msg_channel *channel)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+
+	igt_debug("Deinit msgqueue\n");
+	msgctl(msgdata->queue, IPC_RMID, NULL);
+	free(channel->priv);
+}
+
+static int msgqueue_send_req(struct msg_channel *channel,
+			     struct alloc_req *request)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret;
+
+	buf.mtype = ALLOCATOR_REQUEST;
+	memcpy(&buf.data.request, request, sizeof(*request));
+
+retry:
+	ret = msgsnd(msgdata->queue, &buf, sizeof(buf) - sizeof(long), 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_recv_req(struct msg_channel *channel,
+			     struct alloc_req *request)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret, size = sizeof(buf) - sizeof(long);
+
+retry:
+	ret = msgrcv(msgdata->queue, &buf, size, ALLOCATOR_REQUEST, 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == size)
+		memcpy(request, &buf.data.request, sizeof(*request));
+	else if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_send_resp(struct msg_channel *channel,
+			      struct alloc_resp *response)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret;
+
+	buf.mtype = response->tid;
+	memcpy(&buf.data.response, response, sizeof(*response));
+
+retry:
+	ret = msgsnd(msgdata->queue, &buf, sizeof(buf) - sizeof(long), 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static int msgqueue_recv_resp(struct msg_channel *channel,
+			      struct alloc_resp *response)
+{
+	struct msgqueue_data *msgdata = channel->priv;
+	struct msgqueue_buf buf = {0};
+	int ret, size = sizeof(buf) - sizeof(long);
+
+retry:
+	ret = msgrcv(msgdata->queue, &buf, size, response->tid, 0);
+	if (ret == -1 && errno == EINTR)
+		goto retry;
+
+	if (ret == size)
+		memcpy(response, &buf.data.response, sizeof(*response));
+	else if (ret == -1)
+		igt_warn("Error: %s\n", strerror(errno));
+
+	return ret;
+}
+
+static struct msg_channel msgqueue_channel = {
+	.priv = NULL,
+	.init = msgqueue_init,
+	.deinit = msgqueue_deinit,
+	.send_req = msgqueue_send_req,
+	.recv_req = msgqueue_recv_req,
+	.send_resp = msgqueue_send_resp,
+	.recv_resp = msgqueue_recv_resp,
+};
+
+struct msg_channel *intel_allocator_get_msgchannel(enum msg_channel_type type)
+{
+	struct msg_channel *channel = NULL;
+
+	switch (type) {
+	case CHANNEL_SYSVIPC_MSGQUEUE:
+		channel = &msgqueue_channel;
+	}
+
+	igt_assert(channel);
+
+	return channel;
+}
diff --git a/lib/intel_allocator_msgchannel.h b/lib/intel_allocator_msgchannel.h
new file mode 100644
index 000000000..ad5a9e901
--- /dev/null
+++ b/lib/intel_allocator_msgchannel.h
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __INTEL_ALLOCATOR_MSGCHANNEL_H__
+#define __INTEL_ALLOCATOR_MSGCHANNEL_H__
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdint.h>
+
+enum reqtype {
+	REQ_STOP,
+	REQ_OPEN,
+	REQ_CLOSE,
+	REQ_ADDRESS_RANGE,
+	REQ_ALLOC,
+	REQ_FREE,
+	REQ_IS_ALLOCATED,
+	REQ_RESERVE,
+	REQ_UNRESERVE,
+	REQ_RESERVE_IF_NOT_ALLOCATED,
+	REQ_IS_RESERVED,
+};
+
+enum resptype {
+	RESP_OPEN,
+	RESP_CLOSE,
+	RESP_ADDRESS_RANGE,
+	RESP_ALLOC,
+	RESP_FREE,
+	RESP_IS_ALLOCATED,
+	RESP_RESERVE,
+	RESP_UNRESERVE,
+	RESP_IS_RESERVED,
+	RESP_RESERVE_IF_NOT_ALLOCATED,
+};
+
+struct alloc_req {
+	enum reqtype request_type;
+
+	/* Common */
+	pid_t tid;
+	uint64_t allocator_handle;
+
+	union {
+		struct {
+			int fd;
+			uint32_t ctx;
+			uint64_t start;
+			uint64_t end;
+			uint8_t allocator_type;
+			uint8_t allocator_strategy;
+		} open;
+
+		struct {
+			uint32_t handle;
+			uint64_t size;
+			uint64_t alignment;
+		} alloc;
+
+		struct {
+			uint32_t handle;
+		} free;
+
+		struct {
+			uint32_t handle;
+			uint64_t size;
+			uint64_t offset;
+		} is_allocated;
+
+		struct {
+			uint32_t handle;
+			uint64_t start;
+			uint64_t end;
+		} reserve, unreserve;
+
+		struct {
+			uint64_t start;
+			uint64_t end;
+		} is_reserved;
+
+	};
+};
+
+struct alloc_resp {
+	enum resptype response_type;
+	pid_t tid;
+
+	union {
+		struct {
+			uint64_t allocator_handle;
+		} open;
+
+		struct {
+			bool is_empty;
+		} close;
+
+		struct {
+			uint64_t start;
+			uint64_t end;
+			uint8_t direction;
+		} address_range;
+
+		struct {
+			uint64_t offset;
+		} alloc;
+
+		struct {
+			bool freed;
+		} free;
+
+		struct {
+			bool allocated;
+		} is_allocated;
+
+		struct {
+			bool reserved;
+		} reserve, is_reserved;
+
+		struct {
+			bool unreserved;
+		} unreserve;
+
+		struct {
+			bool allocated;
+			bool reserved;
+		} reserve_if_not_allocated;
+	};
+};
+
+struct msg_channel {
+	void *priv;
+	void (*init)(struct msg_channel *channel);
+	void (*deinit)(struct msg_channel *channel);
+	int (*send_req)(struct msg_channel *channel, struct alloc_req *request);
+	int (*recv_req)(struct msg_channel *channel, struct alloc_req *request);
+	int (*send_resp)(struct msg_channel *channel, struct alloc_resp *response);
+	int (*recv_resp)(struct msg_channel *channel, struct alloc_resp *response);
+};
+
+enum msg_channel_type {
+	CHANNEL_SYSVIPC_MSGQUEUE
+};
+
+struct msg_channel *intel_allocator_get_msgchannel(enum msg_channel_type type);
+
+#endif
diff --git a/lib/meson.build b/lib/meson.build
index c2e176447..a3b4fbd79 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -34,6 +34,10 @@ lib_sources = [
 	'igt_vgem.c',
 	'igt_x86.c',
 	'instdone.c',
+	'intel_allocator.c',
+	'intel_allocator_msgchannel.c',
+	'intel_allocator_random.c',
+	'intel_allocator_simple.c',
 	'intel_batchbuffer.c',
 	'intel_bufops.c',
 	'intel_chipset.c',
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (7 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 08/35] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
                   ` (27 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

Avoid a race when stop is sent to the allocator thread. We wait around
100 ms to give the thread a chance to stop smoothly instead of removing
the queue and forcing all blocked message syscalls to exit.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 4d053c381..113c4aa1a 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -55,6 +55,7 @@ static struct igt_map *allocators_map;
 static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
 static bool multiprocess;
 static pthread_t allocator_thread;
+static bool allocator_thread_running;
 
 static bool warn_if_not_empty;
 
@@ -475,6 +476,8 @@ static void *allocator_thread_loop(void *data)
 		   (long) allocator_pid, (long) gettid());
 	alloc_info("Entering allocator loop\n");
 
+	WRITE_ONCE(allocator_thread_running, true);
+
 	while (1) {
 		ret = recv_req(channel, &req);
 
@@ -508,6 +511,8 @@ static void *allocator_thread_loop(void *data)
 		}
 	}
 
+	WRITE_ONCE(allocator_thread_running, false);
+
 	return NULL;
 }
 
@@ -545,15 +550,30 @@ void intel_allocator_multiprocess_start(void)
  * Function turns off intel_allocator multiprocess mode, which means
  * stopping the allocator thread and deinitializing its data.
  */
+#define STOP_TIMEOUT_MS 100
 void intel_allocator_multiprocess_stop(void)
 {
+	int time_left = STOP_TIMEOUT_MS;
+
 	alloc_info("allocator multiprocess stop\n");
 
 	if (multiprocess) {
 		send_alloc_stop(channel);
-		/* Deinit, this should stop all blocked syscalls, if any */
-		channel->deinit(channel);
-		pthread_join(allocator_thread, NULL);
+
+		/* We prefer to join the thread after it has stopped */
+		while (time_left-- > 0 && READ_ONCE(allocator_thread_running))
+			usleep(1000); /* coarse 1 ms granularity */
+
+		/* The thread got stuck somewhere */
+		if (READ_ONCE(allocator_thread_running)) {
+			/* Deinit, this should stop all blocked syscalls, if any */
+			channel->deinit(channel);
+			pthread_join(allocator_thread, NULL);
+		} else {
+			pthread_join(allocator_thread, NULL);
+			channel->deinit(channel);
+		}
+
 		/* But we cannot be sure the children are not stuck */
 		kill_children(SIGINT);
 		igt_waitchildren_timeout(5, "Stopping children");
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (8 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 11/35] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
                   ` (26 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

When playing with multiprocess mode in the allocator we currently use
sysvipc message queues in blocking mode (request/response). We can
therefore calculate the maximum queue depth needed for the requested
number of children. The change alters the kernel queue depth to cover
4k users (1 is the main thread and 4095 are children).

We're still prone to an unlimited wait in the allocator thread (when
more than 4095 children successfully send messages) but we're going
to address this later.
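
The sizing follows from the worst case of one in-flight message per
client (restating the math applied in the diff below):

	/* 1 main thread + 4095 children, at most one queued message each */
	qstat.msg_qbytes = MAXQLEN * sizeof(struct msgqueue_buf);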

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator_msgchannel.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/intel_allocator_msgchannel.c b/lib/intel_allocator_msgchannel.c
index 74a0f9286..ab116b44f 100644
--- a/lib/intel_allocator_msgchannel.c
+++ b/lib/intel_allocator_msgchannel.c
@@ -17,6 +17,7 @@ extern __thread pid_t child_tid;
 
 #define FTOK_IGT_ALLOCATOR_KEY "/tmp/igt.allocator.key"
 #define FTOK_IGT_ALLOCATOR_PROJID 2020
+#define MAXQLEN 4096
 
 #define ALLOCATOR_REQUEST 1
 
@@ -62,6 +63,13 @@ static void msgqueue_init(struct msg_channel *channel)
 	queue = msgget(key, IPC_CREAT);
 	igt_debug("msg queue: %d\n", queue);
 
+	igt_assert(msgctl(queue, IPC_STAT, &qstat) == 0);
+	igt_debug("msg size in bytes: %lu\n", qstat.msg_qbytes);
+	qstat.msg_qbytes = MAXQLEN * sizeof(struct msgqueue_buf);
+	igt_debug("resizing queue to support %d requests\n", MAXQLEN);
+	igt_assert_f(msgctl(queue, IPC_SET, &qstat) == 0,
+		     "Couldn't change queue size to %lu\n", qstat.msg_qbytes);
+
 	msgdata = calloc(1, sizeof(*msgdata));
 	igt_assert(msgdata);
 	msgdata->key = key;
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 11/35] lib/intel_allocator: Separate allocator multiprocess start
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (9 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
                   ` (25 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

Validating the allocator code (for leaks and memory overwrites) can be
done with address sanitizer. When the allocator is not working in
multiprocess mode this is easy; problems start when fork() is in the
game. In this situation we need to separate preparing the allocator
from starting the allocator thread.
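
A minimal sketch of the resulting call order (modelled on the
fork_simple_stress() example referenced in the comment below; child
bodies and synchronization are elided):

	__intel_allocator_multiprocess_prepare();

	/* children forked here don't inherit the allocator thread's
	 * allocations in their ASan shadow maps */
	igt_fork(child, 4) {
		/* open the allocator, exercise alloc()/free() */
	}

	__intel_allocator_multiprocess_start();
	igt_waitchildren();
	intel_allocator_multiprocess_stop();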

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reported-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_allocator.c | 41 ++++++++++++++++++++++++++++++++++-------
 lib/intel_allocator.h |  2 ++
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/lib/intel_allocator.c b/lib/intel_allocator.c
index 113c4aa1a..1295f5183 100644
--- a/lib/intel_allocator.c
+++ b/lib/intel_allocator.c
@@ -516,6 +516,38 @@ static void *allocator_thread_loop(void *data)
 	return NULL;
 }
 
+
+/**
+ * __intel_allocator_multiprocess_prepare:
+ *
+ * Prepares allocator infrastructure to work in multiprocess mode.
+ *
+ * A note on why the prepare and start steps are separated: without
+ * address sanitizer a single intel_allocator_multiprocess_start()
+ * call is enough. With address sanitizer and forking we can hit a
+ * situation where one forked child calls allocator alloc() (so the
+ * parent has some poisoned memory in its shadow map) and then a
+ * second fork occurs. The second child inherits the poisoned shadow
+ * map from the parent (where the allocator thread resides), so
+ * checking the shadow map in that child reports a false memory leak.
+ *
+ * For how to separate the initialization steps take a look at the
+ * fork_simple_stress() function in api_intel_allocator.c.
+ */
+void __intel_allocator_multiprocess_prepare(void)
+{
+	intel_allocator_init();
+
+	multiprocess = true;
+	channel->init(channel);
+}
+
+void __intel_allocator_multiprocess_start(void)
+{
+	pthread_create(&allocator_thread, NULL,
+		       allocator_thread_loop, NULL);
+}
+
 /**
  * intel_allocator_multiprocess_start:
  *
@@ -535,13 +567,8 @@ void intel_allocator_multiprocess_start(void)
 {
 	alloc_info("allocator multiprocess start\n");
 
-	intel_allocator_init();
-
-	multiprocess = true;
-	channel->init(channel);
-
-	pthread_create(&allocator_thread, NULL,
-		       allocator_thread_loop, NULL);
+	__intel_allocator_multiprocess_prepare();
+	__intel_allocator_multiprocess_start();
 }
 
 /**
diff --git a/lib/intel_allocator.h b/lib/intel_allocator.h
index d26816c99..1298ce4a0 100644
--- a/lib/intel_allocator.h
+++ b/lib/intel_allocator.h
@@ -109,6 +109,8 @@ struct intel_allocator {
 };
 
 void intel_allocator_init(void);
+void __intel_allocator_multiprocess_prepare(void);
+void __intel_allocator_multiprocess_start(void);
 void intel_allocator_multiprocess_start(void);
 void intel_allocator_multiprocess_stop(void);
 
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 12/35] lib/intel_bufops: Change size from 32->64 bit
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (10 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 11/35] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
                   ` (24 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

1. Change the buffer size from 32 -> 64 bit to be consistent
   with the drm code.
2. Remember the buffer creation size to avoid recalculating it.
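
To illustrate why the wider type matters, consider a buffer whose
stride * height product no longer fits in 32 bits (values picked to
overflow, not taken from the patch):

	uint32_t stride = 64 << 10;                   /* 64 KiB */
	uint32_t height = 128 << 10;                  /* 128k rows */
	uint32_t size32 = stride * height;            /* wraps to 0 */
	uint64_t size64 = (uint64_t)stride * height;  /* 8 GiB, correct */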

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_bufops.c        | 17 ++++++++---------
 lib/intel_bufops.h        |  7 +++++--
 tests/i915/api_intel_bb.c |  6 +++---
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index a50035e40..eb5ac4dad 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -711,7 +711,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     uint32_t req_tiling, uint32_t compression)
 {
 	uint32_t tiling = req_tiling;
-	uint32_t size;
+	uint64_t size;
 	uint32_t devid;
 	int tile_width;
 	int align_h = 1;
@@ -776,6 +776,9 @@ static void __intel_buf_init(struct buf_ops *bops,
 		size = buf->surface[0].stride * ALIGN(height, align_h);
 	}
 
+	/* Store real bo size to avoid mistakes in calculating it again */
+	buf->size = size;
+
 	if (handle)
 		buf->handle = handle;
 	else
@@ -1001,8 +1004,8 @@ void intel_buf_flush_and_unmap(struct intel_buf *buf)
 void intel_buf_print(const struct intel_buf *buf)
 {
 	igt_info("[name: %s]\n", buf->name);
-	igt_info("[%u]: w: %u, h: %u, stride: %u, size: %u, bo-size: %u, "
-		 "bpp: %u, tiling: %u, compress: %u\n",
+	igt_info("[%u]: w: %u, h: %u, stride: %u, size: %" PRIu64
+		 ", bo-size: %" PRIu64 ", bpp: %u, tiling: %u, compress: %u\n",
 		 buf->handle, intel_buf_width(buf), intel_buf_height(buf),
 		 buf->surface[0].stride, buf->surface[0].size,
 		 intel_buf_bo_size(buf), buf->bpp,
@@ -1208,13 +1211,9 @@ static void idempotency_selftest(struct buf_ops *bops, uint32_t tiling)
 	buf_ops_set_software_tiling(bops, tiling, false);
 }
 
-uint32_t intel_buf_bo_size(const struct intel_buf *buf)
+uint64_t intel_buf_bo_size(const struct intel_buf *buf)
 {
-	int offset = CCS_OFFSET(buf) ?: buf->surface[0].size;
-	int ccs_size =
-		buf->compression ? CCS_SIZE(buf->bops->intel_gen, buf) : 0;
-
-	return offset + ccs_size;
+	return buf->size;
 }
 
 static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency)
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 8debe7f22..5619fc6fa 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -9,10 +9,13 @@ struct buf_ops;
 
 #define INTEL_BUF_INVALID_ADDRESS (-1ull)
 #define INTEL_BUF_NAME_MAXSIZE 32
+#define INVALID_ADDR(x) ((x) == INTEL_BUF_INVALID_ADDRESS)
+
 struct intel_buf {
 	struct buf_ops *bops;
 	bool is_owner;
 	uint32_t handle;
+	uint64_t size;
 	uint32_t tiling;
 	uint32_t bpp;
 	uint32_t compression;
@@ -23,7 +26,7 @@ struct intel_buf {
 	struct {
 		uint32_t offset;
 		uint32_t stride;
-		uint32_t size;
+		uint64_t size;
 	} surface[2];
 	struct {
 		uint32_t offset;
@@ -88,7 +91,7 @@ intel_buf_ccs_height(int gen, const struct intel_buf *buf)
 	return DIV_ROUND_UP(intel_buf_height(buf), 512) * 32;
 }
 
-uint32_t intel_buf_bo_size(const struct intel_buf *buf);
+uint64_t intel_buf_bo_size(const struct intel_buf *buf);
 
 struct buf_ops *buf_ops_create(int fd);
 struct buf_ops *buf_ops_create_with_selftest(int fd);
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index cc1d1be6e..14bfeadb3 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -123,9 +123,9 @@ static void print_buf(struct intel_buf *buf, const char *name)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
-	igt_debug("[%s] Buf handle: %d, size: %d, v: 0x%02x, presumed_addr: %p\n",
+	igt_debug("[%s] Buf handle: %d, size: %" PRIu64 ", v: 0x%02x, presumed_addr: %p\n",
 		  name, buf->handle, buf->surface[0].size, ptr[0],
-			from_user_pointer(buf->addr.offset));
+		  from_user_pointer(buf->addr.offset));
 	munmap(ptr, buf->surface[0].size);
 }
 
@@ -677,7 +677,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
 	if (ret != Z_OK) {
 		igt_warn("error compressing, ret: %d\n", ret);
 	} else {
-		igt_info("compressed %u -> %lu\n",
+		igt_info("compressed %" PRIu64 " -> %lu\n",
 			 buf->surface[0].size, outsize);
 
 		igt_info("--- %s ---\n", name);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 13/35] lib/intel_bufops: Add init with handle and size function
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (11 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
                   ` (23 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

In some cases (fb with compression) the fb size is bigger than the
calculated intel_buf size, which leads to an execbuf failure when the
allocator is used along with the EXEC_OBJECT_PINNED flag set for all
objects.

We need to create the intel_buf with a size equal to the fb, so a new
function in which we pass both handle and size is required.
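
A usage sketch of the new constructor (the fb field names follow
igt_fb, and the tiling/compression values are only illustrative):

	struct intel_buf *buf;

	/* wrap the fb bo, forcing the buf size to match the fb allocation */
	buf = intel_buf_create_using_handle_and_size(bops, fb->gem_handle,
						     fb->width, fb->height,
						     32, 0, I915_TILING_Y,
						     I915_COMPRESSION_RENDER,
						     fb->size);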

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_bufops.c | 33 ++++++++++++++++++++++++++++-----
 lib/intel_bufops.h |  7 +++++++
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index eb5ac4dad..d8eb64e3a 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -708,7 +708,8 @@ static void __intel_buf_init(struct buf_ops *bops,
 			     uint32_t handle,
 			     struct intel_buf *buf,
 			     int width, int height, int bpp, int alignment,
-			     uint32_t req_tiling, uint32_t compression)
+			     uint32_t req_tiling, uint32_t compression,
+			     uint64_t bo_size)
 {
 	uint32_t tiling = req_tiling;
 	uint64_t size;
@@ -758,7 +759,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 		buf->ccs[0].offset = buf->surface[0].stride * ALIGN(height, 32);
 		buf->ccs[0].stride = aux_width;
 
-		size = buf->ccs[0].offset + aux_width * aux_height;
+		size = max(bo_size, buf->ccs[0].offset + aux_width * aux_height);
 	} else {
 		if (tiling) {
 			devid =  intel_get_drm_devid(bops->fd);
@@ -773,7 +774,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 		buf->tiling = tiling;
 		buf->bpp = bpp;
 
-		size = buf->surface[0].stride * ALIGN(height, align_h);
+		size = max(bo_size, buf->surface[0].stride * ALIGN(height, align_h));
 	}
 
 	/* Store real bo size to avoid mistakes in calculating it again */
@@ -809,7 +810,7 @@ void intel_buf_init(struct buf_ops *bops,
 		    uint32_t tiling, uint32_t compression)
 {
 	__intel_buf_init(bops, 0, buf, width, height, bpp, alignment,
-			 tiling, compression);
+			 tiling, compression, 0);
 
 	intel_buf_set_ownership(buf, true);
 }
@@ -858,7 +859,7 @@ void intel_buf_init_using_handle(struct buf_ops *bops,
 				 uint32_t req_tiling, uint32_t compression)
 {
 	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
-			 req_tiling, compression);
+			 req_tiling, compression, 0);
 }
 
 /**
@@ -927,6 +928,28 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
 	return buf;
 }
 
+struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
+							 uint32_t handle,
+							 int width, int height,
+							 int bpp, int alignment,
+							 uint32_t req_tiling,
+							 uint32_t compression,
+							 uint64_t size)
+{
+	struct intel_buf *buf;
+
+	igt_assert(bops);
+
+	buf = calloc(1, sizeof(*buf));
+	igt_assert(buf);
+
+	__intel_buf_init(bops, handle, buf, width, height, bpp, alignment,
+			 req_tiling, compression, size);
+
+	return buf;
+}
+
+
 /**
  * intel_buf_destroy
  * @buf: intel_buf
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 5619fc6fa..54480bff6 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -139,6 +139,13 @@ struct intel_buf *intel_buf_create_using_handle(struct buf_ops *bops,
 						uint32_t req_tiling,
 						uint32_t compression);
 
+struct intel_buf *intel_buf_create_using_handle_and_size(struct buf_ops *bops,
+							 uint32_t handle,
+							 int width, int height,
+							 int bpp, int alignment,
+							 uint32_t req_tiling,
+							 uint32_t compression,
+							 uint64_t size);
 void intel_buf_destroy(struct intel_buf *buf);
 
 void *intel_buf_cpu_map(struct intel_buf *buf, bool write);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (12 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
                   ` (22 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

Complex changes inside intel-bb related to introducing the IGT allocator.
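
For orientation, the allocator-backed flow added here looks roughly
as follows (a sketch only - buffer setup and batch contents are
elided):

	struct intel_bb *ibb;

	ibb = intel_bb_create_full(i915, 0, 4096, INTEL_ALLOCATOR_SIMPLE);

	/* offset comes from the allocator and the object gets pinned */
	intel_bb_add_intel_buf(ibb, buf, false);

	/* ... emit commands and exec ... */

	intel_bb_remove_intel_buf(ibb, buf);
	intel_bb_destroy(ibb);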

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_aux_pgtable.c      |  26 +-
 lib/intel_batchbuffer.c      | 502 ++++++++++++++++++++++++-----------
 lib/intel_batchbuffer.h      |  23 +-
 tests/i915/api_intel_bb.c    |  16 +-
 tests/i915/gem_mmap_offset.c |   4 +-
 5 files changed, 394 insertions(+), 177 deletions(-)

diff --git a/lib/intel_aux_pgtable.c b/lib/intel_aux_pgtable.c
index c05d4511e..f29a30916 100644
--- a/lib/intel_aux_pgtable.c
+++ b/lib/intel_aux_pgtable.c
@@ -94,9 +94,9 @@ pgt_table_count(int address_bits, struct intel_buf **bufs, int buf_count)
 		/* We require bufs to be sorted. */
 		igt_assert(i == 0 ||
 			   buf->addr.offset >= bufs[i - 1]->addr.offset +
-					       intel_buf_bo_size(bufs[i - 1]));
-
+				intel_buf_bo_size(bufs[i - 1]));
 		start = ALIGN_DOWN(buf->addr.offset, 1UL << address_bits);
+
 		/* Avoid double counting for overlapping aligned bufs. */
 		start = max(start, end);
 
@@ -346,10 +346,8 @@ pgt_populate_entries_for_buf(struct pgtable *pgt,
 			     uint64_t top_table,
 			     int surface_idx)
 {
-	uint64_t surface_addr = buf->addr.offset +
-				buf->surface[surface_idx].offset;
-	uint64_t surface_end = surface_addr +
-			       buf->surface[surface_idx].size;
+	uint64_t surface_addr = buf->addr.offset + buf->surface[surface_idx].offset;
+	uint64_t surface_end = surface_addr + buf->surface[surface_idx].size;
 	uint64_t aux_addr = buf->addr.offset + buf->ccs[surface_idx].offset;
 	uint64_t l1_flags = pgt_get_l1_flags(buf, surface_idx);
 	uint64_t lx_flags = pgt_get_lx_flags();
@@ -441,7 +439,6 @@ struct intel_buf *
 intel_aux_pgtable_create(struct intel_bb *ibb,
 			 struct intel_buf **bufs, int buf_count)
 {
-	struct drm_i915_gem_exec_object2 *obj;
 	static const struct pgtable_level_desc level_desc[] = {
 		{
 			.idx_shift = 16,
@@ -465,7 +462,6 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 	struct pgtable *pgt;
 	struct buf_ops *bops;
 	struct intel_buf *buf;
-	uint64_t prev_alignment;
 
 	igt_assert(buf_count);
 	bops = bufs[0]->bops;
@@ -476,11 +472,8 @@ intel_aux_pgtable_create(struct intel_bb *ibb,
 				    I915_COMPRESSION_NONE);
 
 	/* We need to use pgt->max_align for aux table */
-	prev_alignment = intel_bb_set_default_object_alignment(ibb,
-							       pgt->max_align);
-	obj = intel_bb_add_intel_buf(ibb, pgt->buf, false);
-	intel_bb_set_default_object_alignment(ibb, prev_alignment);
-	obj->alignment = pgt->max_align;
+	intel_bb_add_intel_buf_with_alignment(ibb, pgt->buf,
+					      pgt->max_align, false);
 
 	pgt_map(ibb->i915, pgt);
 	pgt_populate_entries(pgt, bufs, buf_count);
@@ -498,9 +491,10 @@ aux_pgtable_reserve_buf_slot(struct intel_buf **bufs, int buf_count,
 {
 	int i;
 
-	for (i = 0; i < buf_count; i++)
+	for (i = 0; i < buf_count; i++) {
 		if (bufs[i]->addr.offset > new_buf->addr.offset)
 			break;
+	}
 
 	memmove(&bufs[i + 1], &bufs[i], sizeof(bufs[0]) * (buf_count - i));
 
@@ -606,8 +600,10 @@ gen12_aux_pgtable_cleanup(struct intel_bb *ibb, struct aux_pgtable_info *info)
 		igt_assert_eq_u64(addr, info->buf_pin_offsets[i]);
 	}
 
-	if (info->pgtable_buf)
+	if (info->pgtable_buf) {
+		intel_bb_remove_intel_buf(ibb, info->pgtable_buf);
 		intel_buf_destroy(info->pgtable_buf);
+	}
 }
 
 uint32_t
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 8118dc945..6fb350924 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -50,7 +50,6 @@
 #include "media_spin.h"
 #include "gpgpu_fill.h"
 #include "igt_aux.h"
-#include "igt_rand.h"
 #include "i830_reg.h"
 #include "huc_copy.h"
 #include <glib.h>
@@ -1211,23 +1210,9 @@ static void __reallocate_objects(struct intel_bb *ibb)
 	}
 }
 
-/*
- * gen8_canonical_addr
- * Used to convert any address into canonical form, i.e. [63:48] == [47].
- * Based on kernel's sign_extend64 implementation.
- * @address - a virtual address
- */
-#define GEN8_HIGH_ADDRESS_BIT 47
-static uint64_t gen8_canonical_addr(uint64_t address)
-{
-	int shift = 63 - GEN8_HIGH_ADDRESS_BIT;
-
-	return (int64_t)(address << shift) >> shift;
-}
-
 static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 					     uint32_t handle,
-					     uint32_t size,
+					     uint64_t size,
 					     uint32_t alignment)
 {
 	uint64_t offset;
@@ -1235,33 +1220,77 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
 	if (ibb->enforce_relocs)
 		return 0;
 
-	/* randomize the address, we try to avoid relocations */
-	offset = hars_petruska_f54_1_random64(&ibb->prng);
-	offset += 256 << 10; /* Keep the low 256k clear, for negative deltas */
-	offset &= ibb->gtt_size - 1;
-	offset &= ~(ibb->alignment - 1);
-	offset = gen8_canonical_addr(offset);
+	offset = intel_allocator_alloc(ibb->allocator_handle,
+				       handle, size, alignment);
 
 	return offset;
 }
 
 /**
- * intel_bb_create:
+ * __intel_bb_create:
  * @i915: drm fd
+ * @ctx: context
  * @size: size of the batchbuffer
+ * @do_relocs: use relocations or allocator
+ * @allocator_type: allocator type, must be INTEL_ALLOCATOR_NONE for relocations
+ *
+ * intel-bb assumes it will work in one of two modes - with relocations or
+ * with using allocator (currently RANDOM and SIMPLE are implemented).
+ * Some description is required to describe how they maintain the addresses.
+ *
+ * Before entering into each scenarios generic rule is intel-bb keeps objects
+ * and their offsets in the internal cache and reuses in subsequent execs.
+ *
+ * 1. intel-bb with relocations
+ *
+ * Creating a new intel-bb implicitly adds its handle to the cache with
+ * address 0. Objects added later also start with address 0 on the first
+ * run. After execbuf the cache is updated with the new addresses. As
+ * intel-bb works in reloc mode the addresses are only a suggestion to
+ * the driver and we cannot be sure they won't change at the next exec.
+ *
+ * 2. intel-bb with an allocator
+ *
+ * This mode is valid only for ppgtt. Addresses are acquired from the
+ * allocator and softpinned. The intel-bb cache must then be coherent
+ * with the allocator (simple is coherent, random is not since we don't
+ * keep its state). When we reset intel-bb and purge the cache we have
+ * to reacquire addresses from the allocator (which should return the
+ * same address - true for the simple allocator, false for random).
+ *
+ * If we reset without purging the cache, the addresses from the
+ * intel-bb cache are used when constructing the execbuf objects.
  *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
  */
 static struct intel_bb *
-__intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
+__intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
+		  uint8_t allocator_type)
 {
+	struct drm_i915_gem_exec_object2 *object;
 	struct intel_bb *ibb = calloc(1, sizeof(*ibb));
-	uint64_t gtt_size;
 
 	igt_assert(ibb);
 
+	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
+
+	/*
+	 * If we don't have full ppgtt the driver can change our addresses,
+	 * so the allocator is useless in this case. Just enforce relocations
+	 * for such gens and don't use the allocator at all.
+	 */
+	if (!ibb->uses_full_ppgtt) {
+		do_relocs = true;
+		allocator_type = INTEL_ALLOCATOR_NONE;
+	}
+
+	if (!do_relocs)
+		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+	else
+		igt_assert(allocator_type == INTEL_ALLOCATOR_NONE);
+	ibb->allocator_type = allocator_type;
 	ibb->i915 = i915;
 	ibb->devid = intel_get_drm_devid(i915);
 	ibb->gen = intel_gen(ibb->devid);
@@ -1273,41 +1302,43 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
 	ibb->batch = calloc(1, size);
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
-	ibb->prng = (uint32_t) to_user_pointer(ibb);
 	ibb->fence = -1;
 
-	gtt_size = gem_aperture_size(i915);
-	if (!gem_uses_full_ppgtt(i915))
-		gtt_size /= 2;
-	if ((gtt_size - 1) >> 32) {
+	ibb->gtt_size = gem_aperture_size(i915);
+	if ((ibb->gtt_size - 1) >> 32)
 		ibb->supports_48b_address = true;
 
-		/*
-		 * Until we develop IGT address allocator we workaround
-		 * playing with canonical addresses with 47-bit set to 1
-		 * just by limiting gtt size to 46-bit when gtt is 47 or 48
-		 * bit size. Current interface doesn't pass bo size, so
-		 * limiting to 46 bit make us sure we won't enter to
-		 * addresses with 47-bit set (we use 32-bit size now so
-		 * still we fit 47-bit address space).
-		 */
-		if (gtt_size & (3ull << 47))
-			gtt_size = (1ull << 46);
-	}
-	ibb->gtt_size = gtt_size;
-
-	ibb->batch_offset = __intel_bb_get_offset(ibb,
-						  ibb->handle,
-						  ibb->size,
-						  ibb->alignment);
-	intel_bb_add_object(ibb, ibb->handle, ibb->size,
-			    ibb->batch_offset, false);
+	object = intel_bb_add_object(ibb, ibb->handle, ibb->size,
+				     INTEL_BUF_INVALID_ADDRESS, ibb->alignment,
+				     false);
+	ibb->batch_offset = object->offset;
 
 	ibb->refcount = 1;
 
 	return ibb;
 }
 
+/**
+ * intel_bb_create_full:
+ * @i915: drm fd
+ * @ctx: context
+ * @size: size of the batchbuffer
+ * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ *
+ * Creates bb with context passed in @ctx, size in @size and allocator type
+ * in @allocator_type. Relocations are set to false because IGT allocator
+ * is not used in that case.
+ *
+ * Returns:
+ *
+ * Pointer to the intel_bb, asserts on failure.
+ */
+struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+				      uint8_t allocator_type)
+{
+	return __intel_bb_create(i915, ctx, size, false, allocator_type);
+}
+
 /**
  * intel_bb_create:
  * @i915: drm fd
@@ -1321,7 +1352,7 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs)
  */
 struct intel_bb *intel_bb_create(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, false);
+	return __intel_bb_create(i915, 0, size, false, INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1339,7 +1370,7 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, false);
+	return __intel_bb_create(i915, ctx, size, false, INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1356,7 +1387,7 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
  */
 struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, true);
+	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
 }
 
 /**
@@ -1375,7 +1406,7 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, true);
+	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
 }
 
 static void __intel_bb_destroy_relocations(struct intel_bb *ibb)
@@ -1429,6 +1460,10 @@ void intel_bb_destroy(struct intel_bb *ibb)
 	__intel_bb_destroy_objects(ibb);
 	__intel_bb_destroy_cache(ibb);
 
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE) {
+		intel_allocator_free(ibb->allocator_handle, ibb->handle);
+		intel_allocator_close(ibb->allocator_handle);
+	}
 	gem_close(ibb->i915, ibb->handle);
 
 	if (ibb->fence >= 0)
@@ -1445,6 +1480,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
  *
  * Recreate batch bo when there's no additional reference.
 */
+
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 {
 	uint32_t i;
@@ -1468,28 +1504,32 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
 
-	if (purge_objects_cache) {
+	if (purge_objects_cache)
 		__intel_bb_destroy_cache(ibb);
+
+	/*
+	 * When we use an allocator we're in no-reloc mode, so we have to
+	 * free and reacquire the offset (ibb->handle can change in a
+	 * multiprocess environment). We also have to remove and re-add it
+	 * to the objects array and the cache tree.
+	 */
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE && !purge_objects_cache)
+		intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset,
+				       ibb->size);
+
+	gem_close(ibb->i915, ibb->handle);
+	ibb->handle = gem_create(ibb->i915, ibb->size);
+
+	/* Keep address for bb in reloc mode and RANDOM allocator */
+	if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
 		ibb->batch_offset = __intel_bb_get_offset(ibb,
 							  ibb->handle,
 							  ibb->size,
 							  ibb->alignment);
-	} else {
-		struct drm_i915_gem_exec_object2 *object;
-
-		object = intel_bb_find_object(ibb, ibb->handle);
-		ibb->batch_offset = object ? object->offset :
-					     __intel_bb_get_offset(ibb,
-								   ibb->handle,
-								   ibb->size,
-								   ibb->alignment);
-	}
-
-	gem_close(ibb->i915, ibb->handle);
-	ibb->handle = gem_create(ibb->i915, ibb->size);
 
 	intel_bb_add_object(ibb, ibb->handle, ibb->size,
-			    ibb->batch_offset, false);
+			    ibb->batch_offset,
+			    ibb->alignment, false);
 	ibb->ptr = ibb->batch;
 	memset(ibb->batch, 0, ibb->size);
 }
@@ -1528,8 +1568,8 @@ void intel_bb_print(struct intel_bb *ibb)
 		 ibb->i915, ibb->gen, ibb->devid, ibb->debug);
 	igt_info("handle: %u, size: %u, batch: %p, ptr: %p\n",
 		 ibb->handle, ibb->size, ibb->batch, ibb->ptr);
-	igt_info("prng: %u, gtt_size: %" PRIu64 ", supports 48bit: %d\n",
-		 ibb->prng, ibb->gtt_size, ibb->supports_48b_address);
+	igt_info("gtt_size: %" PRIu64 ", supports 48bit: %d\n",
+		 ibb->gtt_size, ibb->supports_48b_address);
 	igt_info("ctx: %u\n", ibb->ctx);
 	igt_info("root: %p\n", ibb->root);
 	igt_info("objects: %p, num_objects: %u, allocated obj: %u\n",
@@ -1605,7 +1645,7 @@ __add_to_cache(struct intel_bb *ibb, uint32_t handle)
 	if (*found == object) {
 		memset(object, 0, sizeof(*object));
 		object->handle = handle;
-		object->alignment = ibb->alignment;
+		object->offset = INTEL_BUF_INVALID_ADDRESS;
 	} else {
 		free(object);
 		object = *found;
@@ -1614,6 +1654,25 @@ __add_to_cache(struct intel_bb *ibb, uint32_t handle)
 	return object;
 }
 
+static bool __remove_from_cache(struct intel_bb *ibb, uint32_t handle)
+{
+	struct drm_i915_gem_exec_object2 **found, *object;
+
+	object = intel_bb_find_object(ibb, handle);
+	if (!object) {
+		igt_warn("Object: handle: %u not found\n", handle);
+		return false;
+	}
+
+	found = tdelete((void *) object, &ibb->root, __compare_objects);
+	if (!found)
+		return false;
+
+	free(object);
+
+	return true;
+}
+
 static int __compare_handles(const void *p1, const void *p2)
 {
 	return (int) (*(int32_t *) p1 - *(int32_t *) p2);
@@ -1639,12 +1698,54 @@ static void __add_to_objects(struct intel_bb *ibb,
 	}
 }
 
+static void __remove_from_objects(struct intel_bb *ibb,
+				  struct drm_i915_gem_exec_object2 *object)
+{
+	uint32_t i, **handle, *to_free;
+	bool found = false;
+
+	for (i = 0; i < ibb->num_objects; i++) {
+		if (ibb->objects[i] == object) {
+			found = true;
+			break;
+		}
+	}
+
+	/*
+	 * When we reset the bb (without purging) we have:
+	 * 1. a cache which contains all cached objects
+	 * 2. an objects array which contains only the bb object (cleared
+	 *    in the reset path with the bb object re-added at the end)
+	 * So !found is a normal situation and no warning is emitted here.
+	 */
+	if (!found)
+		return;
+
+	ibb->num_objects--;
+	if (i < ibb->num_objects)
+		memmove(&ibb->objects[i], &ibb->objects[i + 1],
+			sizeof(object) * (ibb->num_objects - i));
+
+	handle = tfind((void *) &object->handle,
+		       &ibb->current, __compare_handles);
+	if (!handle) {
+		igt_warn("Object %u doesn't exist in the tree, can't remove\n",
+			 object->handle);
+		return;
+	}
+
+	to_free = *handle;
+	tdelete((void *) &object->handle, &ibb->current, __compare_handles);
+	free(to_free);
+}
+
 /**
  * intel_bb_add_object:
  * @ibb: pointer to intel_bb
  * @handle: which handle to add to objects array
  * @size: object size
  * @offset: presumed offset of the object when no relocation is enforced
+ * @alignment: alignment of the object, if 0 it will be set to page size
  * @write: does a handle is a render target
  *
  * Function adds or updates execobj slot in bb objects array and
@@ -1652,23 +1753,71 @@ static void __add_to_objects(struct intel_bb *ibb,
  * be marked with EXEC_OBJECT_WRITE flag.
  */
 struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
-		    uint64_t offset, bool write)
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write)
 {
 	struct drm_i915_gem_exec_object2 *object;
 
+	igt_assert(INVALID_ADDR(offset) || alignment == 0
+		   || ALIGN(offset, alignment) == offset);
+
 	object = __add_to_cache(ibb, handle);
+	object->alignment = alignment ?: 4096;
 	__add_to_objects(ibb, object);
 
-	/* Limit current offset to gtt size */
+	/*
+	 * If object->offset == INVALID_ADDRESS the object was freshly added
+	 * to the cache. In that case we have two choices:
+	 * a) get a new offset (the passed offset was invalid)
+	 * b) use the offset passed in the call (valid)
+	 */
+	if (INVALID_ADDR(object->offset)) {
+		if (INVALID_ADDR(offset)) {
+			offset = __intel_bb_get_offset(ibb, handle, size,
+						       object->alignment);
+		} else {
+			offset = offset & (ibb->gtt_size - 1);
+
+			/*
+			 * For simple allocator check entry consistency
+			 * - reserve if it is not already allocated.
+			 */
+			if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE) {
+				bool allocated, reserved;
+
+				reserved = intel_allocator_reserve_if_not_allocated(ibb->allocator_handle,
+										    handle, size, offset,
+										    &allocated);
+				igt_assert_f(allocated || reserved,
+					     "Can't get offset, allocated: %d, reserved: %d\n",
+					     allocated, reserved);
+			}
+		}
+	} else {
+		/*
+		 * This assertion makes sense only when we have to be consistent
+		 * with the underlying allocator. For relocations and when !ppgtt
+		 * we can expect that addresses passed by the user may be moved
+		 * by the driver.
+		 */
+		if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+			igt_assert_f(object->offset == offset,
+				     "(pid: %ld) handle: %u, offset mismatch: %" PRIx64 " <> %" PRIx64 "\n",
+				     (long) getpid(), handle,
+				     (uint64_t) object->offset,
+				     offset);
+	}
+
 	object->offset = offset;
-	if (offset != INTEL_BUF_INVALID_ADDRESS)
-		object->offset = gen8_canonical_addr(offset & (ibb->gtt_size - 1));
 
-	if (object->offset == INTEL_BUF_INVALID_ADDRESS)
+	/* Limit current offset to gtt size */
+	if (offset != INTEL_BUF_INVALID_ADDRESS) {
+		object->offset = CANONICAL(offset & (ibb->gtt_size - 1));
+	} else {
 		object->offset = __intel_bb_get_offset(ibb,
 						       handle, size,
 						       object->alignment);
+	}
 
 	if (write)
 		object->flags |= EXEC_OBJECT_WRITE;
@@ -1676,40 +1825,95 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
 	if (ibb->supports_48b_address)
 		object->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
+	if (ibb->uses_full_ppgtt && ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+		object->flags |= EXEC_OBJECT_PINNED;
+
 	return object;
 }
 
-struct drm_i915_gem_exec_object2 *
-intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write)
+bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
+			    uint64_t offset, uint64_t size)
 {
-	struct drm_i915_gem_exec_object2 *obj;
+	struct drm_i915_gem_exec_object2 *object;
+	bool is_reserved;
 
-	obj = intel_bb_add_object(ibb, buf->handle,
-				  intel_buf_bo_size(buf),
-				  buf->addr.offset, write);
+	object = intel_bb_find_object(ibb, handle);
+	if (!object)
+		return false;
 
-	/* For compressed surfaces ensure address is aligned to 64KB */
-	if (ibb->gen >= 12 && buf->compression) {
-		obj->offset &= ~(0x10000 - 1);
-		obj->alignment = 0x10000;
+	if (ibb->allocator_type != INTEL_ALLOCATOR_NONE) {
+		intel_allocator_free(ibb->allocator_handle, handle);
+		is_reserved = intel_allocator_is_reserved(ibb->allocator_handle,
+							  size, offset);
+		if (is_reserved)
+			intel_allocator_unreserve(ibb->allocator_handle, handle,
+						  size, offset);
 	}
 
-	/* For gen3 ensure tiled buffers are aligned to power of two size */
-	if (ibb->gen == 3 && buf->tiling) {
-		uint64_t alignment = 1024 * 1024;
+	__remove_from_objects(ibb, object);
+	__remove_from_cache(ibb, handle);
 
-		while (alignment < buf->surface[0].size)
-			alignment <<= 1;
-		obj->offset &= ~(alignment - 1);
-		obj->alignment = alignment;
+	return true;
+}
+
+static struct drm_i915_gem_exec_object2 *
+__intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
+			 uint64_t alignment, bool write)
+{
+	struct drm_i915_gem_exec_object2 *obj;
+
+	igt_assert(ALIGN(alignment, 4096) == alignment);
+
+	if (!alignment) {
+		alignment = 0x1000;
+
+		if (ibb->gen >= 12 && buf->compression)
+			alignment = 0x10000;
+
+		/* For gen3 ensure tiled buffers are aligned to power of two size */
+		if (ibb->gen == 3 && buf->tiling) {
+			alignment = 1024 * 1024;
+
+			while (alignment < buf->surface[0].size)
+				alignment <<= 1;
+		}
 	}
 
-	/* Update address in intel_buf buffer */
+	obj = intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
+				  buf->addr.offset, alignment, write);
 	buf->addr.offset = obj->offset;
 
+	if (!ibb->enforce_relocs)
+		obj->alignment = alignment;
+
 	return obj;
 }
 
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write)
+{
+	return __intel_bb_add_intel_buf(ibb, buf, 0, write);
+}
+
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
+				      uint64_t alignment, bool write)
+{
+	return __intel_bb_add_intel_buf(ibb, buf, alignment, write);
+}
+
+bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
+{
+	bool removed = intel_bb_remove_object(ibb, buf->handle,
+					      buf->addr.offset,
+					      intel_buf_bo_size(buf));
+
+	if (removed)
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	return removed;
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 {
@@ -1717,6 +1921,8 @@ intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 	struct drm_i915_gem_exec_object2 **found;
 
 	found = tfind((void *) &object, &ibb->root, __compare_objects);
+	if (!found)
+		return NULL;
 
 	return *found;
 }
@@ -1727,6 +1933,8 @@ intel_bb_object_set_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
 	struct drm_i915_gem_exec_object2 object = { .handle = handle };
 	struct drm_i915_gem_exec_object2 **found;
 
+	igt_assert_f(ibb->root, "Trying to search in null tree\n");
+
 	found = tfind((void *) &object, &ibb->root, __compare_objects);
 	if (!found) {
 		igt_warn("Trying to set fence on not found handle: %u\n",
@@ -1766,14 +1974,9 @@ intel_bb_object_clear_flag(struct intel_bb *ibb, uint32_t handle, uint64_t flag)
  * @write_domain: gem domain bit for the relocation
  * @delta: delta value to add to @buffer's gpu address
  * @offset: offset within bb to be patched
- * @presumed_offset: address of the object in address space. If -1 is passed
- * then final offset of the object will be randomized (for no-reloc bb) or
- * 0 (for reloc bb, in that case reloc.presumed_offset will be -1). In
- * case address is known it should passed in @presumed_offset (for no-reloc).
  *
  * Function allocates additional relocation slot in reloc array for a handle.
- * It also implicitly adds handle in the objects array if object doesn't
- * exists but doesn't mark it as a render target.
+ * The object must have been added to the bb beforehand.
  */
 static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 				   uint32_t to_handle,
@@ -1788,13 +1991,8 @@ static uint64_t intel_bb_add_reloc(struct intel_bb *ibb,
 	struct drm_i915_gem_exec_object2 *object, *to_object;
 	uint32_t i;
 
-	if (ibb->enforce_relocs) {
-		object = intel_bb_add_object(ibb, handle, 0,
-					     presumed_offset, false);
-	} else {
-		object = intel_bb_find_object(ibb, handle);
-		igt_assert(object);
-	}
+	object = intel_bb_find_object(ibb, handle);
+	igt_assert(object);
 
 	/* For ibb we have relocs allocated in chunks */
 	if (to_handle == ibb->handle) {
@@ -1988,45 +2186,47 @@ static void intel_bb_dump_execbuf(struct intel_bb *ibb,
 	int i, j;
 	uint64_t address;
 
-	igt_info("execbuf batch len: %u, start offset: 0x%x, "
-		 "DR1: 0x%x, DR4: 0x%x, "
-		 "num clip: %u, clipptr: 0x%llx, "
-		 "flags: 0x%llx, rsvd1: 0x%llx, rsvd2: 0x%llx\n",
-		 execbuf->batch_len, execbuf->batch_start_offset,
-		 execbuf->DR1, execbuf->DR4,
-		 execbuf->num_cliprects, execbuf->cliprects_ptr,
-		 execbuf->flags, execbuf->rsvd1, execbuf->rsvd2);
-
-	igt_info("execbuf buffer_count: %d\n", execbuf->buffer_count);
+	igt_debug("execbuf [pid: %ld, fd: %d, ctx: %u]\n",
+		  (long) getpid(), ibb->i915, ibb->ctx);
+	igt_debug("execbuf batch len: %u, start offset: 0x%x, "
+		  "DR1: 0x%x, DR4: 0x%x, "
+		  "num clip: %u, clipptr: 0x%llx, "
+		  "flags: 0x%llx, rsvd1: 0x%llx, rsvd2: 0x%llx\n",
+		  execbuf->batch_len, execbuf->batch_start_offset,
+		  execbuf->DR1, execbuf->DR4,
+		  execbuf->num_cliprects, execbuf->cliprects_ptr,
+		  execbuf->flags, execbuf->rsvd1, execbuf->rsvd2);
+
+	igt_debug("execbuf buffer_count: %d\n", execbuf->buffer_count);
 	for (i = 0; i < execbuf->buffer_count; i++) {
 		objects = &((struct drm_i915_gem_exec_object2 *)
 			    from_user_pointer(execbuf->buffers_ptr))[i];
 		relocs = from_user_pointer(objects->relocs_ptr);
 		address = objects->offset;
-		igt_info(" [%d] handle: %u, reloc_count: %d, reloc_ptr: %p, "
-			 "align: 0x%llx, offset: 0x%" PRIx64 ", flags: 0x%llx, "
-			 "rsvd1: 0x%llx, rsvd2: 0x%llx\n",
-			 i, objects->handle, objects->relocation_count,
-			 relocs,
-			 objects->alignment,
-			 address,
-			 objects->flags,
-			 objects->rsvd1, objects->rsvd2);
+		igt_debug(" [%d] handle: %u, reloc_count: %d, reloc_ptr: %p, "
+			  "align: 0x%llx, offset: 0x%" PRIx64 ", flags: 0x%llx, "
+			  "rsvd1: 0x%llx, rsvd2: 0x%llx\n",
+			  i, objects->handle, objects->relocation_count,
+			  relocs,
+			  objects->alignment,
+			  address,
+			  objects->flags,
+			  objects->rsvd1, objects->rsvd2);
 		if (objects->relocation_count) {
-			igt_info("\texecbuf relocs:\n");
+			igt_debug("\texecbuf relocs:\n");
 			for (j = 0; j < objects->relocation_count; j++) {
 				reloc = &relocs[j];
 				address = reloc->presumed_offset;
-				igt_info("\t [%d] target handle: %u, "
-					 "offset: 0x%llx, delta: 0x%x, "
-					 "presumed_offset: 0x%" PRIx64 ", "
-					 "read_domains: 0x%x, "
-					 "write_domain: 0x%x\n",
-					 j, reloc->target_handle,
-					 reloc->offset, reloc->delta,
-					 address,
-					 reloc->read_domains,
-					 reloc->write_domain);
+				igt_debug("\t [%d] target handle: %u, "
+					  "offset: 0x%llx, delta: 0x%x, "
+					  "presumed_offset: 0x%" PRIx64 ", "
+					  "read_domains: 0x%x, "
+					  "write_domain: 0x%x\n",
+					  j, reloc->target_handle,
+					  reloc->offset, reloc->delta,
+					  address,
+					  reloc->read_domains,
+					  reloc->write_domain);
 			}
 		}
 	}
@@ -2069,6 +2269,12 @@ static void print_node(const void *node, VISIT which, int depth)
 	}
 }
 
+void intel_bb_dump_cache(struct intel_bb *ibb)
+{
+	igt_info("[pid: %ld] dump cache\n", (long) getpid());
+	twalk(ibb->root, print_node);
+}
+
 static struct drm_i915_gem_exec_object2 *
 create_objects_array(struct intel_bb *ibb)
 {
@@ -2078,8 +2284,10 @@ create_objects_array(struct intel_bb *ibb)
 	objects = malloc(sizeof(*objects) * ibb->num_objects);
 	igt_assert(objects);
 
-	for (i = 0; i < ibb->num_objects; i++)
+	for (i = 0; i < ibb->num_objects; i++) {
 		objects[i] = *(ibb->objects[i]);
+		objects[i].offset = CANONICAL(objects[i].offset);
+	}
 
 	return objects;
 }
@@ -2094,7 +2302,10 @@ static void update_offsets(struct intel_bb *ibb,
 		object = intel_bb_find_object(ibb, objects[i].handle);
 		igt_assert(object);
 
-		object->offset = objects[i].offset;
+		object->offset = DECANONICAL(objects[i].offset);
+
+		if (i == 0)
+			ibb->batch_offset = object->offset;
 	}
 }
 
@@ -2122,6 +2333,7 @@ static int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	ibb->objects[0]->relocs_ptr = to_user_pointer(ibb->relocs);
 	ibb->objects[0]->relocation_count = ibb->num_relocs;
 	ibb->objects[0]->handle = ibb->handle;
+	ibb->objects[0]->offset = ibb->batch_offset;
 
 	gem_write(ibb->i915, ibb->handle, 0, ibb->batch, ibb->size);
 
@@ -2206,7 +2418,6 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 {
 	struct drm_i915_gem_exec_object2 object = { .handle = handle };
 	struct drm_i915_gem_exec_object2 **found;
-	uint64_t address;
 
 	igt_assert(ibb);
 
@@ -2214,12 +2425,7 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 	if (!found)
 		return INTEL_BUF_INVALID_ADDRESS;
 
-	address = (*found)->offset;
-
-	if (address == INTEL_BUF_INVALID_ADDRESS)
-		return address;
-
-	return address & (ibb->gtt_size - 1);
+	return (*found)->offset;
 }
 
 /**
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 76989eaae..b9a4ee7e8 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -8,6 +8,7 @@
 #include "igt_core.h"
 #include "intel_reg.h"
 #include "drmtest.h"
+#include "intel_allocator.h"
 
 #define BATCH_SZ 4096
 #define BATCH_RESERVED 16
@@ -441,6 +442,9 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
  * Batchbuffer without libdrm dependency
  */
 struct intel_bb {
+	uint64_t allocator_handle;
+	uint8_t allocator_type;
+
 	int i915;
 	unsigned int gen;
 	bool debug;
@@ -454,9 +458,9 @@ struct intel_bb {
 	uint64_t alignment;
 	int fence;
 
-	uint32_t prng;
 	uint64_t gtt_size;
 	bool supports_48b_address;
+	bool uses_full_ppgtt;
 
 	uint32_t ctx;
 
@@ -484,6 +488,8 @@ struct intel_bb {
 	int32_t refcount;
 };
 
+struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+				      uint8_t allocator_type);
 struct intel_bb *intel_bb_create(int i915, uint32_t size);
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size);
@@ -510,6 +516,7 @@ void intel_bb_dump(struct intel_bb *ibb, const char *filename);
 void intel_bb_set_debug(struct intel_bb *ibb, bool debug);
 void intel_bb_set_dump_base64(struct intel_bb *ibb, bool dump);
 
+/*
 static inline uint64_t
 intel_bb_set_default_object_alignment(struct intel_bb *ibb, uint64_t alignment)
 {
@@ -525,6 +532,7 @@ intel_bb_get_default_object_alignment(struct intel_bb *ibb)
 {
 	return ibb->alignment;
 }
+*/
 
 static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
 {
@@ -574,11 +582,16 @@ static inline void intel_bb_out(struct intel_bb *ibb, uint32_t dword)
 }
 
 struct drm_i915_gem_exec_object2 *
-intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint32_t size,
-		    uint64_t offset, bool write);
+intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
+		    uint64_t offset, uint64_t alignment, bool write);
+bool intel_bb_remove_object(struct intel_bb *ibb, uint32_t handle,
+			    uint64_t offset, uint64_t size);
 struct drm_i915_gem_exec_object2 *
 intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf, bool write);
-
+struct drm_i915_gem_exec_object2 *
+intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
+				      uint64_t alignment, bool write);
+bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle);
 
@@ -625,6 +638,8 @@ uint64_t intel_bb_offset_reloc_to_object(struct intel_bb *ibb,
 					 uint32_t offset,
 					 uint64_t presumed_offset);
 
+void intel_bb_dump_cache(struct intel_bb *ibb);
+
 void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 		   uint64_t flags, bool sync);
 
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 14bfeadb3..c6c943506 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -810,11 +810,11 @@ static void offset_control(struct buf_ops *bops)
 	dst2 = create_buf(bops, WIDTH, HEIGHT, COLOR_77);
 
 	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
-			    src->addr.offset, false);
+			    src->addr.offset, 0, false);
 	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
-			    dst1->addr.offset, true);
+			    dst1->addr.offset, 0, true);
 	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
-			    dst2->addr.offset, true);
+			    dst2->addr.offset, 0, true);
 
 	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
 	intel_bb_ptr_align(ibb, 8);
@@ -838,13 +838,13 @@ static void offset_control(struct buf_ops *bops)
 
 	dst3 = create_buf(bops, WIDTH, HEIGHT, COLOR_33);
 	intel_bb_add_object(ibb, dst3->handle, intel_buf_bo_size(dst3),
-			    dst3->addr.offset, true);
+			    dst3->addr.offset, 0, true);
 	intel_bb_add_object(ibb, src->handle, intel_buf_bo_size(src),
-			    src->addr.offset, false);
+			    src->addr.offset, 0, false);
 	intel_bb_add_object(ibb, dst1->handle, intel_buf_bo_size(dst1),
-			    dst1->addr.offset, true);
+			    dst1->addr.offset, 0, true);
 	intel_bb_add_object(ibb, dst2->handle, intel_buf_bo_size(dst2),
-			    dst2->addr.offset, true);
+			    dst2->addr.offset, 0, true);
 
 	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
 	intel_bb_ptr_align(ibb, 8);
@@ -901,7 +901,7 @@ static void delta_check(struct buf_ops *bops)
 	buf = create_buf(bops, 0x1000, 0x10, COLOR_CC);
 	buf->addr.offset = 0xfffff000;
 	intel_bb_add_object(ibb, buf->handle, intel_buf_bo_size(buf),
-			    buf->addr.offset, false);
+			    buf->addr.offset, 0, false);
 
 	intel_bb_out(ibb, MI_STORE_DWORD_IMM);
 	intel_bb_emit_reloc(ibb, buf->handle,
diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index 8b29d0a0a..5e48cd697 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -614,8 +614,8 @@ static void blt_coherency(int i915)
 	dst = create_bo(bops, 1, width, height);
 	size = src->surface[0].size;
 
-	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, false);
-	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, true);
+	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, 0, false);
+	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, 0, true);
 
 	intel_bb_blt_copy(ibb,
 			  src, 0, 0, src->surface[0].stride,
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (13 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
                   ` (21 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

We want to use relocations as long as we can, so change the intel-bb
strategy to use them as the default up to gen12. On gen12 we have
to use softpinning for the ccs aux tables, so there we enforce use
of the allocator instead of relocations.
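
The gating condition the constructors adopt boils down to the
following (mirroring the diff below):

	bool relocs = gem_has_relocations(i915);
	unsigned int gen = intel_gen(intel_get_drm_devid(i915));

	/* relocations only below gen12 and only if the kernel still
	 * supports them; otherwise use the allocator */
	bool do_relocs = relocs && gen < 12;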

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 6fb350924..4f67bb127 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1275,25 +1275,25 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	igt_assert(ibb);
 
 	ibb->uses_full_ppgtt = gem_uses_full_ppgtt(i915);
+	ibb->devid = intel_get_drm_devid(i915);
+	ibb->gen = intel_gen(ibb->devid);
 
 	/*
 	 * If we don't have full ppgtt the driver can change our addresses,
 	 * so the allocator is useless in this case. Just enforce relocations
 	 * for such gens and don't use the allocator at all.
 	 */
-	if (!ibb->uses_full_ppgtt) {
+	if (!ibb->uses_full_ppgtt)
 		do_relocs = true;
-		allocator_type = INTEL_ALLOCATOR_NONE;
-	}
 
-	if (!do_relocs)
-		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+	/* if relocs are set we won't use an allocator */
+	if (do_relocs)
+		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		igt_assert(allocator_type == INTEL_ALLOCATOR_NONE);
+		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
+
 	ibb->allocator_type = allocator_type;
 	ibb->i915 = i915;
-	ibb->devid = intel_get_drm_devid(i915);
-	ibb->gen = intel_gen(ibb->devid);
 	ibb->enforce_relocs = do_relocs;
 	ibb->handle = gem_create(i915, size);
 	ibb->size = size;
@@ -1327,7 +1327,7 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
  * in @allocator_type. Relocations are set to false because IGT allocator
- * is not used in that case.
+ * is used in that case.
  *
  * Returns:
  *
@@ -1352,7 +1352,11 @@ struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
  */
 struct intel_bb *intel_bb_create(int i915, uint32_t size)
 {
-	return __intel_bb_create(i915, 0, size, false, INTEL_ALLOCATOR_SIMPLE);
+	bool relocs = gem_has_relocations(i915);
+	unsigned int gen = intel_gen(intel_get_drm_devid(i915));
+
+	return __intel_bb_create(i915, 0, size, relocs && gen < 12,
+				 INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1370,7 +1374,11 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 {
-	return __intel_bb_create(i915, ctx, size, false, INTEL_ALLOCATOR_SIMPLE);
+	bool relocs = gem_has_relocations(i915);
+	unsigned int gen = intel_gen(intel_get_drm_devid(i915));
+
+	return __intel_bb_create(i915, ctx, size, relocs && gen < 12,
+				 INTEL_ALLOCATOR_SIMPLE);
 }
 
 /**
@@ -1387,6 +1395,8 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
  */
 struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
+	igt_require(gem_has_relocations(i915));
+
 	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
 }
 
@@ -1406,6 +1416,8 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 struct intel_bb *
 intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
+	igt_require(gem_has_relocations(i915));
+
 	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
 }
 
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (14 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
                   ` (20 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

Previously intel-bb just used the default allocator settings (safe,
chosen to work out of the box). That limitation is not good if we want
to exercise non-standard cases (specific vm ranges, for example). As
the allocator supports passing full settings, let intel-bb use them too.
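
A usage sketch of the extended interface (the vm range values here
are only illustrative):

	struct intel_bb *ibb;

	/* restrict allocations to [1 MiB, 1 GiB) in the ppgtt */
	ibb = intel_bb_create_full(i915, 0, 4096,
				   0x100000, 1ull << 30,
				   INTEL_ALLOCATOR_SIMPLE,
				   ALLOC_STRATEGY_HIGH_TO_LOW);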

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c | 63 +++++++++++++++++++++++++++++++++--------
 lib/intel_batchbuffer.h | 10 +++++--
 2 files changed, 59 insertions(+), 14 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 4f67bb127..2d9c08d6a 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1267,7 +1267,8 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  */
 static struct intel_bb *
 __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
-		  uint8_t allocator_type)
+		  uint64_t start, uint64_t end,
+		  uint8_t allocator_type, enum allocator_strategy strategy)
 {
 	struct drm_i915_gem_exec_object2 *object;
 	struct intel_bb *ibb = calloc(1, sizeof(*ibb));
@@ -1290,9 +1291,12 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 	if (do_relocs)
 		allocator_type = INTEL_ALLOCATOR_NONE;
 	else
-		ibb->allocator_handle = intel_allocator_open(i915, ctx, allocator_type);
-
+		ibb->allocator_handle = intel_allocator_open_full(i915, ctx,
+								  start, end,
+								  allocator_type,
+								  strategy);
 	ibb->allocator_type = allocator_type;
+	ibb->allocator_strategy = strategy;
 	ibb->i915 = i915;
 	ibb->enforce_relocs = do_relocs;
 	ibb->handle = gem_create(i915, size);
@@ -1323,20 +1327,51 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
  * @i915: drm fd
  * @ctx: context
  * @size: size of the batchbuffer
+ * @start: allocator vm start address
+ * @end: allocator vm end address
  * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ * @strategy: allocation strategy
  *
  * Creates bb with context passed in @ctx, size in @size and allocator type
  * in @allocator_type. Relocations are set to false because IGT allocator
- * is used in that case.
+ * is used in that case. The VM range (@start and @end) is passed to the
+ * allocator along with the allocation @strategy (a hint to the allocator
+ * about address allocation preferences).
  *
  * Returns:
  *
 * Pointer to the intel_bb, asserts on failure.
  */
 struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
-				      uint8_t allocator_type)
+				      uint64_t start, uint64_t end,
+				      uint8_t allocator_type,
+				      enum allocator_strategy strategy)
+{
+	return __intel_bb_create(i915, ctx, size, false, start, end,
+				 allocator_type, strategy);
+}
+
+/**
+ * intel_bb_create_with_allocator:
+ * @i915: drm fd
+ * @ctx: context
+ * @size: size of the batchbuffer
+ * @allocator_type: allocator type, SIMPLE, RANDOM, ...
+ *
+ * Creates bb with context passed in @ctx, size in @size and allocator type
+ * in @allocator_type. Relocations are set to false because IGT allocator
+ * is used in that case.
+ *
+ * Returns:
+ *
+ * Pointer to the intel_bb, asserts on failure.
+ */
+struct intel_bb *intel_bb_create_with_allocator(int i915, uint32_t ctx,
+						uint32_t size,
+						uint8_t allocator_type)
 {
-	return __intel_bb_create(i915, ctx, size, false, allocator_type);
+	return __intel_bb_create(i915, ctx, size, false, 0, 0,
+				 allocator_type, ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1355,8 +1390,9 @@ struct intel_bb *intel_bb_create(int i915, uint32_t size)
 	bool relocs = gem_has_relocations(i915);
 	unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 
-	return __intel_bb_create(i915, 0, size, relocs && gen < 12,
-				 INTEL_ALLOCATOR_SIMPLE);
+	return __intel_bb_create(i915, 0, size, relocs && gen < 12, 0, 0,
+				 INTEL_ALLOCATOR_SIMPLE,
+				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1377,8 +1413,9 @@ intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size)
 	bool relocs = gem_has_relocations(i915);
 	unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 
-	return __intel_bb_create(i915, ctx, size, relocs && gen < 12,
-				 INTEL_ALLOCATOR_SIMPLE);
+	return __intel_bb_create(i915, ctx, size, relocs && gen < 12, 0, 0,
+				 INTEL_ALLOCATOR_SIMPLE,
+				 ALLOC_STRATEGY_HIGH_TO_LOW);
 }
 
 /**
@@ -1397,7 +1434,8 @@ struct intel_bb *intel_bb_create_with_relocs(int i915, uint32_t size)
 {
 	igt_require(gem_has_relocations(i915));
 
-	return __intel_bb_create(i915, 0, size, true, INTEL_ALLOCATOR_NONE);
+	return __intel_bb_create(i915, 0, size, true, 0, 0,
+				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 /**
@@ -1418,7 +1456,8 @@ intel_bb_create_with_relocs_and_context(int i915, uint32_t ctx, uint32_t size)
 {
 	igt_require(gem_has_relocations(i915));
 
-	return __intel_bb_create(i915, ctx, size, true, INTEL_ALLOCATOR_NONE);
+	return __intel_bb_create(i915, ctx, size, true, 0, 0,
+				 INTEL_ALLOCATOR_NONE, ALLOC_STRATEGY_NONE);
 }
 
 static void __intel_bb_destroy_relocations(struct intel_bb *ibb)
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index b9a4ee7e8..702052d22 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -444,6 +444,7 @@ igt_media_spinfunc_t igt_get_media_spinfunc(int devid);
 struct intel_bb {
 	uint64_t allocator_handle;
 	uint8_t allocator_type;
+	enum allocator_strategy allocator_strategy;
 
 	int i915;
 	unsigned int gen;
@@ -488,8 +489,13 @@ struct intel_bb {
 	int32_t refcount;
 };
 
-struct intel_bb *intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
-				      uint8_t allocator_type);
+struct intel_bb *
+intel_bb_create_full(int i915, uint32_t ctx, uint32_t size,
+		     uint64_t start, uint64_t end,
+		     uint8_t allocator_type, enum allocator_strategy strategy);
+struct intel_bb *
+intel_bb_create_with_allocator(int i915, uint32_t ctx,
+			       uint32_t size, uint8_t allocator_type);
 struct intel_bb *intel_bb_create(int i915, uint32_t size);
 struct intel_bb *
 intel_bb_create_with_context(int i915, uint32_t ctx, uint32_t size);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (15 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 18/35] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
                   ` (19 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

From now on intel_bb tracks added/removed intel_bufs, so we are now
safe regardless of the order of the intel_buf close/destroy and
intel_bb destroy paths (see the ordering sketch below).

When an intel_buf that was previously added to an intel_bb is closed or
destroyed first, it removes itself from the intel_bb. In the intel_bb
destroy path we walk all tracked intel_bufs and clear their tracking
information and buffer offset (it is set to INTEL_BUF_INVALID_ADDRESS).

Reset path is handled as follows:
- intel_bb_reset(ibb, false) - just clean the objects array, leaving
  cache / allocator state intact.
- intel_bb_reset(ibb, true) - purge the cache as well as detach intel_bufs
  from the intel_bb (releasing their offsets in the allocator).

Remove the intel_bb_object_offset_to_buf() function, as tracking now
updates (or, for the allocator, verifies) intel_buf offsets after execbuf.

Adjust api_intel_bb according to the intel-bb changes.
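
A minimal sketch of the ordering this makes safe (an intel_buf being
closed/destroyed while still tracked by a bb; identifiers as in this
patch):

	buf = intel_buf_create(bops, 512, 512, 32, 0,
			       I915_TILING_NONE, I915_COMPRESSION_NONE);
	intel_bb_add_intel_buf(ibb, buf, true);

	/* Via intel_buf_close() the buf detaches itself from ibb; its
	 * offset becomes INTEL_BUF_INVALID_ADDRESS and its address is
	 * released in the allocator. */
	intel_buf_destroy(buf);

	/* ibb no longer references buf, so this cannot touch freed memory. */
	intel_bb_destroy(ibb);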

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/intel_batchbuffer.c   | 119 +++++++++++++++++++++++---------------
 lib/intel_batchbuffer.h   |   5 ++
 lib/intel_bufops.c        |  13 ++++-
 lib/intel_bufops.h        |   6 ++
 lib/media_spin.c          |   2 -
 tests/i915/api_intel_bb.c |   7 ---
 6 files changed, 96 insertions(+), 56 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 2d9c08d6a..b76a8bb87 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1261,6 +1261,8 @@ static inline uint64_t __intel_bb_get_offset(struct intel_bb *ibb,
  * If we do reset without purging caches we use addresses from intel-bb cache
  * during execbuf objects construction.
  *
+ * If we do a reset with cache purging, allocator entries are freed as well.
+ *
  * Returns:
  *
  * Pointer the intel_bb, asserts on failure.
@@ -1317,6 +1319,8 @@ __intel_bb_create(int i915, uint32_t ctx, uint32_t size, bool do_relocs,
 				     false);
 	ibb->batch_offset = object->offset;
 
+	IGT_INIT_LIST_HEAD(&ibb->intel_bufs);
+
 	ibb->refcount = 1;
 
 	return ibb;
@@ -1494,6 +1498,15 @@ static void __intel_bb_destroy_cache(struct intel_bb *ibb)
 	ibb->root = NULL;
 }
 
+static void __intel_bb_remove_intel_bufs(struct intel_bb *ibb)
+{
+	struct intel_buf *entry, *tmp;
+
+	igt_list_for_each_entry_safe(entry, tmp, &ibb->intel_bufs, link) {
+		intel_bb_remove_intel_buf(ibb, entry);
+	}
+}
+
 /**
  * intel_bb_destroy:
  * @ibb: pointer to intel_bb
@@ -1507,6 +1520,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
 	ibb->refcount--;
 	igt_assert_f(ibb->refcount == 0, "Trying to destroy referenced bb!");
 
+	__intel_bb_remove_intel_bufs(ibb);
 	__intel_bb_destroy_relocations(ibb);
 	__intel_bb_destroy_objects(ibb);
 	__intel_bb_destroy_cache(ibb);
@@ -1530,6 +1544,10 @@ void intel_bb_destroy(struct intel_bb *ibb)
  * @purge_objects_cache: if true destroy internal execobj and relocs + cache
  *
  * Recreate batch bo when there's no additional reference.
+ *
+ * When @purge_objects_cache is true we destroy the cache and remove all
+ * intel_bufs from the intel-bb tracking list. Removing intel_bufs releases
+ * their addresses in the allocator.
 */
 
 void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
@@ -1555,8 +1573,10 @@ void intel_bb_reset(struct intel_bb *ibb, bool purge_objects_cache)
 	__intel_bb_destroy_objects(ibb);
 	__reallocate_objects(ibb);
 
-	if (purge_objects_cache)
+	if (purge_objects_cache) {
+		__intel_bb_remove_intel_bufs(ibb);
 		__intel_bb_destroy_cache(ibb);
+	}
 
 	/*
 	 * When we use allocators we're in no-reloc mode so we have to free
@@ -1861,22 +1881,13 @@ intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 
 	object->offset = offset;
 
-	/* Limit current offset to gtt size */
-	if (offset != INTEL_BUF_INVALID_ADDRESS) {
-		object->offset = CANONICAL(offset & (ibb->gtt_size - 1));
-	} else {
-		object->offset = __intel_bb_get_offset(ibb,
-						       handle, size,
-						       object->alignment);
-	}
-
 	if (write)
 		object->flags |= EXEC_OBJECT_WRITE;
 
 	if (ibb->supports_48b_address)
 		object->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
 
-	if (ibb->uses_full_ppgtt && ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+	if (ibb->uses_full_ppgtt && !ibb->enforce_relocs)
 		object->flags |= EXEC_OBJECT_PINNED;
 
 	return object;
@@ -1913,6 +1924,9 @@ __intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
 {
 	struct drm_i915_gem_exec_object2 *obj;
 
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
 	igt_assert(ALIGN(alignment, 4096) == alignment);
 
 	if (!alignment) {
@@ -1937,6 +1951,13 @@ __intel_bb_add_intel_buf(struct intel_bb *ibb, struct intel_buf *buf,
 	if (!ibb->enforce_relocs)
 		obj->alignment = alignment;
 
+	if (igt_list_empty(&buf->link)) {
+		igt_list_add_tail(&buf->link, &ibb->intel_bufs);
+		buf->ibb = ibb;
+	} else {
+		igt_assert(buf->ibb == ibb);
+	}
+
 	return obj;
 }
 
@@ -1955,16 +1976,36 @@ intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *bu
 
 bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf)
 {
-	bool removed = intel_bb_remove_object(ibb, buf->handle,
-					      buf->addr.offset,
-					      intel_buf_bo_size(buf));
+	bool removed;
+
+	igt_assert(ibb);
+	igt_assert(buf);
+	igt_assert(!buf->ibb || buf->ibb == ibb);
 
-	if (removed)
+	removed = intel_bb_remove_object(ibb, buf->handle,
+					 buf->addr.offset,
+					 intel_buf_bo_size(buf));
+
+	if (removed && !igt_list_empty(&buf->link)) {
 		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		igt_list_del_init(&buf->link);
+	}
 
 	return removed;
 }
 
+void intel_bb_intel_buf_list(struct intel_bb *ibb)
+{
+	struct intel_buf *entry;
+
+	igt_list_for_each_entry(entry, &ibb->intel_bufs, link) {
+		igt_info("handle: %u, ibb: %p, offset: %lx\n",
+			 entry->handle, entry->ibb,
+			 (long) entry->addr.offset);
+	}
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle)
 {
@@ -2347,6 +2388,7 @@ static void update_offsets(struct intel_bb *ibb,
 			   struct drm_i915_gem_exec_object2 *objects)
 {
 	struct drm_i915_gem_exec_object2 *object;
+	struct intel_buf *entry;
 	uint32_t i;
 
 	for (i = 0; i < ibb->num_objects; i++) {
@@ -2358,11 +2400,23 @@ static void update_offsets(struct intel_bb *ibb,
 		if (i == 0)
 			ibb->batch_offset = object->offset;
 	}
+
+	igt_list_for_each_entry(entry, &ibb->intel_bufs, link) {
+		object = intel_bb_find_object(ibb, entry->handle);
+		igt_assert(object);
+
+		if (ibb->allocator_type == INTEL_ALLOCATOR_SIMPLE)
+			igt_assert(object->offset == entry->addr.offset);
+		else
+			entry->addr.offset = object->offset;
+
+		entry->addr.ctx = ibb->ctx;
+	}
 }
 
 #define LINELEN 76
 /*
- * @__intel_bb_exec:
+ * __intel_bb_exec:
  * @ibb: pointer to intel_bb
  * @end_offset: offset of the last instruction in the bb
  * @flags: flags passed directly to execbuf
@@ -2402,6 +2456,9 @@ static int __intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 	if (ibb->dump_base64)
 		intel_bb_dump_base64(ibb, LINELEN);
 
+	/* For debugging on CI, remove in final series */
+	intel_bb_dump_execbuf(ibb, &execbuf);
+
 	ret = __gem_execbuf_wr(ibb->i915, &execbuf);
 	if (ret) {
 		intel_bb_dump_execbuf(ibb, &execbuf);
@@ -2479,36 +2536,6 @@ uint64_t intel_bb_get_object_offset(struct intel_bb *ibb, uint32_t handle)
 	return (*found)->offset;
 }
 
-/**
- * intel_bb_object_offset_to_buf:
- * @ibb: pointer to intel_bb
- * @buf: buffer we want to store last exec offset and context id
- *
- * Copy object offset used in the batch to intel_buf to allow caller prepare
- * other batch likely without relocations.
- */
-bool intel_bb_object_offset_to_buf(struct intel_bb *ibb, struct intel_buf *buf)
-{
-	struct drm_i915_gem_exec_object2 object = { .handle = buf->handle };
-	struct drm_i915_gem_exec_object2 **found;
-
-	igt_assert(ibb);
-	igt_assert(buf);
-
-	found = tfind((void *)&object, &ibb->root, __compare_objects);
-	if (!found) {
-		buf->addr.offset = 0;
-		buf->addr.ctx = ibb->ctx;
-
-		return false;
-	}
-
-	buf->addr.offset = (*found)->offset & (ibb->gtt_size - 1);
-	buf->addr.ctx = ibb->ctx;
-
-	return true;
-}
-
 /*
  * intel_bb_emit_bbe:
  * @ibb: batchbuffer
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 702052d22..f8a38967b 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -6,6 +6,7 @@
 #include <i915_drm.h>
 
 #include "igt_core.h"
+#include "igt_list.h"
 #include "intel_reg.h"
 #include "drmtest.h"
 #include "intel_allocator.h"
@@ -481,6 +482,9 @@ struct intel_bb {
 	uint32_t num_relocs;
 	uint32_t allocated_relocs;
 
+	/* Tracked intel_bufs */
+	struct igt_list_head intel_bufs;
+
 	/*
 	 * BO recreate in reset path only when refcount == 0
 	 * Currently we don't need to use atomics because intel_bb
@@ -598,6 +602,7 @@ struct drm_i915_gem_exec_object2 *
 intel_bb_add_intel_buf_with_alignment(struct intel_bb *ibb, struct intel_buf *buf,
 				      uint64_t alignment, bool write);
 bool intel_bb_remove_intel_buf(struct intel_bb *ibb, struct intel_buf *buf);
+void intel_bb_intel_buf_list(struct intel_bb *ibb);
 struct drm_i915_gem_exec_object2 *
 intel_bb_find_object(struct intel_bb *ibb, uint32_t handle);
 
diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c
index d8eb64e3a..166a957f1 100644
--- a/lib/intel_bufops.c
+++ b/lib/intel_bufops.c
@@ -727,6 +727,7 @@ static void __intel_buf_init(struct buf_ops *bops,
 
 	buf->bops = bops;
 	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+	IGT_INIT_LIST_HEAD(&buf->link);
 
 	if (compression) {
 		int aux_width, aux_height;
@@ -822,13 +823,23 @@ void intel_buf_init(struct buf_ops *bops,
  *
  * Function closes gem BO inside intel_buf if bo is owned by intel_buf.
  * For handle passed from the caller intel_buf doesn't take ownership and
- * doesn't close it in close()/destroy() paths.
+ * doesn't close it in close()/destroy() paths. When intel_buf was previously
+ * added to intel_bb (intel_bb_add_intel_buf() call) it is tracked there and
+ * must be removed from its internal structures.
  */
 void intel_buf_close(struct buf_ops *bops, struct intel_buf *buf)
 {
 	igt_assert(bops);
 	igt_assert(buf);
 
+	/* If buf is tracked by some intel_bb ensure it will be removed there */
+	if (buf->ibb) {
+		intel_bb_remove_intel_buf(buf->ibb, buf);
+		buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+		buf->ibb = NULL;
+		IGT_INIT_LIST_HEAD(&buf->link);
+	}
+
 	if (buf->is_owner)
 		gem_close(bops->fd, buf->handle);
 }
diff --git a/lib/intel_bufops.h b/lib/intel_bufops.h
index 54480bff6..1a3d86925 100644
--- a/lib/intel_bufops.h
+++ b/lib/intel_bufops.h
@@ -2,6 +2,7 @@
 #define __INTEL_BUFOPS_H__
 
 #include <stdint.h>
+#include "igt_list.h"
 #include "igt_aux.h"
 #include "intel_batchbuffer.h"
 
@@ -13,6 +14,7 @@ struct buf_ops;
 
 struct intel_buf {
 	struct buf_ops *bops;
+
 	bool is_owner;
 	uint32_t handle;
 	uint64_t size;
@@ -40,6 +42,10 @@ struct intel_buf {
 		uint32_t ctx;
 	} addr;
 
+	/* Tracking */
+	struct intel_bb *ibb;
+	struct igt_list_head link;
+
 	/* CPU mapping */
 	uint32_t *ptr;
 	bool cpu_write;
diff --git a/lib/media_spin.c b/lib/media_spin.c
index 5da469a52..d2345d153 100644
--- a/lib/media_spin.c
+++ b/lib/media_spin.c
@@ -132,7 +132,6 @@ gen8_media_spinfunc(int i915, struct intel_buf *buf, uint32_t spins)
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 
-	intel_bb_object_offset_to_buf(ibb, buf);
 	intel_bb_destroy(ibb);
 }
 
@@ -186,6 +185,5 @@ gen9_media_spinfunc(int i915, struct intel_buf *buf, uint32_t spins)
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 
-	intel_bb_object_offset_to_buf(ibb, buf);
 	intel_bb_destroy(ibb);
 }
diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index c6c943506..77dfb6854 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -828,9 +828,6 @@ static void offset_control(struct buf_ops *bops)
 		print_buf(dst2, "dst2");
 	}
 
-	igt_assert(intel_bb_object_offset_to_buf(ibb, src) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst1) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst2) == true);
 	poff_src = src->addr.offset;
 	poff_dst1 = dst1->addr.offset;
 	poff_dst2 = dst2->addr.offset;
@@ -853,10 +850,6 @@ static void offset_control(struct buf_ops *bops)
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, false);
 	intel_bb_sync(ibb);
 
-	igt_assert(intel_bb_object_offset_to_buf(ibb, src) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst1) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst2) == true);
-	igt_assert(intel_bb_object_offset_to_buf(ibb, dst3) == true);
 	igt_assert(poff_src == src->addr.offset);
 	igt_assert(poff_dst1 == dst1->addr.offset);
 	igt_assert(poff_dst2 == dst2->addr.offset);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 18/35] lib/igt_fb: Initialize intel_buf with same size as fb
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (16 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 19/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
                   ` (18 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

We need to have the same size when an intel_buf is initialized over an fb
(with compression), because otherwise the allocator could be called with
a smaller size than the underlying bo, which could lead to relocation.

Use the new intel_buf function which allows initializing with a handle
and a size.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/igt_fb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index 4b9be47eb..375d3a35f 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2174,11 +2174,11 @@ igt_fb_create_intel_buf(int fd, struct buf_ops *bops,
 	bo_name = gem_flink(fd, fb->gem_handle);
 	handle = gem_open(fd, bo_name);
 
-	buf = intel_buf_create_using_handle(bops, handle,
-					    fb->width, fb->height,
-					    fb->plane_bpp[0], 0,
-					    igt_fb_mod_to_tiling(fb->modifier),
-					    compression);
+	buf = intel_buf_create_using_handle_and_size(bops, handle,
+						     fb->width, fb->height,
+						     fb->plane_bpp[0], 0,
+						     igt_fb_mod_to_tiling(fb->modifier),
+						     compression, fb->size);
 	intel_buf_set_name(buf, name);
 
 	/* Make sure we close handle on destroy path */
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 19/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (17 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 18/35] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 20/35] tests/api_intel_bb: Add subtest to check render batch on the last page Zbigniew Kempczyński
                   ` (17 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

intel_bb was adapted to use the allocator. Change the test to verify
addresses in different scenarios - with relocations and with softpin
(see the range-check sketch below).

v2: skip addresses which are beyond allocator range in check-canonical

v3: adding an intel_buf to an intel_bb inserts its address, so addresses
    should stay the same even if an intel-bb cache purge was called
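
For reference, the softpin checks below boil down to asserting that each
presumed offset lands within the allocator's vm range (a sketch built
from helpers in this patch; @handle stands for any object already added
to the bb):

	uint64_t start, end, offset;

	intel_allocator_get_address_range(ibb->allocator_handle,
					  &start, &end);
	offset = intel_bb_get_object_offset(ibb, handle);
	/* WITHIN_RANGE() decanonicalizes the offset before comparing */
	igt_assert(WITHIN_RANGE(offset, start, end));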

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 536 +++++++++++++++++++++++++++++---------
 1 file changed, 414 insertions(+), 122 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 77dfb6854..d5ed77314 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -22,6 +22,7 @@
  */
 
 #include "igt.h"
+#include "i915/gem.h"
 #include <unistd.h>
 #include <stdlib.h>
 #include <stdio.h>
@@ -123,24 +124,34 @@ static void print_buf(struct intel_buf *buf, const char *name)
 
 	ptr = gem_mmap__device_coherent(i915, buf->handle, 0,
 					buf->surface[0].size, PROT_READ);
-	igt_debug("[%s] Buf handle: %d, size: %" PRIx64 ", v: 0x%02x, presumed_addr: %p\n",
+	igt_debug("[%s] Buf handle: %d, size: %" PRIu64
+		  ", v: 0x%02x, presumed_addr: %p\n",
 		  name, buf->handle, buf->surface[0].size, ptr[0],
 		  from_user_pointer(buf->addr.offset));
 	munmap(ptr, buf->surface[0].size);
 }
 
+static void reset_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	intel_bb_reset(ibb, false);
+	intel_bb_destroy(ibb);
+}
+
 static void simple_bb(struct buf_ops *bops, bool use_context)
 {
 	int i915 = buf_ops_get_fd(bops);
 	struct intel_bb *ibb;
-	uint32_t ctx;
+	uint32_t ctx = 0;
 
-	if (use_context) {
+	if (use_context)
 		gem_require_contexts(i915);
-		ctx = gem_context_create(i915);
-	}
 
-	ibb = intel_bb_create(i915, PAGE_SIZE);
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
@@ -155,10 +166,8 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 	intel_bb_reset(ibb, false);
 	intel_bb_reset(ibb, true);
 
-	intel_bb_out(ibb, MI_BATCH_BUFFER_END);
-	intel_bb_ptr_align(ibb, 8);
-
 	if (use_context) {
+		ctx = gem_context_create(i915);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(i915, ctx, PAGE_SIZE);
 		intel_bb_out(ibb, MI_BATCH_BUFFER_END);
@@ -166,11 +175,10 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 		intel_bb_exec(ibb, intel_bb_offset(ibb),
 			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC,
 			      true);
+		gem_context_destroy(i915, ctx);
 	}
 
 	intel_bb_destroy(ibb);
-	if (use_context)
-		gem_context_destroy(i915, ctx);
 }
 
 /*
@@ -194,29 +202,43 @@ static void lot_of_buffers(struct buf_ops *bops)
 	for (i = 0; i < NUM_BUFS; i++) {
 		buf[i] = intel_buf_create(bops, 4096, 1, 8, 0, I915_TILING_NONE,
 					  I915_COMPRESSION_NONE);
-		intel_bb_add_intel_buf(ibb, buf[i], false);
+		if (i % 2)
+			intel_bb_add_intel_buf(ibb, buf[i], false);
+		else
+			intel_bb_add_intel_buf_with_alignment(ibb, buf[i],
+							      0x4000, false);
 	}
 
 	intel_bb_exec(ibb, intel_bb_offset(ibb),
 		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
 
-	intel_bb_destroy(ibb);
-
 	for (i = 0; i < NUM_BUFS; i++)
 		intel_buf_destroy(buf[i]);
+
+	intel_bb_destroy(ibb);
 }
 
 /*
  * Make sure intel-bb space allocator currently doesn't enter 47-48 bit
  * gtt sizes.
  */
+#define GEN8_HIGH_ADDRESS_BIT 47
+static uint64_t gen8_canonical_addr(uint64_t address)
+{
+	int shift = 63 - GEN8_HIGH_ADDRESS_BIT;
+
+	return (int64_t)(address << shift) >> shift;
+}
+
 static void check_canonical(struct buf_ops *bops)
 {
-	int i915 = buf_ops_get_fd(bops);
+	int i915 = buf_ops_get_fd(bops), i;
 	struct intel_bb *ibb;
 	struct intel_buf *buf;
 	uint32_t offset;
-	uint64_t address;
+	/* Same address twice, to verify it gets unreserved in the remove path */
+	uint64_t addresses[] = { 0x800000000000, 0x800000000000, 0x400000000000 };
+	uint64_t address, start, end;
 	bool supports_48bit;
 
 	ibb = intel_bb_create(i915, PAGE_SIZE);
@@ -225,25 +247,43 @@ static void check_canonical(struct buf_ops *bops)
 		intel_bb_destroy(ibb);
 	igt_require_f(supports_48bit, "We need 48bit ppgtt for testing\n");
 
-	address = 0xc00000000000;
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
-	offset = intel_bb_emit_bbe(ibb);
+	intel_allocator_get_address_range(ibb->allocator_handle,
+					  &start, &end);
 
-	buf = intel_buf_create(bops, 512, 512, 32, 0,
-			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	for (i = 0; i < ARRAY_SIZE(addresses); i++) {
+		address = addresses[i];
+
+		if (address < start || address >= end) {
+			igt_debug("Address too big: %" PRIx64
+				  ", start: %" PRIx64 ", end: %" PRIx64
+				  ", skipping...\n",
+				  address, start, end);
+			continue;
+		}
 
-	buf->addr.offset = address;
-	intel_bb_add_intel_buf(ibb, buf, true);
-	intel_bb_object_set_flag(ibb, buf->handle, EXEC_OBJECT_PINNED);
+		offset = intel_bb_emit_bbe(ibb);
 
-	igt_assert(buf->addr.offset == 0);
+		buf = intel_buf_create(bops, 512, 512, 32, 0,
+				       I915_TILING_NONE, I915_COMPRESSION_NONE);
 
-	intel_bb_exec(ibb, offset,
-		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+		buf->addr.offset = address;
+		intel_bb_add_intel_buf(ibb, buf, true);
+		intel_bb_add_intel_buf(ibb, buf, true);
+
+		igt_assert(buf->addr.offset == gen8_canonical_addr(address));
+
+		intel_bb_exec(ibb, offset,
+			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+		/* Unreserve and verify the address can be reused */
+		intel_bb_remove_intel_buf(ibb, buf);
+		intel_buf_destroy(buf);
+		intel_bb_reset(ibb, true);
+	}
 
-	intel_buf_destroy(buf);
 	intel_bb_destroy(ibb);
 }
 
@@ -339,70 +379,287 @@ static void reset_flags(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void add_remove_objects(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
 
-#define MI_FLUSH_DW (0x26<<23)
-#define BCS_SWCTRL  0x22200
-#define BCS_SRC_Y   (1 << 0)
-#define BCS_DST_Y   (1 << 1)
-static void __emit_blit(struct intel_bb *ibb,
-			struct intel_buf *src, struct intel_buf *dst)
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_remove_intel_buf(ibb, mid);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+	intel_bb_destroy(ibb);
+}
+
+static void destroy_bb(struct buf_ops *bops)
 {
-	uint32_t mask;
-	bool has_64b_reloc;
-	uint64_t address;
-
-	has_64b_reloc = ibb->gen >= 8;
-
-	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
-		intel_bb_out(ibb, MI_LOAD_REGISTER_IMM);
-		intel_bb_out(ibb, BCS_SWCTRL);
-
-		mask = (BCS_SRC_Y | BCS_DST_Y) << 16;
-		if (src->tiling == I915_TILING_Y)
-			mask |= BCS_SRC_Y;
-		if (dst->tiling == I915_TILING_Y)
-			mask |= BCS_DST_Y;
-		intel_bb_out(ibb, mask);
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *mid, *dst;
+	uint32_t offset;
+	const uint32_t width = 512;
+	const uint32_t height = 512;
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	mid = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, width, height, 32, 0,
+			       I915_TILING_NONE, I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, mid, true);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	/* Check destroy will detach intel_bufs */
+	intel_bb_destroy(ibb);
+	igt_assert(src->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(src->ibb == NULL);
+	igt_assert(mid->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(mid->ibb == NULL);
+	igt_assert(dst->addr.offset == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(dst->ibb == NULL);
+
+	ibb = intel_bb_create(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	offset = intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, offset,
+		      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
+
+	intel_bb_destroy(ibb);
+	intel_buf_destroy(src);
+	intel_buf_destroy(mid);
+	intel_buf_destroy(dst);
+}
+
+static void object_reloc(struct buf_ops *bops, enum obj_cache_ops cache_op)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t h1, h2;
+	uint64_t poff_bb, poff_h1, poff_h2;
+	uint64_t poff2_bb, poff2_h1, poff2_h2;
+	uint64_t flags = 0;
+	uint64_t shift = cache_op == PURGE_CACHE ? 0x2000 : 0x0;
+	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
+
+	ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	h1 = gem_create(i915, PAGE_SIZE);
+	h2 = gem_create(i915, PAGE_SIZE);
+
+	/* intel_bb_create adds bb handle so it has 0 for relocs */
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	igt_assert(poff_bb == 0);
+
+	/* Before adding to intel_bb it should return INVALID_ADDRESS */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[1] poff_h1: %lx\n", (long) poff_h1);
+	igt_debug("[1] poff_h2: %lx\n", (long) poff_h2);
+	igt_assert(poff_h1 == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(poff_h2 == INTEL_BUF_INVALID_ADDRESS);
+
+	intel_bb_add_object(ibb, h1, PAGE_SIZE, poff_h1, 0, true);
+	intel_bb_add_object(ibb, h2, PAGE_SIZE, poff_h2, 0x2000, true);
+
+	/*
+	 * Objects were added to bb, we expect initial addresses are zeroed
+	 * for relocs.
+	 */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_assert(poff_h1 == 0);
+	igt_assert(poff_h2 == 0);
+
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
+
+	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff2_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff2_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[2] poff2_h1: %lx\n", (long) poff2_h1);
+	igt_debug("[2] poff2_h2: %lx\n", (long) poff2_h2);
+	/* Some addresses won't be 0 */
+	igt_assert(poff2_bb | poff2_h1 | poff2_h2);
+
+	intel_bb_reset(ibb, purge_cache);
+
+	if (purge_cache) {
+		intel_bb_add_object(ibb, h1, PAGE_SIZE, poff2_h1, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2 + shift, 0x2000, true);
 	}
 
-	intel_bb_out(ibb,
-		     XY_SRC_COPY_BLT_CMD |
-		     XY_SRC_COPY_BLT_WRITE_ALPHA |
-		     XY_SRC_COPY_BLT_WRITE_RGB |
-		     (6 + 2 * has_64b_reloc));
-	intel_bb_out(ibb, 3 << 24 | 0xcc << 16 | dst->surface[0].stride);
-	intel_bb_out(ibb, 0);
-	intel_bb_out(ibb, intel_buf_height(dst) << 16 | intel_buf_width(dst));
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[3] poff_h1: %lx\n", (long) poff_h1);
+	igt_debug("[3] poff_h2: %lx\n", (long) poff_h2);
+	igt_debug("[3] poff2_h1: %lx\n", (long) poff2_h1);
+	igt_debug("[3] poff2_h2: %lx + shift (%lx)\n", (long) poff2_h2,
+		 (long) shift);
+	igt_assert(poff_h1 == poff2_h1);
+	igt_assert(poff_h2 == poff2_h2 + shift);
+	intel_bb_emit_bbe(ibb);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
 
-	address = intel_bb_get_object_offset(ibb, dst->handle);
-	intel_bb_emit_reloc_fenced(ibb, dst->handle,
-				   I915_GEM_DOMAIN_RENDER,
-				   I915_GEM_DOMAIN_RENDER,
-				   0, address);
-	intel_bb_out(ibb, 0);
-	intel_bb_out(ibb, src->surface[0].stride);
+	gem_close(i915, h1);
+	gem_close(i915, h2);
+	intel_bb_destroy(ibb);
+}
 
-	address = intel_bb_get_object_offset(ibb, src->handle);
-	intel_bb_emit_reloc_fenced(ibb, src->handle,
-				   I915_GEM_DOMAIN_RENDER, 0,
-				   0, address);
+#define WITHIN_RANGE(offset, start, end) \
+	(DECANONICAL(offset) >= start && DECANONICAL(offset) <= end)
+static void object_noreloc(struct buf_ops *bops, enum obj_cache_ops cache_op,
+			   uint8_t allocator_type)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	uint32_t h1, h2;
+	uint64_t start, end;
+	uint64_t poff_bb, poff_h1, poff_h2;
+	uint64_t poff2_bb, poff2_h1, poff2_h2;
+	uint64_t flags = 0;
+	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
 
-	if ((src->tiling | dst->tiling) >= I915_TILING_Y) {
-		igt_assert(ibb->gen >= 6);
-		intel_bb_out(ibb, MI_FLUSH_DW | 2);
-		intel_bb_out(ibb, 0);
-		intel_bb_out(ibb, 0);
-		intel_bb_out(ibb, 0);
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE, allocator_type);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	h1 = gem_create(i915, PAGE_SIZE);
+	h2 = gem_create(i915, PAGE_SIZE);
+
+	intel_allocator_get_address_range(ibb->allocator_handle,
+					  &start, &end);
+	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	igt_debug("[1] bb presumed offset: 0x%" PRIx64
+		  ", start: %" PRIx64 ", end: %" PRIx64 "\n",
+		  poff_bb, start, end);
+	igt_assert(WITHIN_RANGE(poff_bb, start, end));
+
+	/* Before adding to intel_bb it should return INVALID_ADDRESS */
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[1] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[1] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+	igt_assert(poff_h1 == INTEL_BUF_INVALID_ADDRESS);
+	igt_assert(poff_h2 == INTEL_BUF_INVALID_ADDRESS);
+
+	intel_bb_add_object(ibb, h1, PAGE_SIZE, poff_h1, 0, true);
+	intel_bb_add_object(ibb, h2, PAGE_SIZE, poff_h2, 0, true);
+
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[2] bb presumed offset: 0x%"PRIx64"\n", poff_bb);
+	igt_debug("[2] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[2] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+	igt_assert(WITHIN_RANGE(poff_bb, start, end));
+	igt_assert(WITHIN_RANGE(poff_h1, start, end));
+	igt_assert(WITHIN_RANGE(poff_h2, start, end));
 
-		intel_bb_out(ibb, MI_LOAD_REGISTER_IMM);
-		intel_bb_out(ibb, BCS_SWCTRL);
-		intel_bb_out(ibb, (BCS_SRC_Y | BCS_DST_Y) << 16);
+	intel_bb_emit_bbe(ibb);
+	igt_debug("exec flags: %" PRIX64 "\n", flags);
+	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, false);
+
+	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+	poff2_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff2_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[3] bb presumed offset: 0x%"PRIx64"\n", poff2_bb);
+	igt_debug("[3] h1 presumed offset: 0x%"PRIx64"\n", poff2_h1);
+	igt_debug("[3] h2 presumed offset: 0x%"PRIx64"\n", poff2_h2);
+	igt_assert(poff_h1 == poff2_h1);
+	igt_assert(poff_h2 == poff2_h2);
+
+	igt_debug("purge: %d\n", purge_cache);
+	intel_bb_reset(ibb, purge_cache);
+
+	/*
+	 * Check if intel-bb cache was purged:
+	 * a) retrieve same address from allocator (works for simple, not random)
+	 * b) passing previous address enters allocator <-> intel_bb cache
+	 *    consistency check path.
+	 */
+	if (purge_cache) {
+		intel_bb_add_object(ibb, h1, PAGE_SIZE,
+				    INTEL_BUF_INVALID_ADDRESS, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2, 0, true);
+	} else {
+		/* Verify the consistency check does not fail */
+		intel_bb_add_object(ibb, h1, PAGE_SIZE, poff2_h1, 0, true);
+		intel_bb_add_object(ibb, h2, PAGE_SIZE, poff2_h2, 0, true);
 	}
+
+	poff_h1 = intel_bb_get_object_offset(ibb, h1);
+	poff_h2 = intel_bb_get_object_offset(ibb, h2);
+	igt_debug("[4] bb presumed offset: 0x%"PRIx64"\n", poff_bb);
+	igt_debug("[4] h1 presumed offset: 0x%"PRIx64"\n", poff_h1);
+	igt_debug("[4] h2 presumed offset: 0x%"PRIx64"\n", poff_h2);
+
+	/* For the simple allocator, or when the cache was kept, addresses match */
+	if (allocator_type == INTEL_ALLOCATOR_SIMPLE || !purge_cache) {
+		igt_assert(poff_h1 == poff2_h1);
+		igt_assert(poff_h2 == poff2_h2);
+	}
+
+	gem_close(i915, h1);
+	gem_close(i915, h2);
+	intel_bb_destroy(ibb);
+}
+static void __emit_blit(struct intel_bb *ibb,
+			 struct intel_buf *src, struct intel_buf *dst)
+{
+	intel_bb_emit_blt_copy(ibb,
+			       src, 0, 0, src->surface[0].stride,
+			       dst, 0, 0, dst->surface[0].stride,
+			       intel_buf_width(dst),
+			       intel_buf_height(dst),
+			       dst->bpp);
 }
 
 static void blit(struct buf_ops *bops,
 		 enum reloc_objects reloc_obj,
-		 enum obj_cache_ops cache_op)
+		 enum obj_cache_ops cache_op,
+		 uint8_t allocator_type)
 {
 	int i915 = buf_ops_get_fd(bops);
 	struct intel_bb *ibb;
@@ -413,6 +670,9 @@ static void blit(struct buf_ops *bops,
 	bool purge_cache = cache_op == PURGE_CACHE ? true : false;
 	bool do_relocs = reloc_obj == RELOC ? true : false;
 
+	if (!do_relocs)
+		igt_require(gem_uses_full_ppgtt(i915));
+
 	src = create_buf(bops, WIDTH, HEIGHT, COLOR_CC);
 	dst = create_buf(bops, WIDTH, HEIGHT, COLOR_00);
 
@@ -424,38 +684,33 @@ static void blit(struct buf_ops *bops,
 	if (do_relocs) {
 		ibb = intel_bb_create_with_relocs(i915, PAGE_SIZE);
 	} else {
-		ibb = intel_bb_create(i915, PAGE_SIZE);
+		ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
+						     allocator_type);
 		flags |= I915_EXEC_NO_RELOC;
 	}
 
-	if (ibb->gen >= 6)
-		flags |= I915_EXEC_BLT;
-
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
 
-
-	intel_bb_add_intel_buf(ibb, src, false);
-	intel_bb_add_intel_buf(ibb, dst, true);
-
 	__emit_blit(ibb, src, dst);
 
 	/* We expect initial addresses are zeroed for relocs */
-	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
-	poff_src = intel_bb_get_object_offset(ibb, src->handle);
-	poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
-	igt_debug("bb  presumed offset: 0x%"PRIx64"\n", poff_bb);
-	igt_debug("src presumed offset: 0x%"PRIx64"\n", poff_src);
-	igt_debug("dst presumed offset: 0x%"PRIx64"\n", poff_dst);
 	if (reloc_obj == RELOC) {
+		poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
+		poff_src = intel_bb_get_object_offset(ibb, src->handle);
+		poff_dst = intel_bb_get_object_offset(ibb, dst->handle);
+		igt_debug("bb  presumed offset: 0x%"PRIx64"\n", poff_bb);
+		igt_debug("src presumed offset: 0x%"PRIx64"\n", poff_src);
+		igt_debug("dst presumed offset: 0x%"PRIx64"\n", poff_dst);
 		igt_assert(poff_bb == 0);
 		igt_assert(poff_src == 0);
 		igt_assert(poff_dst == 0);
 	}
 
 	intel_bb_emit_bbe(ibb);
-	igt_debug("exec flags: %" PRIX64 "\n", flags);
-	intel_bb_exec(ibb, intel_bb_offset(ibb), flags, true);
+	intel_bb_flush_blit(ibb);
+	intel_bb_sync(ibb);
+
 	check_buf(dst, COLOR_CC);
 
 	poff_bb = intel_bb_get_object_offset(ibb, ibb->handle);
@@ -464,15 +719,30 @@ static void blit(struct buf_ops *bops,
 
 	intel_bb_reset(ibb, purge_cache);
 
+	/* For purge we lost offsets and bufs were removed from tracking list */
+	if (purge_cache) {
+		src->addr.offset = poff_src;
+		dst->addr.offset = poff_dst;
+	}
+
+	/* Add buffers again, should work both for purge and keep cache */
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+
+	igt_assert_f(poff_src == src->addr.offset,
+		     "prev src addr: %" PRIx64 " <> src addr %" PRIx64 "\n",
+		     poff_src, src->addr.offset);
+	igt_assert_f(poff_dst == dst->addr.offset,
+		     "prev dst addr: %" PRIx64 " <> dst addr %" PRIx64 "\n",
+		     poff_dst, dst->addr.offset);
+
 	fill_buf(src, COLOR_77);
 	fill_buf(dst, COLOR_00);
 
-	if (purge_cache && !do_relocs) {
-		intel_bb_add_intel_buf(ibb, src, false);
-		intel_bb_add_intel_buf(ibb, dst, true);
-	}
-
 	__emit_blit(ibb, src, dst);
+	intel_bb_flush_blit(ibb);
+	intel_bb_sync(ibb);
+	check_buf(dst, COLOR_77);
 
 	poff2_bb = intel_bb_get_object_offset(ibb, ibb->handle);
 	poff2_src = intel_bb_get_object_offset(ibb, src->handle);
@@ -496,21 +766,9 @@ static void blit(struct buf_ops *bops,
 	 * we are in full control of our own GTT.
 	 */
 	if (gem_uses_full_ppgtt(i915)) {
-		if (purge_cache) {
-			if (do_relocs) {
-				igt_assert_eq_u64(poff2_bb,  0);
-				igt_assert_eq_u64(poff2_src, 0);
-				igt_assert_eq_u64(poff2_dst, 0);
-			} else {
-				igt_assert_neq_u64(poff_bb, poff2_bb);
-				igt_assert_eq_u64(poff_src, poff2_src);
-				igt_assert_eq_u64(poff_dst, poff2_dst);
-			}
-		} else {
-			igt_assert_eq_u64(poff_bb,  poff2_bb);
-			igt_assert_eq_u64(poff_src, poff2_src);
-			igt_assert_eq_u64(poff_dst, poff2_dst);
-		}
+		igt_assert_eq_u64(poff_bb,  poff2_bb);
+		igt_assert_eq_u64(poff_src, poff2_src);
+		igt_assert_eq_u64(poff_dst, poff2_dst);
 	}
 
 	intel_bb_emit_bbe(ibb);
@@ -677,7 +935,7 @@ static int dump_base64(const char *name, struct intel_buf *buf)
 	if (ret != Z_OK) {
 		igt_warn("error compressing, ret: %d\n", ret);
 	} else {
-		igt_info("compressed %" PRIx64 " -> %lu\n",
+		igt_info("compressed %" PRIu64 " -> %lu\n",
 			 buf->surface[0].size, outsize);
 
 		igt_info("--- %s ---\n", name);
@@ -1183,6 +1441,10 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 		gen = intel_gen(intel_get_drm_devid(i915));
 	}
 
+	igt_describe("Ensure reset is possible on fresh bb");
+	igt_subtest("reset-bb")
+		reset_bb(bops);
+
 	igt_subtest("simple-bb")
 		simple_bb(bops, false);
 
@@ -1198,17 +1460,47 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("reset-flags")
 		reset_flags(bops);
 
-	igt_subtest("blit-noreloc-keep-cache")
-		blit(bops, NORELOC, KEEP_CACHE);
+	igt_subtest("add-remove-objects")
+		add_remove_objects(bops);
 
-	igt_subtest("blit-reloc-purge-cache")
-		blit(bops, RELOC, PURGE_CACHE);
+	igt_subtest("destroy-bb")
+		destroy_bb(bops);
 
-	igt_subtest("blit-noreloc-purge-cache")
-		blit(bops, NORELOC, PURGE_CACHE);
+	igt_subtest("object-reloc-purge-cache")
+		object_reloc(bops, PURGE_CACHE);
+
+	igt_subtest("object-reloc-keep-cache")
+		object_reloc(bops, KEEP_CACHE);
+
+	igt_subtest("object-noreloc-purge-cache-simple")
+		object_noreloc(bops, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("object-noreloc-keep-cache-simple")
+		object_noreloc(bops, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("object-noreloc-purge-cache-random")
+		object_noreloc(bops, PURGE_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("object-noreloc-keep-cache-random")
+		object_noreloc(bops, KEEP_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-reloc-purge-cache")
+		blit(bops, RELOC, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
 
 	igt_subtest("blit-reloc-keep-cache")
-		blit(bops, RELOC, KEEP_CACHE);
+		blit(bops, RELOC, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-noreloc-keep-cache-random")
+		blit(bops, NORELOC, KEEP_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-noreloc-purge-cache-random")
+		blit(bops, NORELOC, PURGE_CACHE, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest("blit-noreloc-keep-cache")
+		blit(bops, NORELOC, KEEP_CACHE, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest("blit-noreloc-purge-cache")
+		blit(bops, NORELOC, PURGE_CACHE, INTEL_ALLOCATOR_SIMPLE);
 
 	igt_subtest("intel-bb-blit-none")
 		do_intel_bb_blit(bops, 10, I915_TILING_NONE);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 20/35] tests/api_intel_bb: Add subtest to check render batch on the last page
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (18 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 19/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 21/35] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
                   ` (16 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

The last page (on 48-bit ppgtt) seems to be problematic when a full
3D-pipeline batch is placed and executed there. Try to find out which
generations are still prone to hang on it (see the allocation sketch
below).
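
The trick used below, in sketch form: with a high-to-low strategy over
the whole aperture the simple allocator hands out the highest addresses
first, so a freshly created bb lands on the last page:

	/* Sketch: allocate from the top so the bb ends up on the last page */
	ibb = intel_bb_create_full(i915, 0, PAGE_SIZE,
				   0, gem_aperture_size(i915),
				   INTEL_ALLOCATOR_SIMPLE,
				   ALLOC_STRATEGY_HIGH_TO_LOW);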

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 74 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index d5ed77314..207a70fab 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -36,6 +36,7 @@
 #include <glib.h>
 #include <zlib.h>
 #include "intel_bufops.h"
+#include "sw_sync.h"
 
 #define PAGE_SIZE 4096
 
@@ -1391,6 +1392,76 @@ static void render_ccs(struct buf_ops *bops)
 	igt_assert_f(fails == 0, "render-ccs fails: %d\n", fails);
 }
 
+static void last_page(struct buf_ops *bops, uint32_t width, uint32_t height)
+{
+	struct intel_bb *ibb1, *ibb2;
+	struct intel_buf src, dst;
+	int i915 = buf_ops_get_fd(bops);
+	uint32_t devid = intel_get_drm_devid(i915);
+	igt_render_copyfunc_t render_copy = NULL;
+	uint64_t gtt_size;
+	uint32_t ctx;
+	int ret;
+
+	igt_require(gem_uses_full_ppgtt(i915));
+	gtt_size = gem_aperture_size(i915);
+
+	ctx = gem_context_create(i915);
+
+	ibb1 = intel_bb_create_full(i915, ctx, PAGE_SIZE,
+				   0, gtt_size, INTEL_ALLOCATOR_SIMPLE,
+				   ALLOC_STRATEGY_LOW_TO_HIGH);
+
+	ibb2 = intel_bb_create_full(i915, 0, PAGE_SIZE,
+				   0, gtt_size, INTEL_ALLOCATOR_SIMPLE,
+				   ALLOC_STRATEGY_HIGH_TO_LOW);
+
+	if (debug_bb) {
+		intel_bb_set_debug(ibb1, true);
+		intel_bb_set_debug(ibb2, true);
+	}
+
+	scratch_buf_init(bops, &src, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+	scratch_buf_init(bops, &dst, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+	scratch_buf_draw_pattern(bops, &src,
+				 0, 0, width, height,
+				 0, 0, width, height, 0);
+
+	render_copy = igt_get_render_copyfunc(devid);
+	igt_assert(render_copy);
+
+	render_copy(ibb1,
+		    &src,
+		    0, 0, width, height,
+		    &dst,
+		    0, 0);
+	gem_sync(i915, dst.handle);
+	igt_assert(sync_fence_status(ibb1->fence) == 1);
+	intel_bb_destroy(ibb1);
+
+	intel_buf_close(bops, &dst);
+	scratch_buf_init(bops, &dst, width, height, I915_TILING_NONE,
+			 I915_COMPRESSION_NONE);
+
+	render_copy(ibb2,
+		    &src,
+		    0, 0, width, height,
+		    &dst,
+		    0, 0);
+
+	/* We likely got a hang here, so free resources before assert */
+	gem_sync(i915, dst.handle);
+	ret = sync_fence_status(ibb2->fence);
+
+	intel_bb_destroy(ibb2);
+	intel_buf_close(bops, &src);
+	intel_buf_close(bops, &dst);
+
+	igt_assert_f(ret == 1, "Batch in last page in rcs leads to hang\n");
+}
+
 static int opt_handler(int opt, int opt_index, void *data)
 {
 	switch (opt) {
@@ -1545,6 +1616,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("render-ccs")
 		render_ccs(bops);
 
+	igt_subtest("last-page")
+		last_page(bops, 512, 512);
+
 	igt_fixture {
 		buf_ops_destroy(bops);
 		close(i915);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 21/35] tests/api_intel_bb: Add compressed->compressed copy
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (19 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 20/35] tests/api_intel_bb: Add subtest to check render batch on the last page Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 22/35] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
                   ` (15 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

Check aux pagetables are working when more than one compressed
buffer is added.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 207a70fab..351fdc9f1 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -1332,7 +1332,7 @@ static void render_ccs(struct buf_ops *bops)
 	struct intel_bb *ibb;
 	const int width = 1024;
 	const int height = 1024;
-	struct intel_buf src, dst, final;
+	struct intel_buf src, dst, dst2, final;
 	int i915 = buf_ops_get_fd(bops);
 	uint32_t fails = 0;
 	uint32_t compressed = 0;
@@ -1347,6 +1347,8 @@ static void render_ccs(struct buf_ops *bops)
 			 I915_COMPRESSION_NONE);
 	scratch_buf_init(bops, &dst, width, height, I915_TILING_Y,
 			 I915_COMPRESSION_RENDER);
+	scratch_buf_init(bops, &dst2, width, height, I915_TILING_Y,
+			 I915_COMPRESSION_RENDER);
 	scratch_buf_init(bops, &final, width, height, I915_TILING_NONE,
 			 I915_COMPRESSION_NONE);
 
@@ -1366,6 +1368,12 @@ static void render_ccs(struct buf_ops *bops)
 	render_copy(ibb,
 		    &dst,
 		    0, 0, width, height,
+		    &dst2,
+		    0, 0);
+
+	render_copy(ibb,
+		    &dst2,
+		    0, 0, width, height,
 		    &final,
 		    0, 0);
 
@@ -1381,12 +1389,15 @@ static void render_ccs(struct buf_ops *bops)
 	if (write_png) {
 		intel_buf_write_to_png(&src, "render-ccs-src.png");
 		intel_buf_write_to_png(&dst, "render-ccs-dst.png");
+		intel_buf_write_to_png(&dst2, "render-ccs-dst2.png");
 		intel_buf_write_aux_to_png(&dst, "render-ccs-dst-aux.png");
+		intel_buf_write_aux_to_png(&dst2, "render-ccs-dst2-aux.png");
 		intel_buf_write_to_png(&final, "render-ccs-final.png");
 	}
 
 	intel_buf_close(bops, &src);
 	intel_buf_close(bops, &dst);
+	intel_buf_close(bops, &dst2);
 	intel_buf_close(bops, &final);
 
 	igt_assert_f(fails == 0, "render-ccs fails: %d\n", fails);
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 22/35] tests/api_intel_bb: Add purge-bb test
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (20 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 21/35] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 23/35] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
                   ` (14 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

Check the acquired address is the same after purging the bb. For
relocations we expect 0 twice; for the allocator we expect the release
and the subsequent alloc to return the same address.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 36 ++++++++++++++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 351fdc9f1..7b714af8e 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -142,6 +142,33 @@ static void reset_bb(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
+static void purge_bb(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_buf *buf;
+	struct intel_bb *ibb;
+	uint64_t offset0, offset1;
+
+	buf = intel_buf_create(bops, 512, 512, 32, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	ibb = intel_bb_create(i915, 4096);
+	intel_bb_set_debug(ibb, true);
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset0 = buf->addr.offset;
+
+	intel_bb_reset(ibb, true);
+	buf->addr.offset = INTEL_BUF_INVALID_ADDRESS;
+
+	intel_bb_add_intel_buf(ibb, buf, false);
+	offset1 = buf->addr.offset;
+
+	igt_assert(offset0 == offset1);
+
+	intel_buf_destroy(buf);
+	intel_bb_destroy(ibb);
+}
+
 static void simple_bb(struct buf_ops *bops, bool use_context)
 {
 	int i915 = buf_ops_get_fd(bops);
@@ -240,13 +267,15 @@ static void check_canonical(struct buf_ops *bops)
 	/* Twice same addresses to verify will be unreserved in remove path */
 	uint64_t addresses[] = { 0x800000000000, 0x800000000000, 0x400000000000 };
 	uint64_t address, start, end;
-	bool supports_48bit;
+	bool supports_48bit, relocs;
 
 	ibb = intel_bb_create(i915, PAGE_SIZE);
 	supports_48bit = ibb->supports_48b_address;
-	if (!supports_48bit)
+	relocs = ibb->enforce_relocs;
+	if (!supports_48bit || relocs)
 		intel_bb_destroy(ibb);
 	igt_require_f(supports_48bit, "We need 48bit ppgtt for testing\n");
+	igt_require_f(!relocs, "We need to use allocator instead of relocations\n");
 
 	if (debug_bb)
 		intel_bb_set_debug(ibb, true);
@@ -1527,6 +1556,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("reset-bb")
 		reset_bb(bops);
 
+	igt_subtest("purge-bb")
+		purge_bb(bops);
+
 	igt_subtest("simple-bb")
 		simple_bb(bops, false);
 
-- 
2.26.0


* [igt-dev] [PATCH i-g-t 23/35] tests/api_intel_bb: Remove check-canonical test
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (21 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 22/35] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 24/35] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
                   ` (13 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

As intel-bb internally uses decanonical addresses for objects/intel_bufs,
checking canonical bits makes no sense anymore (see the worked example
below).
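
For context, a 48-bit canonical address sign-extends bit 47 into the
upper bits; a worked example using the helper removed below:

	/* Bit 47 of 0x800000000000 is set, so sign extension gives
	 * the canonical form 0xffff800000000000. */
	igt_assert(gen8_canonical_addr(0x800000000000ull) ==
		   0xffff800000000000ull);

intel-bb now keeps the decanonical form internally, so there is nothing
left for this test to verify.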

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Arjun Melkaveri <arjun.melkaveri@intel.com>
---
 tests/i915/api_intel_bb.c | 74 ---------------------------------------
 1 file changed, 74 deletions(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index 7b714af8e..bf886ca3c 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -246,77 +246,6 @@ static void lot_of_buffers(struct buf_ops *bops)
 	intel_bb_destroy(ibb);
 }
 
-/*
- * Make sure intel-bb space allocator currently doesn't enter 47-48 bit
- * gtt sizes.
- */
-#define GEN8_HIGH_ADDRESS_BIT 47
-static uint64_t gen8_canonical_addr(uint64_t address)
-{
-	int shift = 63 - GEN8_HIGH_ADDRESS_BIT;
-
-	return (int64_t)(address << shift) >> shift;
-}
-
-static void check_canonical(struct buf_ops *bops)
-{
-	int i915 = buf_ops_get_fd(bops), i;
-	struct intel_bb *ibb;
-	struct intel_buf *buf;
-	uint32_t offset;
-	/* Same address twice, to verify it gets unreserved in the remove path */
-	uint64_t addresses[] = { 0x800000000000, 0x800000000000, 0x400000000000 };
-	uint64_t address, start, end;
-	bool supports_48bit, relocs;
-
-	ibb = intel_bb_create(i915, PAGE_SIZE);
-	supports_48bit = ibb->supports_48b_address;
-	relocs = ibb->enforce_relocs;
-	if (!supports_48bit || relocs)
-		intel_bb_destroy(ibb);
-	igt_require_f(supports_48bit, "We need 48bit ppgtt for testing\n");
-	igt_require_f(!relocs, "We need to use allocator instead of relocations\n");
-
-	if (debug_bb)
-		intel_bb_set_debug(ibb, true);
-
-	intel_allocator_get_address_range(ibb->allocator_handle,
-					  &start, &end);
-
-	for (i = 0; i < ARRAY_SIZE(addresses); i++) {
-		address = addresses[i];
-
-		if (address < start || address >= end) {
-			igt_debug("Address too big: %" PRIx64
-				  ", start: %" PRIx64 ", end: %" PRIx64
-				  ", skipping...\n",
-				  address, start, end);
-			continue;
-		}
-
-		offset = intel_bb_emit_bbe(ibb);
-
-		buf = intel_buf_create(bops, 512, 512, 32, 0,
-				       I915_TILING_NONE, I915_COMPRESSION_NONE);
-
-		buf->addr.offset = address;
-		intel_bb_add_intel_buf(ibb, buf, true);
-		intel_bb_add_intel_buf(ibb, buf, true);
-
-		igt_assert(buf->addr.offset == gen8_canonical_addr(address));
-
-		intel_bb_exec(ibb, offset,
-			      I915_EXEC_DEFAULT | I915_EXEC_NO_RELOC, true);
-
-		/* We ensure to do unreserve and verify address can be reused */
-		intel_bb_remove_intel_buf(ibb, buf);
-		intel_buf_destroy(buf);
-		intel_bb_reset(ibb, true);
-	}
-
-	intel_bb_destroy(ibb);
-}
-
 /*
  * Check flags are cleared after intel_bb_reset(ibb, false);
  */
@@ -1568,9 +1497,6 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-	igt_subtest("check-canonical")
-		check_canonical(bops);
-
 	igt_subtest("reset-flags")
 		reset_flags(bops);
 
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 24/35] tests/api_intel_bb: Add simple intel-bb which uses allocator
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (22 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 23/35] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 25/35] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
                   ` (12 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

A simple test which uses the allocator and can easily be copy-pasted
when intel-bb is used for batchbuffer creation.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index bf886ca3c..e45705726 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -209,6 +209,34 @@ static void simple_bb(struct buf_ops *bops, bool use_context)
 	intel_bb_destroy(ibb);
 }
 
+static void bb_with_allocator(struct buf_ops *bops)
+{
+	int i915 = buf_ops_get_fd(bops);
+	struct intel_bb *ibb;
+	struct intel_buf *src, *dst;
+	uint32_t ctx = 0;
+
+	igt_require(gem_uses_full_ppgtt(i915));
+
+	ibb = intel_bb_create_with_allocator(i915, ctx, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
+	if (debug_bb)
+		intel_bb_set_debug(ibb, true);
+
+	src = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+	dst = intel_buf_create(bops, 4096/32, 32, 8, 0, I915_TILING_NONE,
+			       I915_COMPRESSION_NONE);
+
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
+	intel_bb_copy_intel_buf(ibb, dst, src, 4096);
+	intel_bb_remove_intel_buf(ibb, src);
+	intel_bb_remove_intel_buf(ibb, dst);
+
+	intel_bb_destroy(ibb);
+}
+
 /*
  * Make sure we lead to realloc in the intel_bb.
  */
@@ -1494,6 +1522,9 @@ igt_main_args("dpib", NULL, help_str, opt_handler, NULL)
 	igt_subtest("simple-bb-ctx")
 		simple_bb(bops, true);
 
+	igt_subtest("bb-with-allocator")
+		bb_with_allocator(bops);
+
 	igt_subtest("lot-of-buffers")
 		lot_of_buffers(bops);
 
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 25/35] tests/api_intel_bb: Use allocator in delta-check test
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (23 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 24/35] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 26/35] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
                   ` (11 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

We want to use the address returned from emit_reloc() without calling
the kernel relocation path. Change intel-bb to use the allocator so we
fully control the addresses passed in execbuf.
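
A minimal sketch of the resulting usage (assuming the current
intel_bb_emit_reloc() signature; i915, buf and addr are set up
elsewhere as in the test):

  ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
  				       INTEL_ALLOCATOR_SIMPLE);

  /* Writes the presumed address into the batch; with the allocator
   * backing the ibb this never enters the kernel relocation path. */
  addr = intel_bb_emit_reloc(ibb, buf->handle,
  			     I915_GEM_DOMAIN_RENDER,
  			     I915_GEM_DOMAIN_RENDER,
  			     0, buf->addr.offset);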

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_bb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/i915/api_intel_bb.c b/tests/i915/api_intel_bb.c
index e45705726..b62957b34 100644
--- a/tests/i915/api_intel_bb.c
+++ b/tests/i915/api_intel_bb.c
@@ -1127,7 +1127,8 @@ static void delta_check(struct buf_ops *bops)
 	uint64_t offset;
 	bool supports_48bit;
 
-	ibb = intel_bb_create(i915, PAGE_SIZE);
+	ibb = intel_bb_create_with_allocator(i915, 0, PAGE_SIZE,
+					     INTEL_ALLOCATOR_SIMPLE);
 	supports_48bit = ibb->supports_48b_address;
 	if (!supports_48bit)
 		intel_bb_destroy(ibb);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 26/35] tests/api_intel_allocator: Simple allocator test suite
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (24 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 25/35] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 27/35] tests/api_intel_allocator: Prepare to run with sanitizer Zbigniew Kempczyński
                   ` (10 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

We want to verify the allocator works as expected, so try to exploit it.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_allocator.c | 538 +++++++++++++++++++++++++++++++
 tests/meson.build                |   1 +
 2 files changed, 539 insertions(+)
 create mode 100644 tests/i915/api_intel_allocator.c

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
new file mode 100644
index 000000000..650c2ff5e
--- /dev/null
+++ b/tests/i915/api_intel_allocator.c
@@ -0,0 +1,538 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <stdatomic.h>
+#include "i915/gem.h"
+#include "igt.h"
+#include "igt_aux.h"
+#include "intel_allocator.h"
+
+#define OBJ_SIZE 1024
+
+struct test_obj {
+	uint32_t handle;
+	uint64_t offset;
+	uint64_t size;
+};
+
+static _Atomic(uint32_t) next_handle;
+
+static inline uint32_t gem_handle_gen(void)
+{
+	return atomic_fetch_add(&next_handle, 1);
+}
+
+static void alloc_simple(int fd)
+{
+	uint64_t ialh;
+	uint64_t offset0, offset1;
+	bool is_allocated, freed;
+
+	ialh = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	offset0 = intel_allocator_alloc(ialh, 1, 0x1000, 0x1000);
+	offset1 = intel_allocator_alloc(ialh, 1, 0x1000, 0x1000);
+	igt_assert(offset0 == offset1);
+
+	is_allocated = intel_allocator_is_allocated(ialh, 1, 0x1000, offset0);
+	igt_assert(is_allocated);
+
+	freed = intel_allocator_free(ialh, 1);
+	igt_assert(freed);
+
+	is_allocated = intel_allocator_is_allocated(ialh, 1, 0x1000, offset0);
+	igt_assert(!is_allocated);
+
+	freed = intel_allocator_free(ialh, 1);
+	igt_assert(!freed);
+
+	intel_allocator_close(ialh);
+}
+
+static void reserve_simple(int fd)
+{
+	uint64_t ialh;
+	uint64_t start;
+	bool reserved, unreserved;
+
+	ialh = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	intel_allocator_get_address_range(ialh, &start, NULL);
+
+	reserved = intel_allocator_reserve(ialh, 0, 0x1000, start);
+	igt_assert(reserved);
+
+	reserved = intel_allocator_is_reserved(ialh, 0x1000, start);
+	igt_assert(reserved);
+
+	reserved = intel_allocator_reserve(ialh, 0, 0x1000, start);
+	igt_assert(!reserved);
+
+	unreserved = intel_allocator_unreserve(ialh, 0, 0x1000, start);
+	igt_assert(unreserved);
+
+	reserved = intel_allocator_is_reserved(ialh, 0x1000, start);
+	igt_assert(!reserved);
+
+	intel_allocator_close(ialh);
+}
+
+static void reserve(int fd, uint8_t type)
+{
+	struct intel_allocator *ial;
+	struct test_obj obj;
+
+	ial = from_user_pointer(intel_allocator_open(fd, 0, type));
+
+	igt_assert(ial->reserve(ial, 0, 0x40000, 0x800000));
+	/* try reserve once again */
+	igt_assert_eq(ial->reserve(ial, 0, 0x40040, 0x700000), false);
+
+	obj.handle = gem_handle_gen();
+	obj.size = OBJ_SIZE;
+	obj.offset = ial->alloc(ial, obj.handle, obj.size, 0);
+
+	igt_assert_eq(ial->reserve(ial, 0, obj.offset,
+				obj.offset + obj.size), false);
+	ial->free(ial, obj.handle);
+	igt_assert_eq(ial->reserve(ial, 0, obj.offset,
+				obj.offset + obj.size), true);
+
+	ial->unreserve(ial, 0, obj.offset, obj.offset + obj.size);
+	ial->unreserve(ial, 0, 0x40000, 0x800000);
+	igt_assert(ial->reserve(ial, 0, 0x40040, 0x700000));
+	ial->unreserve(ial, 0, 0x40040, 0x700000);
+
+	igt_assert(ial->is_empty(ial));
+
+	intel_allocator_close(to_user_pointer(ial));
+}
+
+static bool overlaps(struct test_obj *buf1, struct test_obj *buf2)
+{
+	uint64_t begin1 = buf1->offset;
+	uint64_t end1 = buf1->offset + buf1->size;
+	uint64_t begin2 = buf2->offset;
+	uint64_t end2 = buf2->offset + buf2->size;
+
+	/* Intervals overlap iff each begins before the other ends. */
+	return begin1 < end2 && begin2 < end1;
+}
+
+static void basic_alloc(int fd, int cnt, uint8_t type)
+{
+	struct test_obj *obj;
+	struct intel_allocator *ial;
+	int i, j;
+
+	ial = from_user_pointer(intel_allocator_open(fd, 0, type));
+	obj = malloc(sizeof(struct test_obj) * cnt);
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("allocating objects: ", i, cnt);
+		obj[i].handle = gem_handle_gen();
+		obj[i].size = OBJ_SIZE;
+		obj[i].offset = ial->alloc(ial, obj[i].handle,
+					   obj[i].size, 4096);
+		igt_assert_eq(obj[i].offset % 4096, 0);
+	}
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("check overlapping: ", i, cnt);
+
+		if (type == INTEL_ALLOCATOR_RANDOM)
+			continue;
+
+		for (j = 0; j < cnt; j++) {
+			if (j == i)
+				continue;
+			igt_assert(!overlaps(&obj[i], &obj[j]));
+		}
+	}
+
+	for (i = 0; i < cnt; i++) {
+		igt_progress("freeing objects: ", i, cnt);
+		ial->free(ial, obj[i].handle);
+	}
+
+	igt_assert(ial->is_empty(ial));
+
+	free(obj);
+	intel_allocator_close(to_user_pointer(ial));
+}
+
+static void reuse(int fd, uint8_t type)
+{
+	struct test_obj obj[128], tmp;
+	struct intel_allocator *ial;
+	uint64_t prev_offset;
+	int i;
+
+	ial = from_user_pointer(intel_allocator_open(fd, 0, type));
+
+	for (i = 0; i < 128; i++) {
+		obj[i].handle = gem_handle_gen();
+		obj[i].size = OBJ_SIZE;
+		obj[i].offset = ial->alloc(ial, obj[i].handle,
+					   obj[i].size, 0x40);
+	}
+
+	/* check simple reuse */
+	for (i = 0; i < 128; i++) {
+		prev_offset = obj[i].offset;
+		obj[i].offset = ial->alloc(ial, obj[i].handle, obj[i].size, 0);
+		igt_assert(prev_offset == obj[i].offset);
+	}
+	i--;
+
+	/* free the bo previously allocated */
+	ial->free(ial, obj[i].handle);
+	/* allocate a different buffer to fill the freed hole */
+	tmp.handle = gem_handle_gen();
+	tmp.offset = ial->alloc(ial, tmp.handle, OBJ_SIZE, 0);
+	igt_assert(prev_offset == tmp.offset);
+
+	obj[i].offset = ial->alloc(ial, obj[i].handle, obj[i].size, 0);
+	igt_assert(prev_offset != obj[i].offset);
+	ial->free(ial, tmp.handle);
+
+	for (i = 0; i < 128; i++)
+		ial->free(ial, obj[i].handle);
+
+	igt_assert(ial->is_empty(ial));
+
+	intel_allocator_close(to_user_pointer(ial));
+}
+
+struct ial_thread_args {
+	struct intel_allocator *ial;
+	pthread_t thread;
+	uint32_t *handles;
+	uint64_t *offsets;
+	uint32_t count;
+	int threads;
+	int idx;
+};
+
+static void *alloc_bo_in_thread(void *arg)
+{
+	struct ial_thread_args *a = arg;
+	int i;
+
+	for (i = a->idx; i < a->count; i += a->threads) {
+		a->handles[i] = gem_handle_gen();
+		pthread_mutex_lock(&a->ial->mutex);
+		a->offsets[i] = a->ial->alloc(a->ial, a->handles[i], OBJ_SIZE,
+					      1UL << ((random() % 20) + 1));
+		pthread_mutex_unlock(&a->ial->mutex);
+	}
+
+	return NULL;
+}
+
+static void *free_bo_in_thread(void *arg)
+{
+	struct ial_thread_args *a = arg;
+	int i;
+
+	for (i = (a->idx + 1) % a->threads; i < a->count; i += a->threads) {
+		pthread_mutex_lock(&a->ial->mutex);
+		a->ial->free(a->ial, a->handles[i]);
+		pthread_mutex_unlock(&a->ial->mutex);
+	}
+
+	return NULL;
+}
+
+#define THREADS 6
+
+static void parallel_one(int fd, uint8_t type)
+{
+	struct intel_allocator *ial;
+	struct ial_thread_args a[THREADS];
+	uint32_t *handles;
+	uint64_t *offsets;
+	int count, i;
+
+	srandom(0xdeadbeef);
+	ial = from_user_pointer(intel_allocator_open(fd, 0, type));
+	count = 1UL << 12;
+
+	handles = malloc(sizeof(uint32_t) * count);
+	offsets = calloc(1, sizeof(uint64_t) * count);
+
+	for (i = 0; i < THREADS; i++) {
+		a[i].ial = ial;
+		a[i].handles = handles;
+		a[i].offsets = offsets;
+		a[i].count = count;
+		a[i].threads = THREADS;
+		a[i].idx = i;
+		pthread_create(&a[i].thread, NULL, alloc_bo_in_thread, &a[i]);
+	}
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(a[i].thread, NULL);
+
+	/* Check all objects are allocated (same handle yields same offset) */
+	for (i = 0; i < count; i++) {
+		/* The random allocator is stateless; it always returns a different offset */
+		if (type == INTEL_ALLOCATOR_RANDOM)
+			break;
+
+		igt_assert_eq_u64(offsets[i],
+				  ial->alloc(ial, handles[i], OBJ_SIZE, 0));
+	}
+
+	for (i = 0; i < THREADS; i++)
+		pthread_create(&a[i].thread, NULL, free_bo_in_thread, &a[i]);
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(a[i].thread, NULL);
+
+	/* Check the offsets where objects lived are free again */
+	for (i = 0; i < count; i++) {
+		if (type == INTEL_ALLOCATOR_RANDOM)
+			break;
+
+		igt_assert(ial->reserve(ial, 0, offsets[i], offsets[i] + 1));
+	}
+
+	free(handles);
+	free(offsets);
+
+	intel_allocator_close(to_user_pointer(ial));
+}
+
+#define SIMPLE_GROUP_ALLOCS 8
+static void __simple_allocs(int fd)
+{
+	uint32_t handles[SIMPLE_GROUP_ALLOCS];
+	uint64_t ahnd;
+	uint32_t ctx;
+	int i;
+
+	ctx = rand() % 2;
+	ahnd = intel_allocator_open(fd, ctx, INTEL_ALLOCATOR_SIMPLE);
+
+	for (i = 0; i < SIMPLE_GROUP_ALLOCS; i++) {
+		uint32_t size;
+
+		size = (rand() % 4 + 1) * 0x1000;
+		handles[i] = gem_create(fd, size);
+		intel_allocator_alloc(ahnd, handles[i], size, 0x1000);
+	}
+
+	for (i = 0; i < SIMPLE_GROUP_ALLOCS; i++) {
+		igt_assert_f(intel_allocator_free(ahnd, handles[i]) == 1,
+			     "Error freeing handle: %u\n", handles[i]);
+		gem_close(fd, handles[i]);
+	}
+
+	intel_allocator_close(ahnd);
+}
+
+static void fork_simple_once(int fd)
+{
+	intel_allocator_multiprocess_start();
+
+	igt_fork(child, 1)
+		__simple_allocs(fd);
+
+	igt_waitchildren();
+
+	intel_allocator_multiprocess_stop();
+}
+
+#define SIMPLE_TIMEOUT 5
+static void *__fork_simple_thread(void *data)
+{
+	int fd = (int) (long) data;
+
+	igt_until_timeout(SIMPLE_TIMEOUT) {
+		__simple_allocs(fd);
+	}
+
+	return NULL;
+}
+
+static void fork_simple_stress(int fd, bool two_level_inception)
+{
+	pthread_t thread0, thread1;
+	uint64_t ahnd0, ahnd1;
+	bool are_empty;
+
+	intel_allocator_multiprocess_start();
+
+	ahnd0 = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+
+	pthread_create(&thread0, NULL, __fork_simple_thread, (void *) (long) fd);
+	pthread_create(&thread1, NULL, __fork_simple_thread, (void *) (long) fd);
+
+	igt_fork(child, 8) {
+		if (two_level_inception) {
+			pthread_create(&thread0, NULL, __fork_simple_thread,
+				       (void *) (long) fd);
+			pthread_create(&thread1, NULL, __fork_simple_thread,
+				       (void *) (long) fd);
+		}
+
+		igt_until_timeout(SIMPLE_TIMEOUT) {
+			__simple_allocs(fd);
+		}
+
+		if (two_level_inception) {
+			pthread_join(thread0, NULL);
+			pthread_join(thread1, NULL);
+		}
+	}
+	igt_waitchildren();
+
+	pthread_join(thread0, NULL);
+	pthread_join(thread1, NULL);
+
+	are_empty = intel_allocator_close(ahnd0);
+	are_empty &= intel_allocator_close(ahnd1);
+
+	intel_allocator_multiprocess_stop();
+
+	igt_assert_f(are_empty, "Allocators were not emptied\n");
+}
+
+static void __reopen_allocs(int fd1, int fd2)
+{
+	uint64_t ahnd0, ahnd1, ahnd2;
+
+	ahnd0 = intel_allocator_open(fd1, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd2, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd2 = intel_allocator_open(fd2, 0, INTEL_ALLOCATOR_SIMPLE);
+	igt_assert(ahnd0 != ahnd1);
+	igt_assert(ahnd1 == ahnd2);
+
+	intel_allocator_close(ahnd0);
+	intel_allocator_close(ahnd1);
+	intel_allocator_close(ahnd2);
+}
+
+static void reopen(int fd)
+{
+	int fd2;
+
+	igt_require_gem(fd);
+
+	fd2 = gem_reopen_driver(fd);
+
+	__reopen_allocs(fd, fd2);
+
+	close(fd2);
+}
+
+#define REOPEN_TIMEOUT 3
+static void reopen_fork(int fd)
+{
+	int fd2;
+
+	igt_require_gem(fd);
+
+	intel_allocator_multiprocess_start();
+
+	fd2 = gem_reopen_driver(fd);
+
+	igt_fork(child, 1) {
+		igt_until_timeout(REOPEN_TIMEOUT)
+			__reopen_allocs(fd, fd2);
+	}
+	igt_until_timeout(REOPEN_TIMEOUT)
+		__reopen_allocs(fd, fd2);
+
+	igt_waitchildren();
+
+	close(fd2);
+
+	intel_allocator_multiprocess_stop();
+}
+
+struct allocators {
+	const char *name;
+	uint8_t type;
+} als[] = {
+	{"simple", INTEL_ALLOCATOR_SIMPLE},
+	{"random", INTEL_ALLOCATOR_RANDOM},
+	{NULL, 0},
+};
+
+igt_main
+{
+	int fd;
+	struct allocators *a;
+
+	igt_fixture {
+		fd = drm_open_driver(DRIVER_INTEL);
+		atomic_init(&next_handle, 1);
+		srandom(0xdeadbeef);
+	}
+
+	igt_subtest_f("alloc-simple")
+		alloc_simple(fd);
+
+	igt_subtest_f("reserve-simple")
+		reserve_simple(fd);
+
+	igt_subtest_f("print")
+		basic_alloc(fd, 1UL << 2, INTEL_ALLOCATOR_RANDOM);
+
+	igt_subtest_f("reuse")
+		reuse(fd, INTEL_ALLOCATOR_SIMPLE);
+
+	igt_subtest_f("reserve")
+		reserve(fd, INTEL_ALLOCATOR_SIMPLE);
+
+	for (a = als; a->name; a++) {
+		igt_subtest_with_dynamic_f("%s-allocator", a->name) {
+			igt_dynamic("basic")
+				basic_alloc(fd, 1UL << 8, a->type);
+
+			igt_dynamic("parallel-one")
+				parallel_one(fd, a->type);
+
+			if (a->type != INTEL_ALLOCATOR_RANDOM) {
+				igt_dynamic("reuse")
+					reuse(fd, a->type);
+
+				igt_dynamic("reserve")
+					reserve(fd, a->type);
+			}
+		}
+	}
+
+	igt_subtest_f("fork-simple-once")
+		fork_simple_once(fd);
+
+	igt_subtest_f("fork-simple-stress")
+		fork_simple_stress(fd, false);
+
+	igt_subtest_f("fork-simple-stress-signal") {
+		igt_fork_signal_helper();
+		fork_simple_stress(fd, false);
+		igt_stop_signal_helper();
+	}
+
+	igt_subtest_f("two-level-inception")
+		fork_simple_stress(fd, true);
+
+	igt_subtest_f("two-level-inception-interruptible") {
+		igt_fork_signal_helper();
+		fork_simple_stress(fd, true);
+		igt_stop_signal_helper();
+	}
+
+	igt_subtest_f("reopen")
+		reopen(fd);
+
+	igt_subtest_f("reopen-fork")
+		reopen_fork(fd);
+
+	igt_fixture
+		close(fd);
+}
diff --git a/tests/meson.build b/tests/meson.build
index 825e01833..061691903 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -111,6 +111,7 @@ test_progs = [
 ]
 
 i915_progs = [
+	'api_intel_allocator',
 	'api_intel_bb',
 	'gen3_mixed_blits',
 	'gen3_render_linear_blits',
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 27/35] tests/api_intel_allocator: Prepare to run with sanitizer
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (25 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 26/35] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
@ 2021-02-16 11:39 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 28/35] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
                   ` (9 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:39 UTC (permalink / raw)
  To: igt-dev

The allocator code is fragile as it runs in a multiprocess environment,
so we want to validate its memory consistency using the address
sanitizer. As the allocator thread is spawned in the main process, we
need to create the children before it starts allocating. Otherwise we
can incidentally pass a shadow-map snapshot to the child/children,
giving a false positive in the sanitizer and failing the test.

Playing with the sanitizer reveals it has a bug when multiple children
exist and signals are delivered to them, so we must be careful when the
sanitizer and igt_fork_signal_helper() are used together. The bug is
revealed with this code:
https://patchwork.freedesktop.org/patch/401979/?series=84102&rev=1
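
In short, the required ordering looks like this (a sketch of what
fork_simple_stress() is switched to below):

  __intel_allocator_multiprocess_prepare();

  igt_fork(child, 8)
  	__simple_allocs(fd);

  /* Spawn the allocator thread only after all children exist, so no
   * allocator state leaks into their shadow-map snapshots. */
  __intel_allocator_multiprocess_start();

  igt_waitchildren();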

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Reported-by: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Andrzej Turko <andrzej.turko@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_allocator.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index 650c2ff5e..a7c23a4ef 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -362,13 +362,7 @@ static void fork_simple_stress(int fd, bool two_level_inception)
 	uint64_t ahnd0, ahnd1;
 	bool are_empty;
 
-	intel_allocator_multiprocess_start();
-
-	ahnd0 = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
-	ahnd1 = intel_allocator_open(fd, 1, INTEL_ALLOCATOR_SIMPLE);
-
-	pthread_create(&thread0, NULL, __fork_simple_thread, (void *) (long) fd);
-	pthread_create(&thread1, NULL, __fork_simple_thread, (void *) (long) fd);
+	__intel_allocator_multiprocess_prepare();
 
 	igt_fork(child, 8) {
 		if (two_level_inception) {
@@ -387,6 +381,15 @@ static void fork_simple_stress(int fd, bool two_level_inception)
 			pthread_join(thread1, NULL);
 		}
 	}
+
+	pthread_create(&thread0, NULL, __fork_simple_thread, (void *) (long) fd);
+	pthread_create(&thread1, NULL, __fork_simple_thread, (void *) (long) fd);
+
+	ahnd0 = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	ahnd1 = intel_allocator_open(fd, 1, INTEL_ALLOCATOR_SIMPLE);
+
+	__intel_allocator_multiprocess_start();
+
 	igt_waitchildren();
 
 	pthread_join(thread0, NULL);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 28/35] tests/api_intel_allocator: Add execbuf with allocator example
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (26 preceding siblings ...)
  2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 27/35] tests/api_intel_allocator: Prepare to run with sanitizer Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
                   ` (8 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

A simplified version of the non-fork test which can be used as a
copy-paste template. It uses a blit to show how to prepare a batch with
addresses acquired from the allocator.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/api_intel_allocator.c | 91 ++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/tests/i915/api_intel_allocator.c b/tests/i915/api_intel_allocator.c
index a7c23a4ef..2de7a0baa 100644
--- a/tests/i915/api_intel_allocator.c
+++ b/tests/i915/api_intel_allocator.c
@@ -456,6 +456,94 @@ static void reopen_fork(int fd)
 	intel_allocator_multiprocess_stop();
 }
 
+/* Simple execbuf which uses allocator, non-fork mode */
+static void execbuf_with_allocator(int fd)
+{
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 object[3];
+	uint64_t ahnd, sz = 4096, gtt_size;
+	unsigned int flags = EXEC_OBJECT_PINNED;
+	uint32_t *ptr, batch[32], copied;
+	int gen = intel_gen(intel_get_drm_devid(fd));
+	int i;
+	const uint32_t magic = 0x900df00d;
+
+	igt_require(gem_uses_full_ppgtt(fd));
+
+	gtt_size = gem_aperture_size(fd);
+	if ((gtt_size - 1) >> 32)
+		flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	memset(object, 0, sizeof(object));
+
+	/* i == 0 (src), i == 1 (dst), i == 2 (batch) */
+	for (i = 0; i < ARRAY_SIZE(object); i++) {
+		uint64_t offset;
+
+		object[i].handle = gem_create(fd, sz);
+		offset = intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
+		object[i].offset = CANONICAL(offset);
+
+		object[i].flags = flags;
+		if (i == 1)
+			object[i].flags |= EXEC_OBJECT_WRITE;
+	}
+
+	/* Prepare src data */
+	ptr = gem_mmap__device_coherent(fd, object[0].handle, 0, sz, PROT_WRITE);
+	ptr[0] = magic;
+	gem_munmap(ptr, sz);
+
+	/* Blit src -> dst */
+	i = 0;
+	batch[i++] = XY_SRC_COPY_BLT_CMD |
+		  XY_SRC_COPY_BLT_WRITE_ALPHA |
+		  XY_SRC_COPY_BLT_WRITE_RGB;
+	if (gen >= 8)
+		batch[i - 1] |= 8;
+	else
+		batch[i - 1] |= 6;
+
+	batch[i++] = (3 << 24) | (0xcc << 16) | 4;
+	batch[i++] = 0;
+	batch[i++] = (1 << 16) | 4;
+	batch[i++] = object[1].offset;
+	if (gen >= 8)
+		batch[i++] = object[1].offset >> 32;
+	batch[i++] = 0;
+	batch[i++] = 4;
+	batch[i++] = object[0].offset;
+	if (gen >= 8)
+		batch[i++] = object[0].offset >> 32;
+	batch[i++] = MI_BATCH_BUFFER_END;
+	batch[i++] = MI_NOOP;
+
+	gem_write(fd, object[2].handle, 0, batch, i * sizeof(batch[0]));
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(object);
+	execbuf.buffer_count = 3;
+	if (gen >= 6)
+		execbuf.flags = I915_EXEC_BLT;
+	gem_execbuf(fd, &execbuf);
+	gem_sync(fd, object[1].handle);
+
+	/* Check dst data */
+	ptr = gem_mmap__device_coherent(fd, object[1].handle, 0, sz, PROT_READ);
+	copied = ptr[0];
+	gem_munmap(ptr, sz);
+
+	for (i = 0; i < ARRAY_SIZE(object); i++) {
+		igt_assert(intel_allocator_free(ahnd, object[i].handle));
+		gem_close(fd, object[i].handle);
+	}
+
+	igt_assert(copied == magic);
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
 struct allocators {
 	const char *name;
 	uint8_t type;
@@ -536,6 +624,9 @@ igt_main
 	igt_subtest_f("reopen-fork")
 		reopen_fork(fd);
 
+	igt_subtest_f("execbuf-with-allocator")
+		execbuf_with_allocator(fd);
+
 	igt_fixture
 		close(fd);
 }
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (27 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 28/35] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 30/35] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
                   ` (7 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

Exercise that object offsets produced by the allocator are valid for
execbuf and that no EINVAL/ENOSPC occurs. Check it also works properly
for multiprocess allocations/execbufs on the same context. As we're in
full-ppgtt, we also disable softpin to verify the offsets produced by
the allocator are valid and the kernel doesn't want to relocate them.

Add allocator-basic and allocator-basic-reserve to BAT.
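
The flow the new subtests exercise boils down to this condensed sketch
(using the helpers added below):

  ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);

  /* Optionally pin a few objects at the start/end of the GTT... */
  __reserve(ahnd, fd, true, objects, num_reserved, ressize);

  /* ...then let the allocator place everything else in between. */
  __exec_using_allocator(ahnd, fd, num_obj, true);

  __unreserve(ahnd, fd, objects, num_reserved, ressize);
  igt_assert(intel_allocator_close(ahnd) == true);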

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_softpin.c              | 194 ++++++++++++++++++++++++++
 tests/intel-ci/fast-feedback.testlist |   2 +
 2 files changed, 196 insertions(+)

diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index aba060a42..c3bfd10a9 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -28,6 +28,7 @@
 
 #include "i915/gem.h"
 #include "igt.h"
+#include "intel_allocator.h"
 
 #define EXEC_OBJECT_PINNED	(1<<4)
 #define EXEC_OBJECT_SUPPORTS_48B_ADDRESS (1<<3)
@@ -697,6 +698,184 @@ static void test_noreloc(int fd, enum sleep sleep, unsigned flags)
 		gem_close(fd, object[i].handle);
 }
 
+static void __reserve(uint64_t ahnd, int i915, bool pinned,
+		      struct drm_i915_gem_exec_object2 *objects,
+		      int num_obj, uint64_t size)
+{
+	uint64_t gtt = gem_aperture_size(i915);
+	unsigned int flags;
+	int i;
+
+	igt_assert(num_obj > 1);
+
+	flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	if (pinned)
+		flags |= EXEC_OBJECT_PINNED;
+
+	memset(objects, 0, sizeof(*objects) * num_obj);
+
+	for (i = 0; i < num_obj; i++) {
+		objects[i].handle = gem_create(i915, size);
+		if (i < num_obj/2)
+			objects[i].offset = i * size;
+		else
+			objects[i].offset = gtt - (i + 1 - num_obj/2) * size;
+		objects[i].flags = flags;
+
+		intel_allocator_reserve(ahnd, objects[i].handle,
+					size, objects[i].offset);
+		igt_debug("Reserve i: %d, handle: %u, offset: %llx\n", i,
+			  objects[i].handle, (long long) objects[i].offset);
+	}
+}
+
+static void __unreserve(uint64_t ahnd, int i915,
+			struct drm_i915_gem_exec_object2 *objects,
+			int num_obj, uint64_t size)
+{
+	int i;
+
+	for (i = 0; i < num_obj; i++) {
+		intel_allocator_unreserve(ahnd, objects[i].handle,
+					  size, objects[i].offset);
+		igt_debug("Unreserve i: %d, handle: %u, offset: %llx\n", i,
+			  objects[i].handle, (long long) objects[i].offset);
+		gem_close(i915, objects[i].handle);
+	}
+}
+
+static void __exec_using_allocator(uint64_t ahnd, int i915, int num_obj,
+				   bool pinned)
+{
+	const uint32_t bbe = MI_BATCH_BUFFER_END;
+	struct drm_i915_gem_execbuffer2 execbuf;
+	struct drm_i915_gem_exec_object2 object[num_obj];
+	uint64_t stored_offsets[num_obj];
+	unsigned int flags;
+	uint64_t sz = 4096;
+	int i;
+
+	igt_assert(num_obj > 10);
+
+	flags = EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	if (pinned)
+		flags |= EXEC_OBJECT_PINNED;
+
+	memset(object, 0, sizeof(object));
+
+	for (i = 0; i < num_obj; i++) {
+		sz = (rand() % 15 + 1) * 4096;
+		if (i == num_obj - 1)
+			sz = 4096;
+		object[i].handle = gem_create(i915, sz);
+		object[i].offset =
+			intel_allocator_alloc(ahnd, object[i].handle, sz, 0);
+	}
+	gem_write(i915, object[--i].handle, 0, &bbe, sizeof(bbe));
+
+	for (i = 0; i < num_obj; i++) {
+		object[i].flags = flags;
+		object[i].offset = gen8_canonical_addr(object[i].offset);
+		stored_offsets[i] = object[i].offset;
+	}
+
+	memset(&execbuf, 0, sizeof(execbuf));
+	execbuf.buffers_ptr = to_user_pointer(object);
+	execbuf.buffer_count = num_obj;
+	gem_execbuf(i915, &execbuf);
+
+	for (i = 0; i < num_obj; i++) {
+		igt_assert(intel_allocator_free(ahnd, object[i].handle));
+		gem_close(i915, object[i].handle);
+	}
+
+	/* Check kernel will keep offsets even if pinned is not set. */
+	for (i = 0; i < num_obj; i++)
+		igt_assert_eq_u64(stored_offsets[i], object[i].offset);
+}
+
+static void test_allocator_basic(int fd, bool reserve)
+{
+	const int num_obj = 257, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Check that we can place objects at start/end
+	 * of the GTT using the allocator.
+	 */
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	if (reserve)
+		__reserve(ahnd, fd, true, objects, num_reserved, ressize);
+	__exec_using_allocator(ahnd, fd, num_obj, true);
+	if (reserve)
+		__unreserve(ahnd, fd, objects, num_reserved, ressize);
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
+static void test_allocator_nopin(int fd, bool reserve)
+{
+	const int num_obj = 257, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Check that we can combine manual placement with automatic
+	 * GTT placement.
+	 *
+	 * This will also check that we agree with this small sampling of
+	 * allocator placements -- that is the given the same restrictions
+	 * in execobj[] the kernel does not reject the placement due
+	 * to overlaps or invalid addresses.
+	 */
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+
+	if (reserve)
+		__reserve(ahnd, fd, false, objects, num_reserved, ressize);
+
+	__exec_using_allocator(ahnd, fd, num_obj, false);
+
+	if (reserve)
+		__unreserve(ahnd, fd, objects, num_reserved, ressize);
+
+	igt_assert(intel_allocator_close(ahnd) == true);
+}
+
+static void test_allocator_fork(int fd)
+{
+	const int num_obj = 17, num_reserved = 8;
+	struct drm_i915_gem_exec_object2 objects[num_reserved];
+	uint64_t ahnd, ressize = 4096;
+
+	/*
+	 * Must be called before opening allocator in multiprocess environment
+	 * due to freeing previous allocator infrastructure and proper setup
+	 * of data structures and allocation thread.
+	 */
+	intel_allocator_multiprocess_start();
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	__reserve(ahnd, fd, true, objects, num_reserved, ressize);
+
+	igt_fork(child, 8) {
+		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+		igt_until_timeout(2)
+			__exec_using_allocator(ahnd, fd, num_obj, true);
+		intel_allocator_close(ahnd);
+	}
+
+	igt_waitchildren();
+
+	__unreserve(ahnd, fd, objects, num_reserved, ressize);
+	igt_assert(intel_allocator_close(ahnd) == true);
+
+	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	igt_assert(intel_allocator_close(ahnd) == true);
+
+	intel_allocator_multiprocess_stop();
+}
+
 igt_main
 {
 	int fd = -1;
@@ -727,6 +906,21 @@ igt_main
 
 		igt_subtest("full")
 			test_full(fd);
+
+		igt_subtest("allocator-basic")
+			test_allocator_basic(fd, false);
+
+		igt_subtest("allocator-basic-reserve")
+			test_allocator_basic(fd, true);
+
+		igt_subtest("allocator-nopin")
+			test_allocator_nopin(fd, false);
+
+		igt_subtest("allocator-nopin-reserve")
+			test_allocator_nopin(fd, true);
+
+		igt_subtest("allocator-fork")
+			test_allocator_fork(fd);
 	}
 
 	igt_subtest("softpin")
diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
index eaa904fa7..fa5006d2e 100644
--- a/tests/intel-ci/fast-feedback.testlist
+++ b/tests/intel-ci/fast-feedback.testlist
@@ -39,6 +39,8 @@ igt@gem_mmap_gtt@basic
 igt@gem_render_linear_blits@basic
 igt@gem_render_tiled_blits@basic
 igt@gem_ringfill@basic-all
+igt@gem_softpin@allocator-basic
+igt@gem_softpin@allocator-basic-reserve
 igt@gem_sync@basic-all
 igt@gem_sync@basic-each
 igt@gem_tiled_blits@basic
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 30/35] tests/gem|kms: Remove intel_bb from fixture
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (28 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
                   ` (6 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

As intel_bb "opens" a connection to the allocator, a test that
completes (mostly a failed one) can leave the allocator in an unknown
state. As igt_core is armed to reset the allocator infrastructure
between tests, the connection kept inside intel_bb is not valid
anymore; trying to use it leads to catastrophic errors.

Migrate intel_bb out of the fixture and create it inside each test
individually.
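
The resulting per-subtest lifetime, sketched:

  igt_subtest("example") {
  	struct intel_bb *ibb = intel_bb_create(fd, PAGE_SIZE);

  	/* ... body of the test ... */

  	/* Drops this intel_bb's allocator connection with the test. */
  	intel_bb_destroy(ibb);
  }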

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_caching.c              | 14 ++++++++--
 tests/i915/gem_partial_pwrite_pread.c | 40 +++++++++++++++++----------
 tests/i915/gem_render_copy.c          | 31 ++++++++++-----------
 tests/kms_big_fb.c                    | 12 +++++---
 4 files changed, 61 insertions(+), 36 deletions(-)

diff --git a/tests/i915/gem_caching.c b/tests/i915/gem_caching.c
index bdaff68a0..4e844952f 100644
--- a/tests/i915/gem_caching.c
+++ b/tests/i915/gem_caching.c
@@ -158,7 +158,6 @@ igt_main
 			flags = 0;
 		}
 		data.bops = buf_ops_create(data.fd);
-		ibb = intel_bb_create(data.fd, PAGE_SIZE);
 
 		scratch_buf = intel_buf_create(data.bops, BO_SIZE/4, 1,
 					       32, 0, I915_TILING_NONE, 0);
@@ -174,6 +173,8 @@ igt_main
 
 		igt_info("checking partial reads\n");
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		for (i = 0; i < ROUNDS; i++) {
 			uint8_t val0 = i;
 			int start, len;
@@ -195,11 +196,15 @@ igt_main
 
 			igt_progress("partial reads test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_subtest("writes") {
 		igt_require(flags & TEST_WRITE);
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		igt_info("checking partial writes\n");
 
 		for (i = 0; i < ROUNDS; i++) {
@@ -240,11 +245,15 @@ igt_main
 
 			igt_progress("partial writes test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_subtest("read-writes") {
 		igt_require((flags & TEST_BOTH) == TEST_BOTH);
 
+		ibb = intel_bb_create(data.fd, PAGE_SIZE);
+
 		igt_info("checking partial writes after partial reads\n");
 
 		for (i = 0; i < ROUNDS; i++) {
@@ -307,10 +316,11 @@ igt_main
 
 			igt_progress("partial read/writes test: ", i, ROUNDS);
 		}
+
+		intel_bb_destroy(ibb);
 	}
 
 	igt_fixture {
-		intel_bb_destroy(ibb);
 		intel_buf_destroy(scratch_buf);
 		intel_buf_destroy(staging_buf);
 		buf_ops_destroy(data.bops);
diff --git a/tests/i915/gem_partial_pwrite_pread.c b/tests/i915/gem_partial_pwrite_pread.c
index 72c33539d..c2ca561e3 100644
--- a/tests/i915/gem_partial_pwrite_pread.c
+++ b/tests/i915/gem_partial_pwrite_pread.c
@@ -53,7 +53,6 @@ IGT_TEST_DESCRIPTION("Test pwrite/pread consistency when touching partial"
 #define PAGE_SIZE 4096
 #define BO_SIZE (4*4096)
 
-struct intel_bb *ibb;
 struct intel_buf *scratch_buf;
 struct intel_buf *staging_buf;
 
@@ -77,7 +76,8 @@ static void *__try_gtt_map_first(data_t *data, struct intel_buf *buf,
 	return ptr;
 }
 
-static void copy_bo(struct intel_buf *src, struct intel_buf *dst)
+static void copy_bo(struct intel_bb *ibb,
+		    struct intel_buf *src, struct intel_buf *dst)
 {
 	bool has_64b_reloc;
 
@@ -109,8 +109,8 @@ static void copy_bo(struct intel_buf *src, struct intel_buf *dst)
 }
 
 static void
-blt_bo_fill(data_t *data, struct intel_buf *tmp_bo,
-		struct intel_buf *bo, uint8_t val)
+blt_bo_fill(data_t *data, struct intel_bb *ibb,
+	    struct intel_buf *tmp_bo, struct intel_buf *bo, uint8_t val)
 {
 	uint8_t *gtt_ptr;
 	int i;
@@ -124,7 +124,7 @@ blt_bo_fill(data_t *data, struct intel_buf *tmp_bo,
 
 	igt_drop_caches_set(data->drm_fd, DROP_BOUND);
 
-	copy_bo(tmp_bo, bo);
+	copy_bo(ibb, tmp_bo, bo);
 }
 
 #define MAX_BLT_SIZE 128
@@ -139,14 +139,17 @@ static void get_range(int *start, int *len)
 
 static void test_partial_reads(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial reads\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		get_range(&start, &len);
 		gem_read(data->drm_fd, scratch_buf->handle, start, tmp, len);
@@ -159,26 +162,31 @@ static void test_partial_reads(data_t *data)
 
 		igt_progress("partial reads test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void test_partial_writes(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 	uint8_t *gtt_ptr;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial writes\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		memset(tmp, i + 63, BO_SIZE);
 
 		get_range(&start, &len);
 		gem_write(data->drm_fd, scratch_buf->handle, start, tmp, len);
 
-		copy_bo(scratch_buf, staging_buf);
+		copy_bo(ibb, scratch_buf, staging_buf);
 		gtt_ptr = __try_gtt_map_first(data, staging_buf, 0);
 
 		for (j = 0; j < start; j++) {
@@ -200,19 +208,24 @@ static void test_partial_writes(data_t *data)
 
 		igt_progress("partial writes test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void test_partial_read_writes(data_t *data)
 {
+	struct intel_bb *ibb;
 	int i, j;
 	uint8_t *gtt_ptr;
 
+	ibb = intel_bb_create(data->drm_fd, PAGE_SIZE);
+
 	igt_info("checking partial writes after partial reads\n");
 	for (i = 0; i < ROUNDS; i++) {
 		uint8_t val = i;
 		int start, len;
 
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		/* partial read */
 		get_range(&start, &len);
@@ -226,7 +239,7 @@ static void test_partial_read_writes(data_t *data)
 		/* Change contents through gtt to make the pread cachelines
 		 * stale. */
 		val += 17;
-		blt_bo_fill(data, staging_buf, scratch_buf, val);
+		blt_bo_fill(data, ibb, staging_buf, scratch_buf, val);
 
 		/* partial write */
 		memset(tmp, i + 63, BO_SIZE);
@@ -234,7 +247,7 @@ static void test_partial_read_writes(data_t *data)
 		get_range(&start, &len);
 		gem_write(data->drm_fd, scratch_buf->handle, start, tmp, len);
 
-		copy_bo(scratch_buf, staging_buf);
+		copy_bo(ibb, scratch_buf, staging_buf);
 		gtt_ptr = __try_gtt_map_first(data, staging_buf, 0);
 
 		for (j = 0; j < start; j++) {
@@ -256,6 +269,8 @@ static void test_partial_read_writes(data_t *data)
 
 		igt_progress("partial read/writes test: ", i, ROUNDS);
 	}
+
+	intel_bb_destroy(ibb);
 }
 
 static void do_tests(data_t *data, int cache_level, const char *suffix)
@@ -288,8 +303,6 @@ igt_main
 		data.devid = intel_get_drm_devid(data.drm_fd);
 		data.bops = buf_ops_create(data.drm_fd);
 
-		ibb = intel_bb_create(data.drm_fd, PAGE_SIZE);
-
 		/* overallocate the buffers we're actually using because */	
 		scratch_buf = intel_buf_create(data.bops, BO_SIZE/4, 1, 32, 0, I915_TILING_NONE, 0);
 		staging_buf = intel_buf_create(data.bops, BO_SIZE/4, 1, 32, 0, I915_TILING_NONE, 0);
@@ -303,7 +316,6 @@ igt_main
 	do_tests(&data, 2, "-display");
 
 	igt_fixture {
-		intel_bb_destroy(ibb);
 		intel_buf_destroy(scratch_buf);
 		intel_buf_destroy(staging_buf);
 		buf_ops_destroy(data.bops);
diff --git a/tests/i915/gem_render_copy.c b/tests/i915/gem_render_copy.c
index afc490f1a..e48b5b996 100644
--- a/tests/i915/gem_render_copy.c
+++ b/tests/i915/gem_render_copy.c
@@ -58,7 +58,6 @@ typedef struct {
 	int drm_fd;
 	uint32_t devid;
 	struct buf_ops *bops;
-	struct intel_bb *ibb;
 	igt_render_copyfunc_t render_copy;
 	igt_vebox_copyfunc_t vebox_copy;
 } data_t;
@@ -341,6 +340,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		 enum i915_compression dst_compression,
 		 int flags)
 {
+	struct intel_bb *ibb;
 	struct intel_buf ref, src_tiled, src_ccs, dst_ccs, dst;
 	struct {
 		struct intel_buf buf;
@@ -397,6 +397,8 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	    src_compressed || dst_compressed)
 		igt_require(intel_gen(data->devid) >= 9);
 
+	ibb = intel_bb_create(data->drm_fd, 4096);
+
 	for (int i = 0; i < num_src; i++)
 		scratch_buf_init(data, &src[i].buf, WIDTH, HEIGHT, src[i].tiling,
 				 I915_COMPRESSION_NONE);
@@ -456,12 +458,12 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	 */
 	if (src_mixed_tiled) {
 		if (dst_compressed)
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst, 0, 0, WIDTH, HEIGHT,
 					  &dst_ccs, 0, 0);
 
 		for (int i = 0; i < num_src; i++) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &src[i].buf,
 					  WIDTH/4, HEIGHT/4, WIDTH/2-2, HEIGHT/2-2,
 					  dst_compressed ? &dst_ccs : &dst,
@@ -469,13 +471,13 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		}
 
 		if (dst_compressed)
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst_ccs, 0, 0, WIDTH, HEIGHT,
 					  &dst, 0, 0);
 
 	} else {
 		if (src_compression == I915_COMPRESSION_RENDER) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &src_tiled, 0, 0, WIDTH, HEIGHT,
 					  &src_ccs,
 					  0, 0);
@@ -486,7 +488,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 						       "render-src_ccs.bin");
 			}
 		} else if (src_compression == I915_COMPRESSION_MEDIA) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 &src_tiled, WIDTH, HEIGHT,
 					 &src_ccs);
 			if (dump_compressed_src_buf) {
@@ -498,34 +500,34 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 		}
 
 		if (dst_compression == I915_COMPRESSION_RENDER) {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  src_compressed ? &src_ccs : &src_tiled,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst_ccs,
 					  0, 0);
 
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  &dst_ccs,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst,
 					  0, 0);
 		} else if (dst_compression == I915_COMPRESSION_MEDIA) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 src_compressed ? &src_ccs : &src_tiled,
 					 WIDTH, HEIGHT,
 					 &dst_ccs);
 
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 &dst_ccs,
 					 WIDTH, HEIGHT,
 					 &dst);
 		} else if (force_vebox_dst_copy) {
-			data->vebox_copy(data->ibb,
+			data->vebox_copy(ibb,
 					 src_compressed ? &src_ccs : &src_tiled,
 					 WIDTH, HEIGHT,
 					 &dst);
 		} else {
-			data->render_copy(data->ibb,
+			data->render_copy(ibb,
 					  src_compressed ? &src_ccs : &src_tiled,
 					  0, 0, WIDTH, HEIGHT,
 					  &dst,
@@ -572,8 +574,7 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling,
 	for (int i = 0; i < num_src; i++)
 		scratch_buf_fini(&src[i].buf);
 
-	/* handles gone, need to clean the objects cache within intel_bb */
-	intel_bb_reset(data->ibb, true);
+	intel_bb_destroy(ibb);
 }
 
 static int opt_handler(int opt, int opt_index, void *data)
@@ -796,7 +797,6 @@ igt_main_args("dac", NULL, help_str, opt_handler, NULL)
 		data.vebox_copy = igt_get_vebox_copyfunc(data.devid);
 
 		data.bops = buf_ops_create(data.drm_fd);
-		data.ibb = intel_bb_create(data.drm_fd, 4096);
 
 		igt_fork_hang_detector(data.drm_fd);
 	}
@@ -849,7 +849,6 @@ igt_main_args("dac", NULL, help_str, opt_handler, NULL)
 
 	igt_fixture {
 		igt_stop_hang_detector();
-		intel_bb_destroy(data.ibb);
 		buf_ops_destroy(data.bops);
 	}
 }
diff --git a/tests/kms_big_fb.c b/tests/kms_big_fb.c
index 5260176e1..b6707a5a4 100644
--- a/tests/kms_big_fb.c
+++ b/tests/kms_big_fb.c
@@ -664,7 +664,6 @@ igt_main
 			data.render_copy = igt_get_render_copyfunc(data.devid);
 
 		data.bops = buf_ops_create(data.drm_fd);
-		data.ibb = intel_bb_create(data.drm_fd, 4096);
 	}
 
 	/*
@@ -677,7 +676,9 @@ igt_main
 		igt_subtest_f("%s-addfb-size-overflow",
 			      modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_size_overflow(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
@@ -685,15 +686,18 @@ igt_main
 		igt_subtest_f("%s-addfb-size-offset-overflow",
 			      modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_size_offset_overflow(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
 	for (int i = 0; i < ARRAY_SIZE(modifiers); i++) {
 		igt_subtest_f("%s-addfb", modifiers[i].name) {
 			data.modifier = modifiers[i].modifier;
-
+			data.ibb = intel_bb_create(data.drm_fd, 4096);
 			test_addfb(&data);
+			intel_bb_destroy(data.ibb);
 		}
 	}
 
@@ -711,7 +715,9 @@ igt_main
 					igt_require(data.format == DRM_FORMAT_C8 ||
 						    igt_fb_supported_format(data.format));
 					igt_require(igt_display_has_format_mod(&data.display, data.format, data.modifier));
+					data.ibb = intel_bb_create(data.drm_fd, 4096);
 					test_scanout(&data);
+					intel_bb_destroy(data.ibb);
 				}
 			}
 
@@ -722,8 +728,6 @@ igt_main
 
 	igt_fixture {
 		igt_display_fini(&data.display);
-
-		intel_bb_destroy(data.ibb);
 		buf_ops_destroy(data.bops);
 	}
 }
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (29 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 30/35] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
                   ` (5 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

Generally, when playing with intel_buf we should use the wrapper code
instead of adding the buffer to intel_bb directly. The wrapper checks
the alignment required on specific gens, so it protects us from
passing improperly aligned addresses.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_mmap_offset.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/i915/gem_mmap_offset.c b/tests/i915/gem_mmap_offset.c
index 5e48cd697..8f2006274 100644
--- a/tests/i915/gem_mmap_offset.c
+++ b/tests/i915/gem_mmap_offset.c
@@ -614,8 +614,8 @@ static void blt_coherency(int i915)
 	dst = create_bo(bops, 1, width, height);
 	size = src->surface[0].size;
 
-	intel_bb_add_object(ibb, src->handle, size, src->addr.offset, 0, false);
-	intel_bb_add_object(ibb, dst->handle, size, dst->addr.offset, 0, true);
+	intel_bb_add_intel_buf(ibb, src, false);
+	intel_bb_add_intel_buf(ibb, dst, true);
 
 	intel_bb_blt_copy(ibb,
 			  src, 0, 0, src->surface[0].stride,
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (30 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
                   ` (4 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

The tests work in a multiprocess environment, so we must start/stop the
thread responsible for allocations in the parent process.
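
The bracketing this adds, in a minimal sketch (allocations happen in
the forked children, the allocator thread lives in the parent):

  intel_allocator_multiprocess_start();

  igt_fork(child, nchild) {
  	/* children open allocator handles and run the copies */
  }
  igt_waitchildren();

  intel_allocator_multiprocess_stop();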

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_ppgtt.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tests/i915/gem_ppgtt.c b/tests/i915/gem_ppgtt.c
index 5f8b34224..678a63498 100644
--- a/tests/i915/gem_ppgtt.c
+++ b/tests/i915/gem_ppgtt.c
@@ -283,10 +283,13 @@ igt_main
 		rcs = calloc(sizeof(*rcs), nchild);
 		igt_assert(rcs);
 
+		intel_allocator_multiprocess_start();
+
 		fork_bcs_copy(30, 0x4000, bcs, 1);
 		fork_rcs_copy(30, 0x8000 / nchild, rcs, nchild, 0);
 
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 
 		surfaces_check(bcs, 1, 0x4000);
 		surfaces_check(rcs, nchild, 0x8000 / nchild);
@@ -310,10 +313,13 @@ igt_main
 		rcs = calloc(sizeof(*rcs), nchild);
 		igt_assert(rcs);
 
+		intel_allocator_multiprocess_start();
+
 		fork_rcs_copy(30, 0x8000 / nchild, rcs, nchild, CREATE_CONTEXT);
 		fork_bcs_copy(30, 0x4000, bcs, 1);
 
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 
 		surfaces_check(bcs, 1, 0x4000);
 		surfaces_check(rcs, nchild, 0x8000 / nchild);
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (31 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 34/35] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
                   ` (3 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

As intel_bb has some strong requirements when the allocator is in use
(addresses cannot move when the simple allocator is used), ensure a gem
buffer created from flink reacquires a new address.

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 tests/i915/gem_render_copy_redux.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/tests/i915/gem_render_copy_redux.c b/tests/i915/gem_render_copy_redux.c
index 40308a6e6..6c8d7f755 100644
--- a/tests/i915/gem_render_copy_redux.c
+++ b/tests/i915/gem_render_copy_redux.c
@@ -63,7 +63,6 @@ typedef struct {
 	int fd;
 	uint32_t devid;
 	struct buf_ops *bops;
-	struct intel_bb *ibb;
 	igt_render_copyfunc_t render_copy;
 	uint32_t linear[WIDTH * HEIGHT];
 } data_t;
@@ -77,13 +76,10 @@ static void data_init(data_t *data)
 	data->render_copy = igt_get_render_copyfunc(data->devid);
 	igt_require_f(data->render_copy,
 		      "no render-copy function\n");
-
-	data->ibb = intel_bb_create(data->fd, 4096);
 }
 
 static void data_fini(data_t *data)
 {
-	intel_bb_destroy(data->ibb);
 	buf_ops_destroy(data->bops);
 	close(data->fd);
 }
@@ -126,15 +122,17 @@ scratch_buf_check(data_t *data, struct intel_buf *buf, int x, int y,
 
 static void copy(data_t *data)
 {
+	struct intel_bb *ibb;
 	struct intel_buf src, dst;
 
+	ibb = intel_bb_create(data->fd, 4096);
 	scratch_buf_init(data, &src, WIDTH, HEIGHT, STRIDE, SRC_COLOR);
 	scratch_buf_init(data, &dst, WIDTH, HEIGHT, STRIDE, DST_COLOR);
 
 	scratch_buf_check(data, &src, WIDTH / 2, HEIGHT / 2, SRC_COLOR);
 	scratch_buf_check(data, &dst, WIDTH / 2, HEIGHT / 2, DST_COLOR);
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &src, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH / 2, HEIGHT / 2);
 
@@ -143,11 +141,13 @@ static void copy(data_t *data)
 
 	scratch_buf_fini(data, &src);
 	scratch_buf_fini(data, &dst);
+	intel_bb_destroy(ibb);
 }
 
 static void copy_flink(data_t *data)
 {
 	data_t local;
+	struct intel_bb *ibb, *local_ibb;
 	struct intel_buf src, dst;
 	struct intel_buf local_src, local_dst;
 	struct intel_buf flink;
@@ -155,32 +155,36 @@ static void copy_flink(data_t *data)
 
 	data_init(&local);
 
+	ibb = intel_bb_create(data->fd, 4096);
+	local_ibb = intel_bb_create(local.fd, 4096);
 	scratch_buf_init(data, &src, WIDTH, HEIGHT, STRIDE, 0);
 	scratch_buf_init(data, &dst, WIDTH, HEIGHT, STRIDE, DST_COLOR);
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &src, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH, HEIGHT);
 
 	scratch_buf_init(&local, &local_src, WIDTH, HEIGHT, STRIDE, 0);
 	scratch_buf_init(&local, &local_dst, WIDTH, HEIGHT, STRIDE, SRC_COLOR);
 
-	local.render_copy(local.ibb,
+	local.render_copy(local_ibb,
 			  &local_src, 0, 0, WIDTH, HEIGHT,
 			  &local_dst, WIDTH, HEIGHT);
 
 	name = gem_flink(local.fd, local_dst.handle);
 	flink = local_dst;
 	flink.handle = gem_open(data->fd, name);
+	flink.ibb = ibb;
+	flink.addr.offset = INTEL_BUF_INVALID_ADDRESS;
 
-	data->render_copy(data->ibb,
+	data->render_copy(ibb,
 			  &flink, 0, 0, WIDTH, HEIGHT,
 			  &dst, WIDTH / 2, HEIGHT / 2);
 
 	scratch_buf_check(data, &dst, 10, 10, DST_COLOR);
 	scratch_buf_check(data, &dst, WIDTH - 10, HEIGHT - 10, SRC_COLOR);
 
-	intel_bb_reset(data->ibb, true);
+	intel_bb_reset(ibb, true);
 	scratch_buf_fini(data, &src);
 	scratch_buf_fini(data, &flink);
 	scratch_buf_fini(data, &dst);
@@ -188,6 +192,8 @@ static void copy_flink(data_t *data)
 	scratch_buf_fini(&local, &local_src);
 	scratch_buf_fini(&local, &local_dst);
 
+	intel_bb_destroy(local_ibb);
+	intel_bb_destroy(ibb);
 	data_fini(&local);
 }
 
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 34/35] tests/perf.c: Remove buffer from batch
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (32 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 35/35] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
                   ` (2 subsequent siblings)
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

Currently we need to ensure an intel_buf is part of a single ibb, due
to the fact that it acquires its address from the allocator, so we
cannot keep it in two separate ibbs (theoretically this would be
possible if different ibbs shared the same context, but that is not
currently implemented).

Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
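In practice that means a buffer implicitly added to one ibb (e.g. by a
rendercopy) has to be removed from it before being reused on another.
A sketch with the helpers used in this patch:

        /* after the last use of dst_buf on ibb0 ... */
        intel_bb_flush_render(ibb0);
        intel_bb_remove_intel_buf(ibb0, dst_buf);
        /* ... dst_buf may now be (re)added implicitly on ibb1 */
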
 tests/i915/perf.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tests/i915/perf.c b/tests/i915/perf.c
index 664fd0a94..e641d5d2d 100644
--- a/tests/i915/perf.c
+++ b/tests/i915/perf.c
@@ -3527,6 +3527,9 @@ gen8_test_single_ctx_render_target_writes_a_counter(void)
 			/* Another redundant flush to clarify batch bo is free to reuse */
 			intel_bb_flush_render(ibb0);
 
+			/* Remove intel_buf from ibb0 added implicitly in rendercopy */
+			intel_bb_remove_intel_buf(ibb0, dst_buf);
+
 			/* submit two copies on the other context to avoid a false
 			 * positive in case the driver somehow ended up filtering for
 			 * context1
@@ -3919,6 +3922,9 @@ static void gen12_single_ctx_helper(void)
 				     BO_REPORT_ID0);
 	intel_bb_flush_render(ibb0);
 
+	/* Remove intel_buf from ibb0 added implicitly in rendercopy */
+	intel_bb_remove_intel_buf(ibb0, dst_buf);
+
 	/* This is the work/context that is measured for counter increments */
 	render_copy(ibb0,
 		    &src[0], 0, 0, width, height,
@@ -3965,6 +3971,9 @@ static void gen12_single_ctx_helper(void)
 				     BO_REPORT_ID3);
 	intel_bb_flush_render(ibb1);
 
+	/* Remove intel_buf from ibb1 added implicitly in rendercopy */
+	intel_bb_remove_intel_buf(ibb1, dst_buf);
+
 	/* Submit an mi-rpc to context0 after all measurable work */
 #define BO_TIMESTAMP_OFFSET1 1032
 #define BO_REPORT_OFFSET1 256
-- 
2.26.0

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] [PATCH i-g-t 35/35] tests/gem_linear_blits: Use intel allocator
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (33 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 34/35] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
@ 2021-02-16 11:40 ` Zbigniew Kempczyński
  2021-02-16 13:21 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev21) Patchwork
  2021-02-16 15:03 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  36 siblings, 0 replies; 38+ messages in thread
From: Zbigniew Kempczyński @ 2021-02-16 11:40 UTC (permalink / raw)
  To: igt-dev

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Use intel allocator directly, without intel-bb infrastructure.

v2: for relocations use incremented offsets instead of 0, as suggested.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
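The bare allocator API used below follows a simple
open/alloc/free/close lifecycle (a sketch; the allocator names come
from this series):

        uint64_t ahnd, offset;

        ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
        offset = intel_allocator_alloc(ahnd, handle, size, 4096 /* alignment */);
        /* use CANONICAL(offset) as obj.offset with EXEC_OBJECT_PINNED */
        intel_allocator_free(ahnd, handle);
        intel_allocator_close(ahnd);
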
 tests/i915/gem_linear_blits.c | 117 +++++++++++++++++++++++++---------
 1 file changed, 87 insertions(+), 30 deletions(-)

diff --git a/tests/i915/gem_linear_blits.c b/tests/i915/gem_linear_blits.c
index cae42d52a..d351463fc 100644
--- a/tests/i915/gem_linear_blits.c
+++ b/tests/i915/gem_linear_blits.c
@@ -56,7 +56,8 @@ IGT_TEST_DESCRIPTION("Test doing many blits with a working set larger than the"
 static uint32_t linear[WIDTH*HEIGHT];
 
 static void
-copy(int fd, uint32_t dst, uint32_t src)
+copy(int fd, uint64_t ahnd, uint32_t dst, uint32_t src,
+     uint64_t dst_offset, uint64_t src_offset, bool do_relocs)
 {
 	uint32_t batch[12];
 	struct drm_i915_gem_relocation_entry reloc[2];
@@ -77,41 +78,58 @@ copy(int fd, uint32_t dst, uint32_t src)
 		  WIDTH*4;
 	batch[i++] = 0; /* dst x1,y1 */
 	batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */
-	batch[i++] = 0; /* dst reloc */
+	batch[i++] = dst_offset;
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		batch[i++] = 0;
+		batch[i++] = dst_offset >> 32;
 	batch[i++] = 0; /* src x1,y1 */
 	batch[i++] = WIDTH*4;
-	batch[i++] = 0; /* src reloc */
+	batch[i++] = src_offset;
 	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		batch[i++] = 0;
+		batch[i++] = src_offset >> 32;
 	batch[i++] = MI_BATCH_BUFFER_END;
 	batch[i++] = MI_NOOP;
 
-	memset(reloc, 0, sizeof(reloc));
-	reloc[0].target_handle = dst;
-	reloc[0].delta = 0;
-	reloc[0].offset = 4 * sizeof(batch[0]);
-	reloc[0].presumed_offset = 0;
-	reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
-	reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
-
-	reloc[1].target_handle = src;
-	reloc[1].delta = 0;
-	reloc[1].offset = 7 * sizeof(batch[0]);
-	if (intel_gen(intel_get_drm_devid(fd)) >= 8)
-		reloc[1].offset += sizeof(batch[0]);
-	reloc[1].presumed_offset = 0;
-	reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
-	reloc[1].write_domain = 0;
-
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = dst;
 	obj[1].handle = src;
 	obj[2].handle = gem_create(fd, 4096);
 	gem_write(fd, obj[2].handle, 0, batch, i * sizeof(batch[0]));
-	obj[2].relocation_count = 2;
-	obj[2].relocs_ptr = to_user_pointer(reloc);
+
+	if (do_relocs) {
+		memset(reloc, 0, sizeof(reloc));
+		reloc[0].target_handle = dst;
+		reloc[0].delta = 0;
+		reloc[0].offset = 4 * sizeof(batch[0]);
+		reloc[0].presumed_offset = dst_offset;
+		reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
+		reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
+
+		reloc[1].target_handle = src;
+		reloc[1].delta = 0;
+		reloc[1].offset = 7 * sizeof(batch[0]);
+		if (intel_gen(intel_get_drm_devid(fd)) >= 8)
+			reloc[1].offset += sizeof(batch[0]);
+		reloc[1].presumed_offset = src_offset;
+		reloc[1].read_domains = I915_GEM_DOMAIN_RENDER;
+		reloc[1].write_domain = 0;
+
+		obj[2].relocation_count = 2;
+		obj[2].relocs_ptr = to_user_pointer(reloc);
+	} else {
+		obj[0].offset = CANONICAL(dst_offset);
+		obj[0].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE |
+			       EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+		obj[1].offset = CANONICAL(src_offset);
+		obj[1].flags = EXEC_OBJECT_PINNED |
+			       EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+
+		obj[2].offset = intel_allocator_alloc(ahnd, obj[2].handle,
+						      sizeof(linear), 0);
+		obj[2].offset = CANONICAL(obj[2].offset);
+		obj[2].flags = EXEC_OBJECT_PINNED |
+			       EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+	}
 
 	memset(&exec, 0, sizeof(exec));
 	exec.buffers_ptr = to_user_pointer(obj);
@@ -120,6 +138,9 @@ copy(int fd, uint32_t dst, uint32_t src)
 	exec.flags = gem_has_blt(fd) ? I915_EXEC_BLT : 0;
 
 	gem_execbuf(fd, &exec);
+
+	if (!do_relocs)
+		intel_allocator_free(ahnd, obj[2].handle);
 	gem_close(fd, obj[2].handle);
 }
 
@@ -157,17 +178,38 @@ check_bo(int fd, uint32_t handle, uint32_t val)
 	igt_assert_eq(num_errors, 0);
 }
 
-static void run_test(int fd, int count)
+/* We don't have alignment detection yet, so assume worst scenario */
+#define ALIGNMENT (2048*1024)
+static void run_test(int fd, int count, bool do_relocs)
 {
 	uint32_t *handle, *start_val;
+	uint64_t *offset, ahnd, gtt_size, address = 0;
 	uint32_t start = 0;
 	int i;
 
+	if (!do_relocs) {
+		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
+	} else {
+		gtt_size = gem_aperture_size(fd);
+		if ((gtt_size - 1) >> 32)
+			gtt_size = 1ULL << 32;
+	}
+
 	handle = malloc(sizeof(uint32_t) * count * 2);
 	start_val = handle + count;
+	offset = calloc(1, sizeof(uint64_t) * count);
 
 	for (i = 0; i < count; i++) {
 		handle[i] = create_bo(fd, start);
+
+		if (!do_relocs) {
+			offset[i] = intel_allocator_alloc(ahnd, handle[i],
+							  sizeof(linear), 4096);
+		} else {
+			offset[i] = address % gtt_size;
+			address = ALIGN(address + sizeof(linear), ALIGNMENT);
+		}
+
 		start_val[i] = start;
 		start += 1024 * 1024 / 4;
 	}
@@ -178,17 +220,23 @@ static void run_test(int fd, int count)
 
 		if (src == dst)
 			continue;
+		copy(fd, ahnd, handle[dst], handle[src],
+		     offset[dst], offset[src], do_relocs);
 
-		copy(fd, handle[dst], handle[src]);
 		start_val[dst] = start_val[src];
 	}
 
 	for (i = 0; i < count; i++) {
 		check_bo(fd, handle[i], start_val[i]);
+		if (!do_relocs)
+			intel_allocator_free(ahnd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
 
 	free(handle);
+
+	if (!do_relocs)
+		intel_allocator_close(ahnd);
 }
 
 #define MAX_32b ((1ull << 32) - 4096)
@@ -197,38 +245,47 @@ igt_main
 {
 	const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
 	uint64_t count = 0;
+	bool do_relocs;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(fd);
 		gem_require_blitter(fd);
+		do_relocs = !gem_uses_ppgtt(fd);
 
 		count = gem_aperture_size(fd);
 		if (count >> 32)
 			count = MAX_32b;
+		else
+			do_relocs = true;
+
 		count = 3 + count / (1024*1024);
 		igt_require(count > 1);
 		intel_require_memory(count, sizeof(linear), CHECK_RAM);
-
 		igt_debug("Using %'"PRIu64" 1MiB buffers\n", count);
 		count = (count + ncpus - 1) / ncpus;
 	}
 
 	igt_subtest("basic")
-		run_test(fd, 2);
+		run_test(fd, 2, do_relocs);
 
 	igt_subtest("normal") {
+		intel_allocator_multiprocess_start();
+		do_relocs = true;
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, do_relocs);
 		igt_waitchildren();
+		intel_allocator_multiprocess_stop();
 	}
 
 	igt_subtest("interruptible") {
+		intel_allocator_multiprocess_start();
 		igt_fork_signal_helper();
 		igt_fork(child, ncpus)
-			run_test(fd, count);
+			run_test(fd, count, do_relocs);
 		igt_waitchildren();
 		igt_stop_signal_helper();
+		intel_allocator_multiprocess_stop();
 	}
 }
-- 
2.26.0
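
A side note on the CANONICAL() calls in the softpin path above: with
48-bit ppGTT the hardware expects pinned addresses in canonical form,
i.e. with bit 47 sign-extended through the upper bits. A minimal sketch
of that conversion (the in-tree macro may differ in detail):

        /* Sign-extend bit 47 so execbuf accepts the softpinned address. */
        static inline uint64_t to_canonical(uint64_t address)
        {
                return (int64_t)(address << 16) >> 16;
        }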

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev21)
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (34 preceding siblings ...)
  2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 35/35] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
@ 2021-02-16 13:21 ` Patchwork
  2021-02-16 15:03 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
  36 siblings, 0 replies; 38+ messages in thread
From: Patchwork @ 2021-02-16 13:21 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 12471 bytes --]

== Series Details ==

Series: Introduce IGT allocator (rev21)
URL   : https://patchwork.freedesktop.org/series/82954/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9778 -> IGTPW_5513
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_5513:

### IGT changes ###

#### Possible regressions ####

  * {igt@gem_softpin@allocator-basic-reserve} (NEW):
    - fi-glk-dsi:         NOTRUN -> [WARN][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-glk-dsi/igt@gem_softpin@allocator-basic-reserve.html
    - fi-cml-s:           NOTRUN -> [WARN][2]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-cml-s/igt@gem_softpin@allocator-basic-reserve.html
    - fi-cfl-guc:         NOTRUN -> [WARN][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-cfl-guc/igt@gem_softpin@allocator-basic-reserve.html
    - fi-kbl-soraka:      NOTRUN -> [WARN][4]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-soraka/igt@gem_softpin@allocator-basic-reserve.html
    - {fi-ehl-1}:         NOTRUN -> [WARN][5]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-ehl-1/igt@gem_softpin@allocator-basic-reserve.html
    - fi-skl-6700k2:      NOTRUN -> [WARN][6]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-skl-6700k2/igt@gem_softpin@allocator-basic-reserve.html
    - fi-bsw-n3050:       NOTRUN -> [WARN][7]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-bsw-n3050/igt@gem_softpin@allocator-basic-reserve.html
    - fi-tgl-u2:          NOTRUN -> [WARN][8]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-tgl-u2/igt@gem_softpin@allocator-basic-reserve.html
    - fi-cml-u2:          NOTRUN -> [WARN][9]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-cml-u2/igt@gem_softpin@allocator-basic-reserve.html
    - {fi-jsl-1}:         NOTRUN -> [WARN][10]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-jsl-1/igt@gem_softpin@allocator-basic-reserve.html
    - fi-cfl-8700k:       NOTRUN -> [WARN][11]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-cfl-8700k/igt@gem_softpin@allocator-basic-reserve.html
    - fi-bxt-dsi:         NOTRUN -> [WARN][12]
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-bxt-dsi/igt@gem_softpin@allocator-basic-reserve.html
    - fi-skl-6600u:       NOTRUN -> [WARN][13]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-skl-6600u/igt@gem_softpin@allocator-basic-reserve.html
    - fi-cfl-8109u:       NOTRUN -> [WARN][14]
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-cfl-8109u/igt@gem_softpin@allocator-basic-reserve.html
    - {fi-tgl-dsi}:       NOTRUN -> [WARN][15]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-tgl-dsi/igt@gem_softpin@allocator-basic-reserve.html
    - fi-bsw-nick:        NOTRUN -> [WARN][16]
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-bsw-nick/igt@gem_softpin@allocator-basic-reserve.html
    - fi-icl-y:           NOTRUN -> [WARN][17]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-icl-y/igt@gem_softpin@allocator-basic-reserve.html
    - fi-apl-guc:         NOTRUN -> [WARN][18]
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-apl-guc/igt@gem_softpin@allocator-basic-reserve.html
    - {fi-ehl-2}:         NOTRUN -> [WARN][19]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-ehl-2/igt@gem_softpin@allocator-basic-reserve.html
    - fi-kbl-r:           NOTRUN -> [WARN][20]
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-r/igt@gem_softpin@allocator-basic-reserve.html
    - fi-bdw-5557u:       NOTRUN -> [WARN][21]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-bdw-5557u/igt@gem_softpin@allocator-basic-reserve.html
    - fi-kbl-7500u:       NOTRUN -> [WARN][22]
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-7500u/igt@gem_softpin@allocator-basic-reserve.html
    - fi-kbl-x1275:       NOTRUN -> [WARN][23]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-x1275/igt@gem_softpin@allocator-basic-reserve.html
    - fi-bsw-kefka:       NOTRUN -> [WARN][24]
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-bsw-kefka/igt@gem_softpin@allocator-basic-reserve.html
    - fi-kbl-guc:         NOTRUN -> [WARN][25]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-guc/igt@gem_softpin@allocator-basic-reserve.html
    - {fi-rkl-11500t}:    NOTRUN -> [WARN][26]
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-rkl-11500t/igt@gem_softpin@allocator-basic-reserve.html

  
New tests
---------

  New tests have been introduced between CI_DRM_9778 and IGTPW_5513:

### New IGT tests (2) ###

  * igt@gem_softpin@allocator-basic:
    - Statuses : 27 pass(s) 9 skip(s)
    - Exec time: [0.0, 1.06] s

  * igt@gem_softpin@allocator-basic-reserve:
    - Statuses : 1 dmesg-warn(s) 9 skip(s) 26 warn(s)
    - Exec time: [0.0, 1.16] s

  

Known issues
------------

  Here are the changes found in IGTPW_5513 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_suspend@basic-s3:
    - fi-tgl-y:           [PASS][27] -> [DMESG-WARN][28] ([i915#2411] / [i915#402])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/fi-tgl-y/igt@gem_exec_suspend@basic-s3.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-tgl-y/igt@gem_exec_suspend@basic-s3.html

  * {igt@gem_softpin@allocator-basic} (NEW):
    - fi-byt-j1900:       NOTRUN -> [SKIP][29] ([fdo#109271]) +1 similar issue
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-byt-j1900/igt@gem_softpin@allocator-basic.html
    - fi-pnv-d510:        NOTRUN -> [SKIP][30] ([fdo#109271]) +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-pnv-d510/igt@gem_softpin@allocator-basic.html
    - {fi-hsw-gt1}:       NOTRUN -> [SKIP][31] ([fdo#109271]) +1 similar issue
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-hsw-gt1/igt@gem_softpin@allocator-basic.html
    - fi-snb-2520m:       NOTRUN -> [SKIP][32] ([fdo#109271]) +1 similar issue
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-snb-2520m/igt@gem_softpin@allocator-basic.html
    - fi-ivb-3770:        NOTRUN -> [SKIP][33] ([fdo#109271]) +1 similar issue
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-ivb-3770/igt@gem_softpin@allocator-basic.html
    - fi-snb-2600:        NOTRUN -> [SKIP][34] ([fdo#109271]) +1 similar issue
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-snb-2600/igt@gem_softpin@allocator-basic.html

  * {igt@gem_softpin@allocator-basic-reserve} (NEW):
    - fi-elk-e7500:       NOTRUN -> [SKIP][35] ([fdo#109271]) +1 similar issue
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-elk-e7500/igt@gem_softpin@allocator-basic-reserve.html
    - fi-hsw-4770:        NOTRUN -> [SKIP][36] ([fdo#109271]) +1 similar issue
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-hsw-4770/igt@gem_softpin@allocator-basic-reserve.html
    - fi-ilk-650:         NOTRUN -> [SKIP][37] ([fdo#109271]) +1 similar issue
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-ilk-650/igt@gem_softpin@allocator-basic-reserve.html
    - fi-tgl-y:           NOTRUN -> [DMESG-WARN][38] ([i915#402])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-tgl-y/igt@gem_softpin@allocator-basic-reserve.html

  * igt@i915_module_load@reload:
    - fi-kbl-soraka:      [PASS][39] -> [DMESG-WARN][40] ([i915#1982])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/fi-kbl-soraka/igt@i915_module_load@reload.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-soraka/igt@i915_module_load@reload.html

  * igt@i915_pm_rpm@module-reload:
    - fi-byt-j1900:       [PASS][41] -> [INCOMPLETE][42] ([i915#142] / [i915#2405])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html

  * igt@prime_self_import@basic-with_one_bo:
    - fi-tgl-y:           [PASS][43] -> [DMESG-WARN][44] ([i915#402])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/fi-tgl-y/igt@prime_self_import@basic-with_one_bo.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-tgl-y/igt@prime_self_import@basic-with_one_bo.html

  * igt@runner@aborted:
    - fi-byt-j1900:       NOTRUN -> [FAIL][45] ([i915#1814] / [i915#2505])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-byt-j1900/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@debugfs_test@read_all_entries:
    - fi-tgl-y:           [DMESG-WARN][46] ([i915#402]) -> [PASS][47] +2 similar issues
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/fi-tgl-y/igt@debugfs_test@read_all_entries.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-tgl-y/igt@debugfs_test@read_all_entries.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [FAIL][48] ([i915#1372]) -> [PASS][49]
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#142]: https://gitlab.freedesktop.org/drm/intel/issues/142
  [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2405]: https://gitlab.freedesktop.org/drm/intel/issues/2405
  [i915#2411]: https://gitlab.freedesktop.org/drm/intel/issues/2411
  [i915#2505]: https://gitlab.freedesktop.org/drm/intel/issues/2505
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402


Participating hosts (45 -> 39)
------------------------------

  Missing    (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-dg1-1 fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_6004 -> IGTPW_5513

  CI-20190529: 20190529
  CI_DRM_9778: b5716c4364252ef29432da4bb14c9e4149d2d4ce @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_5513: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/index.html
  IGT_6004: fe9ac2aeffc1828c6d61763a611a44fbd450aa96 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools



== Testlist changes ==

+igt@api_intel_allocator@alloc-simple
+igt@api_intel_allocator@execbuf-with-allocator
+igt@api_intel_allocator@fork-simple-once
+igt@api_intel_allocator@fork-simple-stress
+igt@api_intel_allocator@fork-simple-stress-signal
+igt@api_intel_allocator@print
+igt@api_intel_allocator@random-allocator
+igt@api_intel_allocator@reopen
+igt@api_intel_allocator@reopen-fork
+igt@api_intel_allocator@reserve
+igt@api_intel_allocator@reserve-simple
+igt@api_intel_allocator@reuse
+igt@api_intel_allocator@simple-allocator
+igt@api_intel_allocator@two-level-inception
+igt@api_intel_allocator@two-level-inception-interruptible
+igt@api_intel_bb@add-remove-objects
+igt@api_intel_bb@bb-with-allocator
+igt@api_intel_bb@blit-noreloc-keep-cache-random
+igt@api_intel_bb@blit-noreloc-purge-cache-random
+igt@api_intel_bb@destroy-bb
+igt@api_intel_bb@last-page
+igt@api_intel_bb@object-noreloc-keep-cache-random
+igt@api_intel_bb@object-noreloc-keep-cache-simple
+igt@api_intel_bb@object-noreloc-purge-cache-random
+igt@api_intel_bb@object-noreloc-purge-cache-simple
+igt@api_intel_bb@object-reloc-keep-cache
+igt@api_intel_bb@object-reloc-purge-cache
+igt@api_intel_bb@purge-bb
+igt@api_intel_bb@reset-bb
+igt@gem_softpin@allocator-basic
+igt@gem_softpin@allocator-basic-reserve
+igt@gem_softpin@allocator-fork
+igt@gem_softpin@allocator-nopin
+igt@gem_softpin@allocator-nopin-reserve
-igt@api_intel_bb@check-canonical

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/index.html

[-- Attachment #1.2: Type: text/html, Size: 14801 bytes --]

[-- Attachment #2: Type: text/plain, Size: 154 bytes --]

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [igt-dev] ✗ Fi.CI.IGT: failure for Introduce IGT allocator (rev21)
  2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
                   ` (35 preceding siblings ...)
  2021-02-16 13:21 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev21) Patchwork
@ 2021-02-16 15:03 ` Patchwork
  36 siblings, 0 replies; 38+ messages in thread
From: Patchwork @ 2021-02-16 15:03 UTC (permalink / raw)
  To: Zbigniew Kempczyński; +Cc: igt-dev


[-- Attachment #1.1: Type: text/plain, Size: 30249 bytes --]

== Series Details ==

Series: Introduce IGT allocator (rev21)
URL   : https://patchwork.freedesktop.org/series/82954/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_9778_full -> IGTPW_5513_full
=============================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_5513_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_5513_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_5513_full:

### IGT changes ###

#### Possible regressions ####

  * {igt@api_intel_bb@last-page} (NEW):
    - shard-glk:          NOTRUN -> [FAIL][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk6/igt@api_intel_bb@last-page.html

  * igt@gem_exec_balancer@bonded-sync:
    - shard-kbl:          [PASS][2] -> [FAIL][3]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-kbl7/igt@gem_exec_balancer@bonded-sync.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl2/igt@gem_exec_balancer@bonded-sync.html

  * {igt@gem_softpin@allocator-basic-reserve} (NEW):
    - shard-kbl:          NOTRUN -> [WARN][4] +2 similar issues
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl4/igt@gem_softpin@allocator-basic-reserve.html
    - shard-apl:          NOTRUN -> [WARN][5] +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl7/igt@gem_softpin@allocator-basic-reserve.html
    - shard-glk:          NOTRUN -> [WARN][6] +2 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk1/igt@gem_softpin@allocator-basic-reserve.html

  * {igt@gem_softpin@allocator-fork} (NEW):
    - shard-tglb:         NOTRUN -> [WARN][7] +2 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb6/igt@gem_softpin@allocator-fork.html

  * {igt@gem_softpin@allocator-nopin-reserve} (NEW):
    - shard-iclb:         NOTRUN -> [WARN][8] +2 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb3/igt@gem_softpin@allocator-nopin-reserve.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-hsw:          [PASS][9] -> [DMESG-WARN][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-hsw7/igt@i915_module_load@reload-with-fault-injection.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw2/igt@i915_module_load@reload-with-fault-injection.html

  
New tests
---------

  New tests have been introduced between CI_DRM_9778_full and IGTPW_5513_full:

### New IGT tests (40) ###

  * igt@api_intel_allocator@alloc-simple:
    - Statuses : 5 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@execbuf-with-allocator:
    - Statuses : 5 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_allocator@fork-simple-once:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.04] s

  * igt@api_intel_allocator@fork-simple-stress:
    - Statuses : 6 pass(s)
    - Exec time: [5.41, 5.51] s

  * igt@api_intel_allocator@fork-simple-stress-signal:
    - Statuses : 5 pass(s)
    - Exec time: [5.42, 5.59] s

  * igt@api_intel_allocator@print:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@random-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@random-allocator@basic:
    - Statuses : 6 pass(s)
    - Exec time: [0.0, 0.00] s

  * igt@api_intel_allocator@random-allocator@parallel-one:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@reopen:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@reopen-fork:
    - Statuses : 6 pass(s)
    - Exec time: [3.24, 3.27] s

  * igt@api_intel_allocator@reserve:
    - Statuses : 6 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reserve-simple:
    - Statuses : 4 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@reuse:
    - Statuses : 3 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator:
    - Statuses :
    - Exec time: [None] s

  * igt@api_intel_allocator@simple-allocator@basic:
    - Statuses : 7 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_allocator@simple-allocator@parallel-one:
    - Statuses : 7 pass(s)
    - Exec time: [0.10, 0.23] s

  * igt@api_intel_allocator@simple-allocator@reserve:
    - Statuses : 7 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@simple-allocator@reuse:
    - Statuses : 7 pass(s)
    - Exec time: [0.0] s

  * igt@api_intel_allocator@two-level-inception:
    - Statuses : 7 pass(s)
    - Exec time: [5.41, 5.53] s

  * igt@api_intel_allocator@two-level-inception-interruptible:
    - Statuses : 6 pass(s)
    - Exec time: [5.42, 5.56] s

  * igt@api_intel_bb@add-remove-objects:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.02] s

  * igt@api_intel_bb@bb-with-allocator:
    - Statuses : 4 pass(s) 2 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@blit-noreloc-keep-cache-random:
    - Statuses : 5 pass(s) 2 skip(s)
    - Exec time: [0.0, 0.02] s

  * igt@api_intel_bb@blit-noreloc-purge-cache-random:
    - Statuses : 3 pass(s) 2 skip(s)
    - Exec time: [0.0, 0.02] s

  * igt@api_intel_bb@destroy-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.03] s

  * igt@api_intel_bb@last-page:
    - Statuses : 1 fail(s) 2 pass(s) 1 skip(s)
    - Exec time: [0.0, 7.02] s

  * igt@api_intel_bb@object-noreloc-keep-cache-random:
    - Statuses : 4 pass(s) 2 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@object-noreloc-keep-cache-simple:
    - Statuses : 3 pass(s) 1 skip(s)
    - Exec time: [0.0, 0.01] s

  * igt@api_intel_bb@object-noreloc-purge-cache-random:
    - Statuses : 4 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@object-noreloc-purge-cache-simple:
    - Statuses : 2 skip(s)
    - Exec time: [0.0] s

  * igt@api_intel_bb@object-reloc-keep-cache:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.02] s

  * igt@api_intel_bb@object-reloc-purge-cache:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@api_intel_bb@purge-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@api_intel_bb@reset-bb:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@gem_softpin@allocator-basic:
    - Statuses : 4 pass(s) 2 skip(s)
    - Exec time: [0.0, 0.29] s

  * igt@gem_softpin@allocator-basic-reserve:
    - Statuses : 1 skip(s) 5 warn(s)
    - Exec time: [0.0, 0.30] s

  * igt@gem_softpin@allocator-fork:
    - Statuses : 2 skip(s) 4 warn(s)
    - Exec time: [0.0, 2.36] s

  * igt@gem_softpin@allocator-nopin:
    - Statuses : 3 pass(s) 2 skip(s)
    - Exec time: [0.0, 0.30] s

  * igt@gem_softpin@allocator-nopin-reserve:
    - Statuses : 2 skip(s) 5 warn(s)
    - Exec time: [0.0, 0.29] s

  

Known issues
------------

  Here are the changes found in IGTPW_5513_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@blit-noreloc-keep-cache:
    - shard-hsw:          [PASS][11] -> [SKIP][12] ([fdo#109271]) +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-hsw6/igt@api_intel_bb@blit-noreloc-keep-cache.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw6/igt@api_intel_bb@blit-noreloc-keep-cache.html

  * igt@api_intel_bb@blit-noreloc-purge-cache:
    - shard-snb:          [PASS][13] -> [SKIP][14] ([fdo#109271])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-snb2/igt@api_intel_bb@blit-noreloc-purge-cache.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-snb5/igt@api_intel_bb@blit-noreloc-purge-cache.html

  * igt@gem_create@create-massive:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][15] ([i915#3002])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl1/igt@gem_create@create-massive.html

  * igt@gem_ctx_persistence@smoketest:
    - shard-snb:          NOTRUN -> [SKIP][16] ([fdo#109271] / [i915#1099]) +8 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-snb6/igt@gem_ctx_persistence@smoketest.html
    - shard-hsw:          NOTRUN -> [SKIP][17] ([fdo#109271] / [i915#1099])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw4/igt@gem_ctx_persistence@smoketest.html

  * igt@gem_eio@in-flight-contexts-immediate:
    - shard-tglb:         NOTRUN -> [TIMEOUT][18] ([i915#3063])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb7/igt@gem_eio@in-flight-contexts-immediate.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglb:         [PASS][19] -> [FAIL][20] ([i915#2842]) +1 similar issue
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-tglb6/igt@gem_exec_fair@basic-flow@rcs0.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb1/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [PASS][21] -> [FAIL][22] ([i915#2842]) +3 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-glk3/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk3/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@bcs0:
    - shard-iclb:         [PASS][23] -> [FAIL][24] ([i915#2842])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-iclb2/igt@gem_exec_fair@basic-pace@bcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb2/igt@gem_exec_fair@basic-pace@bcs0.html

  * igt@gem_exec_reloc@basic-many-active@rcs0:
    - shard-snb:          NOTRUN -> [FAIL][25] ([i915#2389]) +2 similar issues
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-snb6/igt@gem_exec_reloc@basic-many-active@rcs0.html

  * igt@gem_exec_reloc@basic-many-active@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][26] ([i915#2389])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb2/igt@gem_exec_reloc@basic-many-active@vcs1.html

  * igt@gem_exec_reloc@basic-wide-active@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][27] ([i915#2389]) +4 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl4/igt@gem_exec_reloc@basic-wide-active@rcs0.html

  * igt@gem_exec_whisper@basic-contexts:
    - shard-glk:          [PASS][28] -> [DMESG-WARN][29] ([i915#118] / [i915#95])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-glk3/igt@gem_exec_whisper@basic-contexts.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk8/igt@gem_exec_whisper@basic-contexts.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-iclb:         NOTRUN -> [WARN][30] ([i915#2658])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb6/igt@gem_pwrite@basic-exhaustion.html
    - shard-tglb:         NOTRUN -> [WARN][31] ([i915#2658])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb5/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_render_copy@yf-tiled-mc-ccs-to-vebox-y-tiled:
    - shard-glk:          NOTRUN -> [SKIP][32] ([fdo#109271]) +30 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk8/igt@gem_render_copy@yf-tiled-mc-ccs-to-vebox-y-tiled.html
    - shard-iclb:         NOTRUN -> [SKIP][33] ([i915#768]) +1 similar issue
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb6/igt@gem_render_copy@yf-tiled-mc-ccs-to-vebox-y-tiled.html

  * igt@gem_userptr_blits@input-checking:
    - shard-apl:          NOTRUN -> [DMESG-WARN][34] ([i915#3002])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl2/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@process-exit-mmap@gtt:
    - shard-kbl:          NOTRUN -> [SKIP][35] ([fdo#109271] / [i915#1699]) +3 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl6/igt@gem_userptr_blits@process-exit-mmap@gtt.html

  * igt@gem_userptr_blits@process-exit-mmap@wb:
    - shard-apl:          NOTRUN -> [SKIP][36] ([fdo#109271] / [i915#1699]) +3 similar issues
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl6/igt@gem_userptr_blits@process-exit-mmap@wb.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-snb:          NOTRUN -> [FAIL][37] ([i915#2724])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-snb6/igt@gem_userptr_blits@vma-merge.html

  * igt@gen9_exec_parse@basic-rejected-ctx-param:
    - shard-tglb:         NOTRUN -> [SKIP][38] ([fdo#112306])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb2/igt@gen9_exec_parse@basic-rejected-ctx-param.html
    - shard-iclb:         NOTRUN -> [SKIP][39] ([fdo#112306])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb5/igt@gen9_exec_parse@basic-rejected-ctx-param.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [PASS][40] -> [FAIL][41] ([i915#454])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-iclb5/igt@i915_pm_dc@dc6-psr.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb6/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_selftest@live@gt_lrc:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][42] ([i915#2373])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb6/igt@i915_selftest@live@gt_lrc.html

  * igt@i915_selftest@live@gt_pm:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][43] ([i915#1759] / [i915#2291])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb6/igt@i915_selftest@live@gt_pm.html

  * igt@i915_suspend@forcewake:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][44] ([i915#180]) +1 similar issue
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl4/igt@i915_suspend@forcewake.html

  * igt@kms_big_fb@x-tiled-8bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][45] ([fdo#111614]) +1 similar issue
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb1/igt@kms_big_fb@x-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-90:
    - shard-iclb:         NOTRUN -> [SKIP][46] ([fdo#110725] / [fdo#111614]) +1 similar issue
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb2/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-270:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([fdo#111615]) +1 similar issue
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb5/igt@kms_big_fb@yf-tiled-64bpp-rotate-270.html
    - shard-iclb:         NOTRUN -> [SKIP][48] ([fdo#110723])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb6/igt@kms_big_fb@yf-tiled-64bpp-rotate-270.html

  * igt@kms_chamelium@dp-mode-timings:
    - shard-apl:          NOTRUN -> [SKIP][49] ([fdo#109271] / [fdo#111827]) +22 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl3/igt@kms_chamelium@dp-mode-timings.html

  * igt@kms_chamelium@vga-hpd:
    - shard-tglb:         NOTRUN -> [SKIP][50] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb6/igt@kms_chamelium@vga-hpd.html
    - shard-glk:          NOTRUN -> [SKIP][51] ([fdo#109271] / [fdo#111827]) +1 similar issue
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk6/igt@kms_chamelium@vga-hpd.html
    - shard-hsw:          NOTRUN -> [SKIP][52] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw2/igt@kms_chamelium@vga-hpd.html

  * igt@kms_chamelium@vga-hpd-for-each-pipe:
    - shard-kbl:          NOTRUN -> [SKIP][53] ([fdo#109271] / [fdo#111827]) +27 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl2/igt@kms_chamelium@vga-hpd-for-each-pipe.html

  * igt@kms_color@pipe-d-ctm-max:
    - shard-iclb:         NOTRUN -> [SKIP][54] ([fdo#109278] / [i915#1149])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb8/igt@kms_color@pipe-d-ctm-max.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-25:
    - shard-snb:          NOTRUN -> [SKIP][55] ([fdo#109271] / [fdo#111827]) +23 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-snb2/igt@kms_color_chamelium@pipe-a-ctm-0-25.html

  * igt@kms_color_chamelium@pipe-b-ctm-red-to-blue:
    - shard-iclb:         NOTRUN -> [SKIP][56] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb4/igt@kms_color_chamelium@pipe-b-ctm-red-to-blue.html

  * igt@kms_content_protection@atomic:
    - shard-kbl:          NOTRUN -> [TIMEOUT][57] ([i915#1319])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl7/igt@kms_content_protection@atomic.html
    - shard-apl:          NOTRUN -> [TIMEOUT][58] ([i915#1319])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl2/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@uevent:
    - shard-kbl:          NOTRUN -> [FAIL][59] ([i915#2105])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl2/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x512-rapid-movement:
    - shard-iclb:         NOTRUN -> [SKIP][60] ([fdo#109278] / [fdo#109279]) +2 similar issues
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb8/igt@kms_cursor_crc@pipe-b-cursor-512x512-rapid-movement.html

  * igt@kms_cursor_crc@pipe-c-cursor-512x170-offscreen:
    - shard-tglb:         NOTRUN -> [SKIP][61] ([fdo#109279]) +2 similar issues
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb5/igt@kms_cursor_crc@pipe-c-cursor-512x170-offscreen.html

  * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
    - shard-iclb:         NOTRUN -> [SKIP][62] ([fdo#109274] / [fdo#109278]) +1 similar issue
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb5/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html

  * igt@kms_cursor_legacy@pipe-d-single-bo:
    - shard-kbl:          NOTRUN -> [SKIP][63] ([fdo#109271] / [i915#533]) +1 similar issue
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl7/igt@kms_cursor_legacy@pipe-d-single-bo.html

  * igt@kms_cursor_legacy@pipe-d-single-move:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([fdo#109278]) +9 similar issues
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb3/igt@kms_cursor_legacy@pipe-d-single-move.html

  * igt@kms_flip@2x-flip-vs-rmfb-interruptible:
    - shard-iclb:         NOTRUN -> [SKIP][65] ([fdo#109274]) +1 similar issue
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb4/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html

  * igt@kms_flip@modeset-vs-vblank-race@c-hdmi-a2:
    - shard-glk:          [PASS][66] -> [FAIL][67] ([i915#407])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-glk3/igt@kms_flip@modeset-vs-vblank-race@c-hdmi-a2.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk6/igt@kms_flip@modeset-vs-vblank-race@c-hdmi-a2.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs:
    - shard-apl:          NOTRUN -> [FAIL][68] ([i915#2641])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl3/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt:
    - shard-tglb:         NOTRUN -> [SKIP][69] ([fdo#111825]) +14 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt:
    - shard-iclb:         NOTRUN -> [SKIP][70] ([fdo#109280]) +10 similar issues
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt:
    - shard-apl:          NOTRUN -> [SKIP][71] ([fdo#109271]) +208 similar issues
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-wc:
    - shard-hsw:          NOTRUN -> [SKIP][72] ([fdo#109271]) +33 similar issues
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw1/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-wc:
    - shard-kbl:          NOTRUN -> [SKIP][73] ([fdo#109271]) +206 similar issues
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl4/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-wc.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-kbl:          [PASS][74] -> [DMESG-WARN][75] ([i915#180]) +2 similar issues
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-kbl6/igt@kms_hdr@bpc-switch-suspend.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl2/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence:
    - shard-apl:          NOTRUN -> [SKIP][76] ([fdo#109271] / [i915#533])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl7/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
    - shard-apl:          NOTRUN -> [FAIL][77] ([fdo#108145] / [i915#265]) +2 similar issues
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl8/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb:
    - shard-kbl:          NOTRUN -> [FAIL][78] ([i915#265])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl7/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb:
    - shard-glk:          NOTRUN -> [FAIL][79] ([fdo#108145] / [i915#265]) +1 similar issue
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk1/igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb:
    - shard-kbl:          NOTRUN -> [FAIL][80] ([fdo#108145] / [i915#265]) +2 similar issues
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl1/igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
    - shard-apl:          NOTRUN -> [SKIP][81] ([fdo#109271] / [i915#2733])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl7/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4:
    - shard-apl:          NOTRUN -> [SKIP][82] ([fdo#109271] / [i915#658]) +4 similar issues
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-5:
    - shard-kbl:          NOTRUN -> [SKIP][83] ([fdo#109271] / [i915#658]) +5 similar issues
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl1/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-5.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [PASS][84] -> [SKIP][85] ([fdo#109441]) +2 similar issues
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-iclb2/igt@kms_psr@psr2_no_drrs.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb5/igt@kms_psr@psr2_no_drrs.html

  * igt@kms_psr@psr2_primary_mmap_gtt:
    - shard-hsw:          NOTRUN -> [SKIP][86] ([fdo#109271] / [i915#1072])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw4/igt@kms_psr@psr2_primary_mmap_gtt.html
    - shard-iclb:         NOTRUN -> [SKIP][87] ([fdo#109441])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb4/igt@kms_psr@psr2_primary_mmap_gtt.html

  * igt@kms_sysfs_edid_timing:
    - shard-apl:          NOTRUN -> [FAIL][88] ([IGT#2])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl1/igt@kms_sysfs_edid_timing.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - shard-kbl:          [PASS][89] -> [DMESG-WARN][90] ([i915#180] / [i915#295])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-kbl2/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl2/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  * igt@kms_vblank@pipe-d-query-forked-hang:
    - shard-snb:          NOTRUN -> [SKIP][91] ([fdo#109271]) +347 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-snb7/igt@kms_vblank@pipe-d-query-forked-hang.html

  * igt@kms_vblank@pipe-d-wait-forked-busy:
    - shard-hsw:          NOTRUN -> [SKIP][92] ([fdo#109271] / [i915#533]) +6 similar issues
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw7/igt@kms_vblank@pipe-d-wait-forked-busy.html

  * igt@kms_writeback@writeback-check-output:
    - shard-apl:          NOTRUN -> [SKIP][93] ([fdo#109271] / [i915#2437]) +1 similar issue
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl2/igt@kms_writeback@writeback-check-output.html

  * igt@nouveau_crc@pipe-a-source-outp-inactive:
    - shard-iclb:         NOTRUN -> [SKIP][94] ([i915#2530]) +1 similar issue
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb6/igt@nouveau_crc@pipe-a-source-outp-inactive.html
    - shard-tglb:         NOTRUN -> [SKIP][95] ([i915#2530]) +1 similar issue
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb5/igt@nouveau_crc@pipe-a-source-outp-inactive.html

  * igt@sysfs_clients@busy@bcs0:
    - shard-kbl:          NOTRUN -> [FAIL][96] ([i915#3009])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl1/igt@sysfs_clients@busy@bcs0.html

  * igt@sysfs_clients@recycle:
    - shard-hsw:          [PASS][97] -> [FAIL][98] ([i915#3028])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-hsw4/igt@sysfs_clients@recycle.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw4/igt@sysfs_clients@recycle.html

  * igt@sysfs_clients@sema-10@vcs0:
    - shard-kbl:          NOTRUN -> [SKIP][99] ([fdo#109271] / [i915#3026]) +3 similar issues
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-kbl2/igt@sysfs_clients@sema-10@vcs0.html

  * igt@tools_test@sysfs_l3_parity:
    - shard-iclb:         NOTRUN -> [SKIP][100] ([fdo#109307])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb2/igt@tools_test@sysfs_l3_parity.html
    - shard-tglb:         NOTRUN -> [SKIP][101] ([fdo#109307])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb6/igt@tools_test@sysfs_l3_parity.html

  
#### Possible fixes ####

  * igt@gem_ctx_shared@q-smoketest-all:
    - shard-glk:          [DMESG-WARN][102] ([i915#118] / [i915#95]) -> [PASS][103] +2 similar issues
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-glk9/igt@gem_ctx_shared@q-smoketest-all.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk4/igt@gem_ctx_shared@q-smoketest-all.html

  * igt@gem_exec_balancer@hang:
    - shard-iclb:         [INCOMPLETE][104] ([i915#1895] / [i915#2295] / [i915#3031]) -> [PASS][105]
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-iclb1/igt@gem_exec_balancer@hang.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-iclb8/igt@gem_exec_balancer@hang.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-glk:          [FAIL][106] ([i915#2842]) -> [PASS][107]
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-glk7/igt@gem_exec_fair@basic-throttle@rcs0.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk8/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_exec_reloc@basic-many-active@rcs0:
    - shard-glk:          [FAIL][108] ([i915#2389]) -> [PASS][109]
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-glk8/igt@gem_exec_reloc@basic-many-active@rcs0.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-glk6/igt@gem_exec_reloc@basic-many-active@rcs0.html

  * igt@kms_async_flips@test-time-stamp:
    - shard-tglb:         [FAIL][110] ([i915#2597]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-tglb8/igt@kms_async_flips@test-time-stamp.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-tglb7/igt@kms_async_flips@test-time-stamp.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-apl:          [DMESG-WARN][112] ([i915#180]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-apl2/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-apl6/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_cursor_legacy@cursor-vs-flip-varying-size:
    - shard-hsw:          [FAIL][114] ([i915#2370]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9778/shard-hsw1/igt@kms_cursor_legacy@cursor-vs-flip-varying-size.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/shard-hsw8/igt@kms_cursor_legacy@cursor-vs-flip-varying-size.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-tglb:         [FAIL][116] ([i91

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5513/index.html

Thread overview: 38+ messages (an illustrative allocator usage sketch follows the index)
2021-02-16 11:39 [igt-dev] [PATCH i-g-t 00/35] Introduce IGT allocator Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 01/35] lib/gem_submission: Add gem_has_relocations() check Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 02/35] lib/igt_list: Add igt_list_del_init() Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 03/35] lib/igt_list: igt_hlist implementation Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 04/35] lib/igt_map: Introduce igt_map Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 05/35] lib/igt_core: Track child process pid and tid Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 06/35] lib/intel_allocator_simple: Add simple allocator Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 07/35] lib/intel_allocator_random: Add random allocator Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 08/35] lib/intel_allocator: Add intel_allocator core Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 09/35] lib/intel_allocator: Try to stop smoothly instead of deinit Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 10/35] lib/intel_allocator_msgchannel: Scale to 4k of parallel clients Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 11/35] lib/intel_allocator: Separate allocator multiprocess start Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 12/35] lib/intel_bufops: Change size from 32->64 bit Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 13/35] lib/intel_bufops: Add init with handle and size function Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 14/35] lib/intel_batchbuffer: Integrate intel_bb with allocator Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 15/35] lib/intel_batchbuffer: Use relocations in intel-bb up to gen12 Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 16/35] lib/intel_batchbuffer: Create bb with strategy / vm ranges Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 17/35] lib/intel_batchbuffer: Add tracking intel_buf to intel_bb Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 18/35] lib/igt_fb: Initialize intel_buf with same size as fb Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 19/35] tests/api_intel_bb: Modify test to verify intel_bb with allocator Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 20/35] tests/api_intel_bb: Add subtest to check render batch on the last page Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 21/35] tests/api_intel_bb: Add compressed->compressed copy Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 22/35] tests/api_intel_bb: Add purge-bb test Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 23/35] tests/api_intel_bb: Remove check-canonical test Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 24/35] tests/api_intel_bb: Add simple intel-bb which uses allocator Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 25/35] tests/api_intel_bb: Use allocator in delta-check test Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 26/35] tests/api_intel_allocator: Simple allocator test suite Zbigniew Kempczyński
2021-02-16 11:39 ` [igt-dev] [PATCH i-g-t 27/35] tests/api_intel_allocator: Prepare to run with sanitizer Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 28/35] tests/api_intel_allocator: Add execbuf with allocator example Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 29/35] tests/gem_softpin: Verify allocator and execbuf pair work together Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 30/35] tests/gem|kms: Remove intel_bb from fixture Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 31/35] tests/gem_mmap_offset: Use intel_buf wrapper code instead direct Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 32/35] tests/gem_ppgtt: Adopt test to use intel_bb with allocator Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 33/35] tests/gem_render_copy_redux: Adopt to use with intel_bb and allocator Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 34/35] tests/perf.c: Remove buffer from batch Zbigniew Kempczyński
2021-02-16 11:40 ` [igt-dev] [PATCH i-g-t 35/35] tests/gem_linear_blits: Use intel allocator Zbigniew Kempczyński
2021-02-16 13:21 ` [igt-dev] ✓ Fi.CI.BAT: success for Introduce IGT allocator (rev21) Patchwork
2021-02-16 15:03 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
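
As a rough illustration of the allocator flow the patches indexed above
introduce (patch 28 adds a real execbuf-with-allocator example in-tree),
here is a minimal sketch. The call names mirror the patch titles, but the
exact signatures and the INTEL_ALLOCATOR_SIMPLE strategy constant are
assumptions and may differ in the final revision:

	/* Sketch only: names and signatures assumed from the patch titles. */
	#include "igt.h"
	#include "intel_allocator.h"

	static void softpin_with_allocator(int fd)
	{
		uint32_t handle = gem_create(fd, 4096);
		uint64_t ahnd, offset;

		/* One allocator instance per (fd, ctx); the simple strategy
		 * hands out non-overlapping offsets from the ppgtt range. */
		ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);

		/* Obtain a GPU virtual address for the object instead of
		 * relying on kernel relocations. */
		offset = intel_allocator_alloc(ahnd, handle, 4096, 0);

		/* ... execbuf with EXEC_OBJECT_PINNED and obj.offset = offset ... */

		intel_allocator_free(ahnd, handle);
		intel_allocator_close(ahnd);
		gem_close(fd, handle);
	}

Tests that fork children would additionally bracket the run with the
multiprocess start/stop calls that patch 11 separates out.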
