* struct drm_mm fixes
@ 2016-12-12 11:53 Chris Wilson
  2016-12-12 11:53 ` [PATCH 01/34] drm/i915: Use the MRU stack search after evicting Chris Wilson
                   ` (34 more replies)
  0 siblings, 35 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

For a long time, we've known DRM_MM_SEARCH_HIGH/CREATE_TOP to be slow and
broken (returning neither the right hole nor the right alignment). This
series ends by fixing that, but first takes a quick detour through adding a
set of testcases to catch the broken behaviour (and hopefully keep the
working behaviour working), and stops along the way to tighten eviction.

I've folded in most of Joonas's comments on the selftests, but didn't
apply his r-b as they have substantially changed (mostly layout, splitting
out of library functions and what-not; the fundamental test logic/efficacy
should be unchanged, hopefully).
-Chris

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* [PATCH 01/34] drm/i915: Use the MRU stack search after evicting
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13  9:29   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 02/34] drm/i915: Simplify i915_gtt_color_adjust() Chris Wilson
                   ` (33 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

When we evict from the GTT to make room for an object, the hole we
create is put onto the MRU stack inside the drm_mm range manager. On the
next search pass, we can speed up a PIN_HIGH allocation by referencing
that stack for the new hole.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_vma.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 37c3eebe8316..ca62a3371d94 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -442,8 +442,10 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 						       obj->cache_level,
 						       start, end,
 						       flags);
-			if (ret == 0)
+			if (ret == 0) {
+				search_flag = DRM_MM_SEARCH_DEFAULT;
 				goto search_free;
+			}
 
 			goto err_unpin;
 		}
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 02/34] drm/i915: Simplify i915_gtt_color_adjust()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
  2016-12-12 11:53 ` [PATCH 01/34] drm/i915: Use the MRU stack search after evicting Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13  9:32   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 03/34] drm: Add drm_mm_for_each_node_safe() Chris Wilson
                   ` (32 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

If we remember that node_list is a circular list containing the fake
head_node, we can use a simple list_next_entry() and drop the NULL check;
testing node->allocated is already enough to filter out the head_node.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ef00d36680c9..b5d4a6357d8a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2723,10 +2723,8 @@ static void i915_gtt_color_adjust(struct drm_mm_node *node,
 	if (node->color != color)
 		*start += 4096;
 
-	node = list_first_entry_or_null(&node->node_list,
-					struct drm_mm_node,
-					node_list);
-	if (node && node->allocated && node->color != color)
+	node = list_next_entry(node, node_list);
+	if (node->allocated && node->color != color)
 		*end -= 4096;
 }
 
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 03/34] drm: Add drm_mm_for_each_node_safe()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
  2016-12-12 11:53 ` [PATCH 01/34] drm/i915: Use the MRU stack search after evicting Chris Wilson
  2016-12-12 11:53 ` [PATCH 02/34] drm/i915: Simplify i915_gtt_color_adjust() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13  9:35   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm) Chris Wilson
                   ` (31 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

A complement to drm_mm_for_each_node(): wrap list_for_each_entry_safe() so
that the list of nodes can be walked whilst remaining safe against removal
of the current element.
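
For illustration, the expected usage is a teardown loop along these lines (a
sketch only; "mm" stands for the caller's struct drm_mm and each node is
assumed to have been kmalloc'ed by the caller):

	struct drm_mm_node *node, *next;

	drm_mm_for_each_node_safe(node, next, &mm) {
		drm_mm_remove_node(node);
		kfree(node);
	}

With plain drm_mm_for_each_node() the removal would invalidate the iterator.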

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c |  9 ++++-----
 include/drm/drm_mm.h     | 19 ++++++++++++++++---
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index ca1e344f318d..6e0735539545 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -138,7 +138,7 @@ static void show_leaks(struct drm_mm *mm)
 	if (!buf)
 		return;
 
-	list_for_each_entry(node, &mm->head_node.node_list, node_list) {
+	list_for_each_entry(node, __drm_mm_nodes(mm), node_list) {
 		struct stack_trace trace = {
 			.entries = entries,
 			.max_entries = STACKDEPTH
@@ -320,8 +320,7 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
 		if (hole->start < end)
 			return -ENOSPC;
 	} else {
-		hole = list_entry(&mm->head_node.node_list,
-				  typeof(*hole), node_list);
+		hole = list_entry(__drm_mm_nodes(mm), typeof(*hole), node_list);
 	}
 
 	hole = list_last_entry(&hole->node_list, typeof(*hole), node_list);
@@ -884,7 +883,7 @@ EXPORT_SYMBOL(drm_mm_scan_remove_block);
  */
 bool drm_mm_clean(struct drm_mm * mm)
 {
-	struct list_head *head = &mm->head_node.node_list;
+	struct list_head *head = __drm_mm_nodes(mm);
 
 	return (head->next->next == head);
 }
@@ -930,7 +929,7 @@ EXPORT_SYMBOL(drm_mm_init);
  */
 void drm_mm_takedown(struct drm_mm *mm)
 {
-	if (WARN(!list_empty(&mm->head_node.node_list),
+	if (WARN(!list_empty(__drm_mm_nodes(mm)),
 		 "Memory manager not clean during takedown.\n"))
 		show_leaks(mm);
 
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 0b8371795aeb..8faa28ad97b3 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -179,6 +179,8 @@ static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node)
 	return __drm_mm_hole_node_end(hole_node);
 }
 
+#define __drm_mm_nodes(mm) (&(mm)->head_node.node_list)
+
 /**
  * drm_mm_for_each_node - iterator to walk over all allocated nodes
  * @entry: drm_mm_node structure to assign to in each iteration step
@@ -187,9 +189,20 @@ static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node)
  * This iterator walks over all nodes in the range allocator. It is implemented
  * with list_for_each, so not save against removal of elements.
  */
-#define drm_mm_for_each_node(entry, mm) list_for_each_entry(entry, \
-						&(mm)->head_node.node_list, \
-						node_list)
+#define drm_mm_for_each_node(entry, mm) \
+	list_for_each_entry(entry, __drm_mm_nodes(mm), node_list)
+
+/**
+ * drm_mm_for_each_node_safe - iterator to walk over all allocated nodes
+ * @entry: drm_mm_node structure to assign to in each iteration step
+ * @next: drm_mm_node structure to store the next step
+ * @mm: drm_mm allocator to walk
+ *
+ * This iterator walks over all nodes in the range allocator. It is implemented
+ * with list_for_each_safe, so it is safe against removal of elements.
+ */
+#define drm_mm_for_each_node_safe(entry, n, mm) \
+	list_for_each_entry_safe(entry, n, __drm_mm_nodes(mm), node_list)
 
 #define __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, backwards) \
 	for (entry = list_entry((backwards) ? (mm)->hole_stack.prev : (mm)->hole_stack.next, struct drm_mm_node, hole_stack); \
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm)
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (2 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 03/34] drm: Add drm_mm_for_each_node_safe() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13  9:58   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 05/34] drm: kselftest for drm_mm_init() Chris Wilson
                   ` (30 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

First we introduce a smattering of infrastructure for writing selftests.
The idea is that we have a test module that exercises a particular portion
of the exported API, and that the module provides a set of tests that can
either be run as an ensemble via kselftest or individually via an igt
harness (in this case igt/drm_mm). To allow individual tests to be
selected, we export a boolean module parameter for each test; that
parameter is hidden inside a bunch of reusable boilerplate macros to keep
writing the tests simple.
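
As an illustrative sketch (igt_mytest is a hypothetical example, not part of
this patch), adding a new test then only needs two things:

	/* in drm_mm_selftests.h */
	selftest(mytest, igt_mytest)

	/* in test-drm_mm.c: return 0 on success, a negative errno on failure */
	static int igt_mytest(void *ignored)
	{
		return 0;
	}

after which it can be run on its own with, e.g.,
"modprobe test-drm_mm igt__mytest=1".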

Testcase: igt/drm_mm
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/Kconfig                       |  13 ++++
 drivers/gpu/drm/Makefile                      |   2 +
 drivers/gpu/drm/selftests/drm_mm_selftests.h  |   8 ++
 drivers/gpu/drm/selftests/drm_selftest.c      | 104 ++++++++++++++++++++++++++
 drivers/gpu/drm/selftests/drm_selftest.h      |  41 ++++++++++
 drivers/gpu/drm/selftests/test-drm_mm.c       |  47 ++++++++++++
 tools/testing/selftests/drivers/gpu/drm_mm.sh |  15 ++++
 7 files changed, 230 insertions(+)
 create mode 100644 drivers/gpu/drm/selftests/drm_mm_selftests.h
 create mode 100644 drivers/gpu/drm/selftests/drm_selftest.c
 create mode 100644 drivers/gpu/drm/selftests/drm_selftest.h
 create mode 100644 drivers/gpu/drm/selftests/test-drm_mm.c
 create mode 100644 tools/testing/selftests/drivers/gpu/drm_mm.sh

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ebfe8404c25f..fd341ab69c46 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -48,6 +48,19 @@ config DRM_DEBUG_MM
 
 	  If in doubt, say "N".
 
+config DRM_DEBUG_MM_SELFTEST
+	tristate "kselftests for DRM range manager (struct drm_mm)"
+	depends on DRM
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option provides a kernel module that can be used to test
+	  the DRM range manager (drm_mm) and its API. This option is not
+	  useful for distributions or general kernels, but only for kernel
+	  developers working on DRM and associated drivers.
+
+	  Say N if you are unsure
+
 config DRM_KMS_HELPER
 	tristate
 	depends on DRM
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index b9ae4280de9d..c8aed3688b20 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -18,6 +18,8 @@ drm-y       :=	drm_auth.o drm_bufs.o drm_cache.o \
 		drm_plane.o drm_color_mgmt.o drm_print.o \
 		drm_dumb_buffers.o drm_mode_config.o
 
+obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/test-drm_mm.o
+
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
 drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
 drm-$(CONFIG_PCI) += ati_pcigart.o
diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
new file mode 100644
index 000000000000..0a2e98a33ba0
--- /dev/null
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -0,0 +1,8 @@
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as igt__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * Tests are executed in reverse order by igt/drm_mm
+ */
+selftest(sanitycheck, igt_sanitycheck) /* keep last */
diff --git a/drivers/gpu/drm/selftests/drm_selftest.c b/drivers/gpu/drm/selftests/drm_selftest.c
new file mode 100644
index 000000000000..8e9ac3c16da4
--- /dev/null
+++ b/drivers/gpu/drm/selftests/drm_selftest.c
@@ -0,0 +1,104 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#define selftest(name, func) __idx_##name,
+enum {
+#include TESTS
+};
+#undef selftest
+
+#define selftest(n, f) [__idx_##n] = { .name = #n, .func = f },
+static struct drm_selftest {
+	bool enabled;
+	const char *name;
+	int (*func)(void *);
+} selftests[] = {
+#include TESTS
+};
+#undef selftest
+
+#define selftest(n, func) \
+module_param_named(igt__##n, selftests[__idx_##n].enabled, bool, 0400);
+#include TESTS
+#undef selftest
+
+static void set_default_test_all(struct drm_selftest *st, unsigned long count)
+{
+	unsigned long i;
+
+	for (i = 0; i < count; i++)
+		if (st[i].enabled)
+			return;
+
+	for (i = 0; i < count; i++)
+		st[i].enabled = true;
+}
+
+static int run_selftests(struct drm_selftest *st,
+			 unsigned long count,
+			 void *data)
+{
+	int err = 0;
+
+	set_default_test_all(st, count);
+
+	/* Tests are listed in reverse order in drm_*_selftests.h */
+	for (st += count - 1; count--; st--) {
+		if (!st->enabled)
+			continue;
+
+		pr_debug("drm: Running %s\n", st->name);
+		err = st->func(data);
+		if (err)
+			break;
+	}
+
+	if (WARN(err > 0 || err == -ENOTTY,
+		 "%s returned %d, conflicting with selftest's magic values!\n",
+		 st->name, err))
+		err = -1;
+
+	rcu_barrier();
+	return err;
+}
+
+static int __maybe_unused
+__drm_subtests(const char *caller,
+	       const struct drm_subtest *st,
+	       int count,
+	       void *data)
+{
+	int err;
+
+	for (; count--; st++) {
+		pr_debug("Running %s/%s\n", caller, st->name);
+		err = st->func(data);
+		if (err) {
+			pr_err("%s: %s failed with error %d\n",
+			       caller, st->name, err);
+			return err;
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/selftests/drm_selftest.h b/drivers/gpu/drm/selftests/drm_selftest.h
new file mode 100644
index 000000000000..c784ec02ff53
--- /dev/null
+++ b/drivers/gpu/drm/selftests/drm_selftest.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef __DRM_SELFTEST_H__
+#define __DRM_SELFTEST_H__
+
+struct drm_subtest {
+	int (*func)(void *data);
+	const char *name;
+};
+
+static int __drm_subtests(const char *caller,
+			  const struct drm_subtest *st,
+			  int count,
+			  void *data);
+#define drm_subtests(T, data) \
+	__drm_subtests(__func__, T, ARRAY_SIZE(T), data)
+
+#define SUBTEST(x) { x, #x }
+
+#endif /* __DRM_SELFTEST_H__ */
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
new file mode 100644
index 000000000000..b06590c860e2
--- /dev/null
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -0,0 +1,47 @@
+/*
+ * Test cases for the drm_mm range manager
+ */
+
+#define pr_fmt(fmt) "drm_mm: " fmt
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/vmalloc.h>
+
+#include <drm/drm_mm.h>
+
+#define TESTS "drm_mm_selftests.h"
+#include "drm_selftest.h"
+
+static unsigned int random_seed = 0x12345678;
+
+static int igt_sanitycheck(void *ignored)
+{
+	pr_info("%s - ok!\n", __func__);
+	return 0;
+}
+
+#include "drm_selftest.c"
+
+static int __init test_drm_mm_init(void)
+{
+	int err;
+
+	pr_info("Testing DRM range manager (struct drm_mm)\n");
+	err = run_selftests(selftests, ARRAY_SIZE(selftests), NULL);
+
+	return err > 0 ? 0 : err;
+}
+
+static void __exit test_drm_mm_exit(void)
+{
+}
+
+module_init(test_drm_mm_init);
+module_exit(test_drm_mm_exit);
+
+module_param(random_seed, uint, 0400);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/drivers/gpu/drm_mm.sh b/tools/testing/selftests/drivers/gpu/drm_mm.sh
new file mode 100644
index 000000000000..96dd55c92799
--- /dev/null
+++ b/tools/testing/selftests/drivers/gpu/drm_mm.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+# Runs API tests for struct drm_mm (DRM range manager)
+
+if ! /sbin/modprobe -n -q test-drm_mm; then
+       echo "drivers/gpu/drm_mm: [skip]"
+       exit 77
+fi
+
+if /sbin/modprobe -q test-drm_mm; then
+       /sbin/modprobe -q -r test-drm_mm
+       echo "drivers/gpu/drm_mm: ok"
+else
+       echo "drivers/gpu/drm_mm: [FAIL]"
+       exit 1
+fi
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 05/34] drm: kselftest for drm_mm_init()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (3 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm) Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 10:16   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 06/34] drm: Add a simple linear congruent generator PRNG Chris Wilson
                   ` (29 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Simple first test to just exercise initialisation of struct drm_mm.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |  2 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 83 ++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 0a2e98a33ba0..fddf2c01c97f 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,4 +5,6 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(debug, igt_debug)
+selftest(init, igt_init)
 selftest(sanitycheck, igt_sanitycheck) /* keep last */
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index b06590c860e2..1fe104cc05d5 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -22,6 +22,89 @@ static int igt_sanitycheck(void *ignored)
 	return 0;
 }
 
+static int igt_init(void *ignored)
+{
+	struct drm_mm mm;
+	struct drm_mm_node *hole;
+	struct drm_mm_node tmp;
+	u64 start, end;
+	int ret = -EINVAL;
+
+	memset(&mm, 0, sizeof(mm));
+	if (drm_mm_initialized(&mm)) {
+		pr_err("zeroed mm claims to be initialized\n");
+		return ret;
+	}
+
+	memset(&mm, 0xff, sizeof(mm));
+	drm_mm_init(&mm, 0, 4096);
+	if (!drm_mm_initialized(&mm)) {
+		pr_err("mm claims not to be initialized\n");
+		goto out;
+	}
+
+	if (!drm_mm_clean(&mm)) {
+		pr_err("mm not empty on creation\n");
+		goto out;
+	}
+
+	drm_mm_for_each_hole(hole, &mm, start, end) {
+		if (start != 0 || end != 4096) {
+			pr_err("empty mm has incorrect hole, found (%llx, %llx), expect (%llx, %llx)\n",
+			       start, end,
+			       0ull, 4096ull);
+			goto out;
+		}
+	}
+
+	memset(&tmp, 0, sizeof(tmp));
+	tmp.start = 0;
+	tmp.size = 4096;
+	ret = drm_mm_reserve_node(&mm, &tmp);
+	if (ret) {
+		pr_err("failed to reserve whole drm_mm\n");
+		goto out;
+	}
+	drm_mm_remove_node(&tmp);
+
+out:
+	if (ret)
+		drm_mm_debug_table(&mm, __func__);
+	drm_mm_takedown(&mm);
+	return ret;
+}
+
+static int igt_debug(void *ignored)
+{
+	struct drm_mm mm;
+	struct drm_mm_node nodes[2];
+	int ret;
+
+	drm_mm_init(&mm, 0, 4096);
+
+	memset(nodes, 0, sizeof(nodes));
+	nodes[0].start = 512;
+	nodes[0].size = 1024;
+	ret = drm_mm_reserve_node(&mm, &nodes[0]);
+	if (ret) {
+		pr_err("failed to reserve node[0] {start=%lld, size=%lld)\n",
+		       nodes[0].start, nodes[0].size);
+		return ret;
+	}
+
+	nodes[1].start = 4096 - 1024 - 512;
+	nodes[1].size = 1024;
+	ret = drm_mm_reserve_node(&mm, &nodes[1]);
+	if (ret) {
+		pr_err("failed to reserve node[1] {start=%lld, size=%lld)\n",
+		       nodes[1].start, nodes[1].size);
+		return ret;
+	}
+
+	drm_mm_debug_table(&mm, __func__);
+	return 0;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (4 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 05/34] drm: kselftest for drm_mm_init() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 10:44   ` Joonas Lahtinen
  2016-12-13 15:16   ` David Herrmann
  2016-12-12 11:53 ` [PATCH 07/34] drm: Add a simple prime number generator Chris Wilson
                   ` (28 subsequent siblings)
  34 siblings, 2 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

For testing, we want a reproducible PRNG, a plain linear congruent
generator is suitable for our very limited selftests.
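
For example (a sketch; "count" is just whatever number of elements a test
wants to shuffle), a reproducible random ordering can be derived from a
fixed seed:

	u32 lcg_state = 0x12345678; /* fixed seed => same sequence every run */
	int *order;

	/* order[] holds a shuffled 0..count-1 permutation (NULL on alloc
	 * failure); the caller kfree()s it when done.
	 */
	order = drm_random_order(count, &lcg_state);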

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/Kconfig        |  5 +++++
 drivers/gpu/drm/Makefile       |  1 +
 drivers/gpu/drm/lib/drm_rand.c | 51 ++++++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lib/drm_rand.h |  9 ++++++++
 4 files changed, 66 insertions(+)
 create mode 100644 drivers/gpu/drm/lib/drm_rand.c
 create mode 100644 drivers/gpu/drm/lib/drm_rand.h

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index fd341ab69c46..04d1d0a32c5c 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -48,10 +48,15 @@ config DRM_DEBUG_MM
 
 	  If in doubt, say "N".
 
+config DRM_LIB_RAND
+	bool
+	default n
+
 config DRM_DEBUG_MM_SELFTEST
 	tristate "kselftests for DRM range manager (struct drm_mm)"
 	depends on DRM
 	depends on DEBUG_KERNEL
+	select DRM_LIB_RAND
 	default n
 	help
 	  This option provides a kernel module that can be used to test
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index c8aed3688b20..363eb1a23151 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -18,6 +18,7 @@ drm-y       :=	drm_auth.o drm_bufs.o drm_cache.o \
 		drm_plane.o drm_color_mgmt.o drm_print.o \
 		drm_dumb_buffers.o drm_mode_config.o
 
+drm-$(CONFIG_DRM_LIB_RAND) += lib/drm_rand.o
 obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/test-drm_mm.o
 
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
diff --git a/drivers/gpu/drm/lib/drm_rand.c b/drivers/gpu/drm/lib/drm_rand.c
new file mode 100644
index 000000000000..f203c47bb525
--- /dev/null
+++ b/drivers/gpu/drm/lib/drm_rand.c
@@ -0,0 +1,51 @@
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "drm_rand.h"
+
+u32 drm_lcg_random(u32 *state)
+{
+	u32 s = *state;
+
+#define rol(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
+	s = (s ^ rol(s, 5) ^ rol(s, 24)) + 0x37798849;
+#undef rol
+
+	*state = s;
+	return s;
+}
+EXPORT_SYMBOL(drm_lcg_random);
+
+int *drm_random_reorder(int *order, int count, u32 *state)
+{
+	int n;
+
+	for (n = count-1; n > 1; n--) {
+		int r = drm_lcg_random(state) % (n + 1);
+		if (r != n) {
+			int tmp = order[n];
+			order[n] = order[r];
+			order[r] = tmp;
+		}
+	}
+
+	return order;
+}
+EXPORT_SYMBOL(drm_random_reorder);
+
+int *drm_random_order(int count, u32 *state)
+{
+	int *order;
+	int n;
+
+	order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
+	if (!order)
+		return order;
+
+	for (n = 0; n < count; n++)
+		order[n] = n;
+
+	return drm_random_reorder(order, count, state);
+}
+EXPORT_SYMBOL(drm_random_order);
diff --git a/drivers/gpu/drm/lib/drm_rand.h b/drivers/gpu/drm/lib/drm_rand.h
new file mode 100644
index 000000000000..a3f22d115aac
--- /dev/null
+++ b/drivers/gpu/drm/lib/drm_rand.h
@@ -0,0 +1,9 @@
+#ifndef __DRM_RAND_H__
+#define __DRM_RAND_H__
+
+u32 drm_lcg_random(u32 *state);
+
+int *drm_random_reorder(int *order, int count, u32 *state);
+int *drm_random_order(int count, u32 *state);
+
+#endif /* __DRM_RAND_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 07/34] drm: Add a simple prime number generator
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (5 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 06/34] drm: Add a simple linear congruent generator PRNG Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-12 11:53 ` [PATCH 08/34] drm: kselftest for drm_mm_reserve_node() Chris Wilson
                   ` (27 subsequent siblings)
  34 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Prime numbers are interesting for testing components that use multiplies
and divides, such as testing struct drm_mm alignment computations.
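
For illustration (a sketch; the upper bound is arbitrary), the later
selftests pick their sizes and alignments by walking a small set of primes:

	int p;

	/* visits p = 1 and then every prime below 8192 */
	drm_for_each_prime(p, 8192)
		pr_debug("%d\n", p);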

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/Kconfig                 |   5 +
 drivers/gpu/drm/Makefile                |   1 +
 drivers/gpu/drm/lib/drm_prime_numbers.c | 174 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lib/drm_prime_numbers.h |  10 ++
 4 files changed, 190 insertions(+)
 create mode 100644 drivers/gpu/drm/lib/drm_prime_numbers.c
 create mode 100644 drivers/gpu/drm/lib/drm_prime_numbers.h

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 04d1d0a32c5c..09f076896844 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -52,11 +52,16 @@ config DRM_LIB_RAND
 	bool
 	default n
 
+config DRM_LIB_PRIMES
+	bool
+	default n
+
 config DRM_DEBUG_MM_SELFTEST
 	tristate "kselftests for DRM range manager (struct drm_mm)"
 	depends on DRM
 	depends on DEBUG_KERNEL
 	select DRM_LIB_RAND
+	select DRM_LIB_PRIMES
 	default n
 	help
 	  This option provides a kernel module that can be used to test
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 363eb1a23151..24fe5d47960a 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -19,6 +19,7 @@ drm-y       :=	drm_auth.o drm_bufs.o drm_cache.o \
 		drm_dumb_buffers.o drm_mode_config.o
 
 drm-$(CONFIG_DRM_LIB_RAND) += lib/drm_rand.o
+obj-$(CONFIG_DRM_LIB_PRIMES) += lib/drm_prime_numbers.o
 obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/test-drm_mm.o
 
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
diff --git a/drivers/gpu/drm/lib/drm_prime_numbers.c b/drivers/gpu/drm/lib/drm_prime_numbers.c
new file mode 100644
index 000000000000..c6cc0c32ce22
--- /dev/null
+++ b/drivers/gpu/drm/lib/drm_prime_numbers.c
@@ -0,0 +1,174 @@
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+
+#include "drm_prime_numbers.h"
+
+static DEFINE_MUTEX(lock);
+
+static struct primes {
+	struct rcu_head rcu;
+	unsigned long last, sz;
+	unsigned long primes[];
+} __rcu *primes;
+
+static bool slow_is_prime_number(unsigned long x)
+{
+	unsigned long y = int_sqrt(x) + 1;
+
+	while (y > 1) {
+		if ((x % y) == 0)
+			break;
+		y--;
+	}
+
+	return y == 1;
+}
+
+static unsigned long slow_next_prime_number(unsigned long x)
+{
+	for (;;) {
+		if (slow_is_prime_number(++x))
+			return x;
+	}
+}
+
+static unsigned long mark_multiples(unsigned long x,
+				    unsigned long *p,
+				    unsigned long start,
+				    unsigned long end)
+{
+	unsigned long m;
+
+	m = 2 * x;
+	if (m < start)
+		m = (start / x + 1) * x;
+
+	while (m < end) {
+		__clear_bit(m, p);
+		m += x;
+	}
+
+	return x;
+}
+
+static struct primes *expand(unsigned long x)
+{
+	unsigned long sz, y, prev;
+	struct primes *p, *new;
+
+	sz = x * x;
+	if (sz < x)
+		return NULL;
+
+	mutex_lock(&lock);
+	p = rcu_dereference_protected(primes, lockdep_is_held(&lock));
+	if (p && x < p->last)
+		goto unlock;
+
+	sz = round_up(sz, BITS_PER_LONG);
+	new = kmalloc(sizeof(*new) + sz / sizeof(long), GFP_KERNEL);
+	if (!new) {
+		p = NULL;
+		goto unlock;
+	}
+
+	/* Where memory permits, track the primes using the
+	 * Sieve of Eratosthenes.
+	 */
+	if (p) {
+		prev = p->sz;
+		memcpy(new->primes, p->primes, prev / BITS_PER_LONG);
+	} else {
+		prev = 0;
+	}
+	memset(new->primes + prev / BITS_PER_LONG,
+	       0xff, (sz - prev) / sizeof(long));
+	for (y = 2UL; y < sz; y = find_next_bit(new->primes, sz, y + 1))
+		new->last = mark_multiples(y, new->primes, prev, sz);
+	new->sz = sz;
+
+	rcu_assign_pointer(primes, new);
+	if (p)
+		kfree_rcu(p, rcu);
+	p = new;
+
+unlock:
+	mutex_unlock(&lock);
+	return p;
+}
+
+unsigned long drm_next_prime_number(unsigned long x)
+{
+	struct primes *p;
+
+	if (x < 2)
+		return 2;
+
+	rcu_read_lock();
+	p = rcu_dereference(primes);
+	if (!p || x >= p->last) {
+		rcu_read_unlock();
+
+		p = expand(x);
+		if (!p)
+			return slow_next_prime_number(x);
+
+		rcu_read_lock();
+	}
+
+	x = find_next_bit(p->primes, p->last, x + 1);
+	rcu_read_unlock();
+
+	return x;
+}
+EXPORT_SYMBOL(drm_next_prime_number);
+
+bool drm_is_prime_number(unsigned long x)
+{
+	struct primes *p;
+	bool result;
+
+	switch (x) {
+	case 0:
+		return false;
+	case 1:
+	case 2:
+	case 3:
+		return true;
+	}
+
+	rcu_read_lock();
+	p = rcu_dereference(primes);
+	if (!p || x >= p->last) {
+		rcu_read_unlock();
+
+		p = expand(x);
+		if (!p)
+			return slow_is_prime_number(x);
+
+		rcu_read_lock();
+	}
+
+	result = test_bit(x, p->primes);
+	rcu_read_unlock();
+
+	return result;
+}
+EXPORT_SYMBOL(drm_is_prime_number);
+
+static int __init drm_primes_init(void)
+{
+	return 0;
+}
+
+static void __exit drm_primes_exit(void)
+{
+	kfree(primes);
+}
+
+module_init(drm_primes_init);
+module_exit(drm_primes_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/lib/drm_prime_numbers.h b/drivers/gpu/drm/lib/drm_prime_numbers.h
new file mode 100644
index 000000000000..7bc58cf9a86c
--- /dev/null
+++ b/drivers/gpu/drm/lib/drm_prime_numbers.h
@@ -0,0 +1,10 @@
+#ifndef __DRM_PRIMES_H__
+#define __DRM_PRIMES_H__
+
+bool drm_is_prime_number(unsigned long x);
+unsigned long drm_next_prime_number(unsigned long x);
+
+#define drm_for_each_prime(prime, max) \
+	for (prime = 1;	prime < (max); prime = drm_next_prime_number(prime))
+
+#endif /* __DRM_PRIMES_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 08/34] drm: kselftest for drm_mm_reserve_node()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (6 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 07/34] drm: Add a simple prime number generator Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14  9:55   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 09/34] drm: kselftest for drm_mm_insert_node() Chris Wilson
                   ` (26 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Exercise drm_mm_reserve_node(), check that we can't reserve an already
occupied range and that the lists are correct after reserving/removing.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 132 +++++++++++++++++++++++++++
 2 files changed, 133 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index fddf2c01c97f..639913a69101 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(reserve, igt_reserve)
 selftest(debug, igt_debug)
 selftest(init, igt_init)
 selftest(sanitycheck, igt_sanitycheck) /* keep last */
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 1fe104cc05d5..13b2cfdb4d44 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -11,6 +11,9 @@
 
 #include <drm/drm_mm.h>
 
+#include "../lib/drm_rand.h"
+#include "../lib/drm_prime_numbers.h"
+
 #define TESTS "drm_mm_selftests.h"
 #include "drm_selftest.h"
 
@@ -105,6 +108,135 @@ static int igt_debug(void *ignored)
 	return 0;
 }
 
+static int __igt_reserve(int count, u64 size)
+{
+	u32 lcg_state = random_seed;
+	struct drm_mm mm;
+	struct drm_mm_node *node, *next;
+	int *order, n;
+	int ret;
+
+	/* Fill a range with lots of nodes, check it doesn't fail too early */
+
+	ret = -ENOMEM;
+	order = drm_random_order(count, &lcg_state);
+	if (!order)
+		goto err;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, count * size);
+	if (!drm_mm_clean(&mm)) {
+		pr_err("mm not empty on creation\n");
+		goto out;
+	}
+
+	for (n = 0; n < count; n++) {
+		int err;
+
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (!node) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		node->start = order[n] * size;
+		node->size = size;
+
+		err = drm_mm_reserve_node(&mm, node);
+		if (err) {
+			pr_err("reserve failed, step %d, start %llu\n",
+			       n, node->start);
+			ret = err;
+			goto out;
+		}
+
+		if (!drm_mm_node_allocated(node)) {
+			pr_err("reserved node not allocated! step %d, start %llu\n",
+			       n, node->start);
+			goto out;
+		}
+	}
+
+	/* Repeated use should then fail */
+	drm_random_reorder(order, count, &lcg_state);
+	for (n = 0; n < count; n++) {
+		struct drm_mm_node tmp = {
+			.start = order[n] * size,
+			.size = 1
+		};
+
+		if (!drm_mm_reserve_node(&mm, &tmp)) {
+			drm_mm_remove_node(&tmp);
+			pr_err("impossible reserve succeeded, step %d, start %llu\n",
+			       n, tmp.start);
+			goto out;
+		}
+	}
+
+	/* Overlapping use should then fail */
+	for (n = 0; n < count; n++) {
+		struct drm_mm_node tmp = {
+			.start = 0,
+			.size = size * count,
+		};
+
+		if (!drm_mm_reserve_node(&mm, &tmp)) {
+			drm_mm_remove_node(&tmp);
+			pr_err("impossible reserve succeeded, step %d, start %llu\n",
+			       n, tmp.start);
+			goto out;
+		}
+	}
+	for (n = 0; n < count; n++) {
+		struct drm_mm_node tmp = {
+			.start = size * n,
+			.size = size * (count - n),
+		};
+
+		if (!drm_mm_reserve_node(&mm, &tmp)) {
+			drm_mm_remove_node(&tmp);
+			pr_err("impossible reserve succeeded, step %d, start %llu\n",
+			       n, tmp.start);
+			goto out;
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm) {
+		drm_mm_remove_node(node);
+		kfree(node);
+	}
+	drm_mm_takedown(&mm);
+	kfree(order);
+err:
+	return ret;
+}
+
+static int igt_reserve(void *ignored)
+{
+	const unsigned int count = BIT(10);
+	int n, ret;
+
+	drm_for_each_prime(n, 54) {
+		u64 size = BIT_ULL(n);
+
+		ret = __igt_reserve(count, size - 1);
+		if (ret)
+			return ret;
+
+		ret = __igt_reserve(count, size);
+		if (ret)
+			return ret;
+
+		ret = __igt_reserve(count, size + 1);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 09/34] drm: kselftest for drm_mm_insert_node()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (7 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 08/34] drm: kselftest for drm_mm_reserve_node() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14 12:26   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 10/34] drm: kselftest for drm_mm_replace_node() Chris Wilson
                   ` (25 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Exercise drm_mm_insert_node(), check that we can't overfill a range and
that the lists are correct after inserting/removing.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 209 +++++++++++++++++++++++++++
 2 files changed, 210 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 639913a69101..0927695c3fd4 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(insert, igt_insert)
 selftest(reserve, igt_reserve)
 selftest(debug, igt_debug)
 selftest(init, igt_init)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 13b2cfdb4d44..cd57a1de7f89 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -237,6 +237,215 @@ static int igt_reserve(void *ignored)
 	return 0;
 }
 
+static int __igt_insert(int count, u64 size)
+{
+	u32 lcg_state = random_seed;
+	struct drm_mm mm;
+	struct drm_mm_node *nodes, *node, *next;
+	int *order, n, o = 0;
+	int ret;
+
+	/* Fill a range with lots of nodes, check it doesn't fail too early */
+
+	ret = -ENOMEM;
+	nodes = vzalloc(count * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	order = drm_random_order(count, &lcg_state);
+	if (!order)
+		goto err_nodes;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, count * size);
+	if (!drm_mm_clean(&mm)) {
+		pr_err("mm not empty on creation\n");
+		goto out;
+	}
+
+	for (n = 0; n < count; n++) {
+		int err;
+
+		node = &nodes[n];
+		err = drm_mm_insert_node(&mm, node, size, 0,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d, start %llu\n",
+			       n, nodes[n].start);
+			ret = err;
+			goto out;
+		}
+
+		if (!drm_mm_node_allocated(node)) {
+			pr_err("inserted node not allocated! step %d, start %llu\n",
+			       n, node->start);
+			goto out;
+		}
+	}
+
+	/* Repeated use should then fail */
+	if (1) {
+		struct drm_mm_node tmp;
+
+		memset(&tmp, 0, sizeof(tmp));
+		if (!drm_mm_insert_node(&mm, &tmp, size, 0,
+					DRM_MM_SEARCH_DEFAULT)) {
+			drm_mm_remove_node(&tmp);
+			pr_err("impossible insert succeeded, step %d, start %llu\n",
+			       n, tmp.start);
+			goto out;
+		}
+	}
+
+	n = 0;
+	drm_mm_for_each_node(node, &mm) {
+		if (node->start != n * size) {
+			pr_err("node %d out of order, expected start %llx, found %llx\n",
+			       n, n * size, node->start);
+			goto out;
+		}
+
+		if (node->size != size) {
+			pr_err("node %d has wrong size, expected size %llx, found %llx\n",
+			       n, size, node->size);
+			goto out;
+		}
+
+		if (node->hole_follows) {
+			pr_err("node %d is followed by a hole!\n", n);
+			goto out;
+		}
+
+		n++;
+	}
+
+	for (n = 0; n < count; n++) {
+		drm_mm_for_each_node_in_range(node, &mm, n * size, (n + 1) * size) {
+			if (node->start != n * size) {
+				pr_err("lookup node %d out of order, expected start %llx, found %llx\n",
+				       n, n * size, node->start);
+				goto out;
+			}
+		}
+	}
+
+	/* Remove one and reinsert, as the only hole it should refill itself */
+	for (n = 0; n < count; n++) {
+		int err;
+
+		drm_mm_remove_node(&nodes[n]);
+		err = drm_mm_insert_node(&mm, &nodes[n], size, 0,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("reinsert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+
+		if (nodes[n].start != n * size) {
+			pr_err("reinsert node moved, step %d, expected %llx, found %llx\n",
+			       n, n * size, nodes[n].start);
+			goto out;
+		}
+	}
+
+	/* Remove several, reinsert, check full */
+	drm_for_each_prime(n, min(128, count)) {
+		int m;
+
+		for (m = 0; m < n; m++) {
+			node = &nodes[order[(o + m) % count]];
+			drm_mm_remove_node(node);
+		}
+
+		for (m = 0; m < n; m++) {
+			int err;
+
+			node = &nodes[order[(o + m) % count]];
+			err = drm_mm_insert_node(&mm, node, size, 0,
+						 DRM_MM_SEARCH_DEFAULT);
+			if (err) {
+				pr_err("insert failed, step %d, start %llu\n",
+				       n, node->start);
+				ret = err;
+				goto out;
+			}
+		}
+
+		o += n;
+
+		if (1) {
+			struct drm_mm_node tmp;
+
+			memset(&tmp, 0, sizeof(tmp));
+			if (!drm_mm_insert_node(&mm, &tmp, size, 0,
+						DRM_MM_SEARCH_DEFAULT)) {
+				drm_mm_remove_node(&tmp);
+				pr_err("impossible insert succeeded, start %llu\n",
+				       tmp.start);
+				goto out;
+			}
+		}
+
+		m = 0;
+		drm_mm_for_each_node(node, &mm) {
+			if (node->start != m * size) {
+				pr_err("node %d out of order, expected start %llx, found %llx\n",
+				       m, m * size, node->start);
+				goto out;
+			}
+
+			if (node->size != size) {
+				pr_err("node %d has wrong size, expected size %llx, found %llx\n",
+				       m, size, node->size);
+				goto out;
+			}
+
+			if (node->hole_follows) {
+				pr_err("node %d is followed by a hole!\n", m);
+				goto out;
+			}
+
+			m++;
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
+static int igt_insert(void *ignored)
+{
+	const unsigned int count = BIT(10);
+	int n, ret;
+
+	drm_for_each_prime(n, 54) {
+		u64 size = BIT_ULL(n);
+
+		ret = __igt_insert(count, size - 1);
+		if (ret)
+			return ret;
+
+		ret = __igt_insert(count, size);
+		if (ret)
+			return ret;
+
+		ret = __igt_insert(count, size + 1);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 10/34] drm: kselftest for drm_mm_replace_node()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (8 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 09/34] drm: kselftest for drm_mm_insert_node() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14 12:01   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 11/34] drm: kselftest for drm_mm_insert_node_in_range() Chris Wilson
                   ` (24 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Reuse drm_mm_insert_node() with a temporary node to exercise
drm_mm_replace_node(). We use the previous test in order to exercise the
various lists following replacement.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |  1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 45 ++++++++++++++++++++++++----
 2 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 0927695c3fd4..12d91c77244a 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(replace, igt_replace)
 selftest(insert, igt_insert)
 selftest(reserve, igt_reserve)
 selftest(debug, igt_debug)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index cd57a1de7f89..2ea671f34883 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -237,7 +237,7 @@ static int igt_reserve(void *ignored)
 	return 0;
 }
 
-static int __igt_insert(int count, u64 size)
+static int __igt_insert(int count, u64 size, bool replace)
 {
 	u32 lcg_state = random_seed;
 	struct drm_mm mm;
@@ -264,9 +264,10 @@ static int __igt_insert(int count, u64 size)
 	}
 
 	for (n = 0; n < count; n++) {
+		struct drm_mm_node tmp, *node;
 		int err;
 
-		node = &nodes[n];
+		node = memset(replace ? &tmp : &nodes[n], 0, sizeof(*node));
 		err = drm_mm_insert_node(&mm, node, size, 0,
 					 DRM_MM_SEARCH_DEFAULT);
 		if (err) {
@@ -281,6 +282,20 @@ static int __igt_insert(int count, u64 size)
 			       n, node->start);
 			goto out;
 		}
+
+		if (replace) {
+			drm_mm_replace_node(&tmp, &nodes[n]);
+			if (!drm_mm_node_allocated(&nodes[n])) {
+				pr_err("replaced new-node not allocated! step %d\n",
+				       n);
+				goto out;
+			}
+			if (drm_mm_node_allocated(&tmp)) {
+				pr_err("replaced old-node still allocated! step %d\n",
+				       n);
+				goto out;
+			}
+		}
 	}
 
 	/* Repeated use should then fail */
@@ -430,17 +445,37 @@ static int igt_insert(void *ignored)
 	drm_for_each_prime(n, 54) {
 		u64 size = BIT_ULL(n);
 
-		ret = __igt_insert(count, size - 1);
+		ret = __igt_insert(count, size - 1, false);
 		if (ret)
 			return ret;
 
-		ret = __igt_insert(count, size);
+		ret = __igt_insert(count, size, false);
+		if (ret)
+			return ret;
+
+		ret = __igt_insert(count, size + 1, false);
+	}
+
+	return 0;
+}
+
+static int igt_replace(void *ignored)
+{
+	const unsigned int count = BIT(10);
+	int n, ret;
+
+	drm_for_each_prime(n, 54) {
+		u64 size = BIT_ULL(n);
+
+		ret = __igt_insert(count, size - 1, true);
 		if (ret)
 			return ret;
 
-		ret = __igt_insert(count, size + 1);
+		ret = __igt_insert(count, size, true);
 		if (ret)
 			return ret;
+
+		ret = __igt_insert(count, size + 1, true);
 	}
 
 	return 0;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 11/34] drm: kselftest for drm_mm_insert_node_in_range()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (9 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 10/34] drm: kselftest for drm_mm_replace_node() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15  8:43   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 12/34] drm: kselftest for drm_mm and alignment Chris Wilson
                   ` (23 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Exercise drm_mm_insert_node_in_range(), check that we only allocate from
the specified range.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 188 +++++++++++++++++++++++++++
 2 files changed, 189 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 12d91c77244a..f2bd271fc21c 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(insert_range, igt_insert_range)
 selftest(replace, igt_replace)
 selftest(insert, igt_insert)
 selftest(reserve, igt_reserve)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 2ea671f34883..92744221749b 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -481,6 +481,194 @@ static int igt_replace(void *ignored)
 	return 0;
 }
 
+static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
+{
+	struct drm_mm mm;
+	struct drm_mm_node *nodes, *node, *next;
+	int n, start_n, end_n;
+	int ret, err;
+
+	/* Fill a range with lots of nodes, check it doesn't fail too early */
+	pr_debug("%s: count=%d, size=%llx, start=%llx, end=%llx\n",
+		 __func__, count, size, start, end);
+
+	ret = -ENOMEM;
+	nodes = vzalloc(count * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, count * size);
+	if (!drm_mm_clean(&mm)) {
+		pr_err("mm not empty on creation\n");
+		goto out;
+	}
+
+	for (n = 0; n < count; n++) {
+		err = drm_mm_insert_node(&mm, &nodes[n], size, 0,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d, start %llu\n",
+			       n, nodes[n].start);
+			ret = err;
+			goto out;
+		}
+	}
+
+	/* Repeated use should then fail */
+	if (1) {
+		struct drm_mm_node tmp;
+
+		memset(&tmp, 0, sizeof(tmp));
+		if (!drm_mm_insert_node_in_range(&mm, &tmp,
+						 size, 0,
+						 start, end,
+						 DRM_MM_SEARCH_DEFAULT)) {
+			drm_mm_remove_node(&tmp);
+			pr_err("impossible insert succeeded, step %d, start %llu\n",
+			       n, tmp.start);
+			goto out;
+		}
+	}
+
+	n = div64_u64(start, size);
+	drm_mm_for_each_node_in_range(node, &mm, start, end) {
+		if (node->start > end) {
+			pr_err("node %d out of range [%d, %d]\n",
+			       n,
+			       (int)div64_u64(start, size),
+			       (int)div64_u64(start + size - 1, size));
+			goto out;
+		}
+
+		if (node->start != n * size) {
+			pr_err("node %d out of order, expected start %llx, found %llx\n",
+			       n, n * size, node->start);
+			goto out;
+		}
+
+		if (node->size != size) {
+			pr_err("node %d has wrong size, expected size %llx, found %llx\n",
+			       n, size, node->size);
+			goto out;
+		}
+
+		if (node->hole_follows) {
+			pr_err("node %d is followed by a hole!\n", n);
+			goto out;
+		}
+
+		n++;
+	}
+
+	/* Remove one and reinsert, as the only hole it should refill itself */
+	start_n = div64_u64(start + size - 1, size);
+	end_n = div64_u64(end - size, size);
+	for (n = start_n; n <= end_n; n++) {
+		drm_mm_remove_node(&nodes[n]);
+		err = drm_mm_insert_node_in_range(&mm, &nodes[n], size, 0,
+						  start, end,
+						  DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("reinsert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+
+		if (nodes[n].start != n * size) {
+			pr_err("reinsert node moved, step %d, expected %llx, found %llx\n",
+			       n, n * size, nodes[n].start);
+			goto out;
+		}
+	}
+
+	/* Remove the entire block, reinsert (order will then be undefined) */
+	for (n = start_n; n <= end_n; n++)
+		drm_mm_remove_node(&nodes[n]);
+
+	for (n = start_n; n <= end_n; n++) {
+		err = drm_mm_insert_node_in_range(&mm, &nodes[n], size, 0,
+						  start, end,
+						  DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("reinsert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+	}
+
+	n = div64_u64(start, size);
+	drm_mm_for_each_node_in_range(node, &mm, start, end) {
+		if (node->start > end) {
+			pr_err("node %d out of range [%d, %d]\n",
+			       n,
+			       (int)div64_u64(start, size),
+			       (int)div64_u64(start + size - 1, size));
+			goto out;
+		}
+
+		if (node->start != n * size) {
+			pr_err("node %d out of order, expected start %llx, found %llx\n",
+			       n, n * size, node->start);
+			goto out;
+		}
+
+		if (node->size != size) {
+			pr_err("node %d has wrong size, expected size %llx, found %llx\n",
+			       n, size, node->size);
+			goto out;
+		}
+
+		if (node->hole_follows) {
+			pr_err("node %d is followed by a hole!\n", n);
+			goto out;
+		}
+
+		n++;
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	vfree(nodes);
+err:
+	return ret;
+}
+
+static int igt_insert_range(void *ignored)
+{
+	const int max = 4096 * 8192;
+	int ret;
+
+	ret = __igt_insert_range(8192, 4096, 0, max);
+	if (ret)
+		return ret;
+
+	ret = __igt_insert_range(8192, 4096, 1, max);
+	if (ret)
+		return ret;
+
+	ret = __igt_insert_range(8192, 4096, 0, max - 1);
+	if (ret)
+		return ret;
+
+	ret = __igt_insert_range(8192, 4096, 0, max/2);
+	if (ret)
+		return ret;
+
+	ret = __igt_insert_range(8192, 4096, max/2, max);
+	if (ret)
+		return ret;
+
+	ret = __igt_insert_range(8192, 4096, max/4+1, 3*max/4-1);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 12/34] drm: kselftest for drm_mm and alignment
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (10 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 11/34] drm: kselftest for drm_mm_insert_node_in_range() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15  8:59   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 13/34] drm: kselftest for drm_mm and eviction Chris Wilson
                   ` (22 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Check that we can request alignment to any power-of-two, or to a reasonable
selection of primes, using a plain drm_mm_insert_node_generic(), and that
the node is placed at a suitably aligned offset.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   3 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 104 +++++++++++++++++++++++++++
 2 files changed, 107 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index f2bd271fc21c..9eb0c9fd9dff 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,9 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(align64, igt_align64)
+selftest(align32, igt_align32)
+selftest(align, igt_align)
 selftest(insert_range, igt_insert_range)
 selftest(replace, igt_replace)
 selftest(insert, igt_insert)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 92744221749b..be47e4aeec12 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -669,6 +669,110 @@ static int igt_insert_range(void *ignored)
 	return 0;
 }
 
+static int igt_align(void *ignored)
+{
+	struct drm_mm mm;
+	struct drm_mm_node *node, *next;
+	int ret = -EINVAL;
+	int prime;
+
+	drm_mm_init(&mm, 1, U64_MAX - 1);
+
+	drm_for_each_prime(prime, 8192) {
+		u64 size;
+		int err;
+
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (!node) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		size = drm_next_prime_number(prime);
+		err = drm_mm_insert_node_generic(&mm, node, size, prime, 0,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("insert failed with alignment=%d", prime);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)node->start % prime) {
+			pr_err("node inserted into wrong location %llx, expected alignment to %d [rem %d]\n",
+			       node->start, prime, (int)node->start % prime);
+			goto out;
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm) {
+		drm_mm_remove_node(node);
+		kfree(node);
+	}
+	drm_mm_takedown(&mm);
+	return ret;
+}
+
+static int igt_align_pot(int max)
+{
+	struct drm_mm mm;
+	struct drm_mm_node *node, *next;
+	int bit;
+	int ret = -EINVAL;
+
+	drm_mm_init(&mm, 1, U64_MAX - 1);
+
+	for (bit = max - 1; bit; bit--) {
+		u64 align, size;
+		int err;
+
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (!node) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		align = BIT_ULL(bit);
+		size = BIT_ULL(bit-1) + 1;
+		err = drm_mm_insert_node_generic(&mm, node, size, align, 0,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("insert failed with alignment=%llx [%d]",
+			       align, bit);
+			ret = err;
+			goto out;
+		}
+
+		if (node->start & (align - 1)) {
+			pr_err("node inserted into wrong location %llx, expected alignment to %llx [%d]\n",
+			       node->start, align, bit);
+			goto out;
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm) {
+		drm_mm_remove_node(node);
+		kfree(node);
+	}
+	drm_mm_takedown(&mm);
+	return ret;
+}
+
+static int igt_align32(void *ignored)
+{
+	return igt_align_pot(32);
+}
+
+static int igt_align64(void *ignored)
+{
+	return igt_align_pot(64);
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 13/34] drm: kselftest for drm_mm and eviction
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (11 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 12/34] drm: kselftest for drm_mm and alignment Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15  9:29   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 14/34] drm: kselftest for drm_mm and range restricted eviction Chris Wilson
                   ` (21 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Check that we can add arbitrary blocks to the eviction scanner in order
to find the first minimal hole that matches our request.
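
The scan protocol being exercised follows the usual drm_mm pattern, which
the test below mirrors directly (sketched here with the same evict_node
bookkeeping the test uses; size, alignment and count are placeholders):

	LIST_HEAD(evict_list);
	struct evict_node *e, *en;

	drm_mm_init_scan(&mm, size, alignment, 0 /* color */);

	/* Feed candidate blocks to the scanner until it reports a fit. */
	for (m = 0; m < count; m++) {
		e = &nodes[m];
		list_add(&e->link, &evict_list);
		if (drm_mm_scan_add_block(&e->node))
			break;
	}

	/*
	 * Every block added must be removed again in reverse (LIFO) order;
	 * only the blocks for which drm_mm_scan_remove_block() returns true
	 * overlap the chosen hole and actually need to be evicted.
	 */
	list_for_each_entry_safe(e, en, &evict_list, link) {
		if (!drm_mm_scan_remove_block(&e->node))
			list_del(&e->link);
	}

	/* evict_list now holds the minimal set to drm_mm_remove_node() */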

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 252 +++++++++++++++++++++++++++
 2 files changed, 253 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 9eb0c9fd9dff..7aaca10e7029 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(evict, igt_evict)
 selftest(align64, igt_align64)
 selftest(align32, igt_align32)
 selftest(align, igt_align)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index be47e4aeec12..f9d6a25a8006 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -773,6 +773,258 @@ static int igt_align64(void *ignored)
 	return igt_align_pot(64);
 }
 
+static void show_scan(const struct drm_mm *scan)
+{
+	pr_info("scan: hit [%llx, %llx], size=%lld, align=%d, color=%ld\n",
+		scan->scan_hit_start, scan->scan_hit_end,
+		scan->scan_size, scan->scan_alignment, scan->scan_color);
+}
+
+static void show_holes(const struct drm_mm *mm, int count)
+{
+	u64 hole_start, hole_end;
+	struct drm_mm_node *hole;
+
+	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
+		struct drm_mm_node *next = list_next_entry(hole, node_list);
+		const char *node1 = NULL, *node2 = NULL;
+
+		if (hole->allocated)
+			node1 = kasprintf(GFP_KERNEL,
+					  "[%llx + %lld, color=%ld], ",
+					  hole->start, hole->size, hole->color);
+
+		if (next->allocated)
+			node2 = kasprintf(GFP_KERNEL,
+					  ", [%llx + %lld, color=%ld]",
+					  next->start, next->size, next->color);
+
+		pr_info("%sHole [%llx - %llx, size %lld]%s\n",
+			node1,
+			hole_start, hole_end, hole_end - hole_start,
+			node2);
+
+		kfree(node2);
+		kfree(node1);
+
+		if (!--count)
+			break;
+	}
+}
+
+static int igt_evict(void *ignored)
+{
+	u32 lcg_state = random_seed;
+	const int size = 8192;
+	struct drm_mm mm;
+	struct evict_node {
+		struct drm_mm_node node;
+		struct list_head link;
+	} *nodes;
+	struct drm_mm_node *node, *next;
+	int *order, n, m;
+	int ret;
+
+	ret = -ENOMEM;
+	nodes = vzalloc(size * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	order = drm_random_order(size, &lcg_state);
+	if (!order)
+		goto err_nodes;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, size);
+	for (n = 0; n < size; n++) {
+		int err;
+
+		err = drm_mm_insert_node(&mm, &nodes[n].node, 1, 0,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+	}
+
+	/* First check that using the scanner doesn't break the mm */
+	if (1) {
+		LIST_HEAD(evict_list);
+		struct evict_node *e;
+
+		drm_mm_init_scan(&mm, 1, 0, 0);
+		for (m = 0; m < size; m++) {
+			e = &nodes[m];
+			list_add(&e->link, &evict_list);
+			drm_mm_scan_add_block(&e->node);
+		}
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_scan_remove_block(&e->node);
+
+		for (m = 0; m < size; m++) {
+			e = &nodes[m];
+
+			if (!drm_mm_node_allocated(&e->node)) {
+				pr_err("node[%d] no longer allocated!\n", m);
+				goto out;
+			}
+
+			e->link.next = NULL;
+		}
+
+		drm_mm_for_each_node(node, &mm) {
+			e = container_of(node, typeof(*e), node);
+			e->link.next = &e->link;
+		}
+
+		for (m = 0; m < size; m++) {
+			e = &nodes[m];
+
+			if (!e->link.next) {
+				pr_err("node[%d] no longer connected!\n", m);
+				goto out;
+			}
+		}
+
+		if (!list_empty(&mm.hole_stack)) {
+			pr_err("mm now contains holes!\n");
+			goto out;
+		}
+	}
+
+	for (n = 1; n < size; n <<= 1) {
+		const int nsize = size / 2;
+		LIST_HEAD(evict_list);
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int err;
+
+		drm_mm_init_scan(&mm, nsize, n, 0);
+		drm_random_reorder(order, size, &lcg_state);
+		for (m = 0; m < size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d\n",
+			       nsize, n);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node(&mm, &tmp, nsize, n,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
+			       nsize, n);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d], start=%llx, hole-follows?=%d\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	drm_for_each_prime(n, size) {
+		LIST_HEAD(evict_list);
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int nsize = (size - n + 1) / 2;
+		int err;
+
+		drm_mm_init_scan(&mm, nsize, n, 0);
+		drm_random_reorder(order, size, &lcg_state);
+		for (m = 0; m < size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d (prime)\n",
+			       nsize, n);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node(&mm, &tmp, nsize, n,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime)\n",
+			       nsize, n);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx, hole-follows?=%d\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 14/34] drm: kselftest for drm_mm and range restricted eviction
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (12 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 13/34] drm: kselftest for drm_mm and eviction Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15  9:50   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 15/34] drm: kselftest for drm_mm and top-down allocation Chris Wilson
                   ` (20 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Check that we can add arbitrary blocks to a range-restricted eviction
scanner in order to find the first minimal hole that matches our request.
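
The only new ingredient over the previous eviction test is the
range-restricted scan setup (a minimal sketch; the size, alignment and
range bounds are placeholders):

	/* only consider holes that lie inside [range_start, range_end] */
	drm_mm_init_scan_with_range(&mm, size, alignment, 0 /* color */,
				    range_start, range_end);

	/* drm_mm_scan_add_block()/drm_mm_scan_remove_block() as before */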

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 189 +++++++++++++++++++++++++++
 2 files changed, 190 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 7aaca10e7029..0b60b1da39a1 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(evict_range, igt_evict_range)
 selftest(evict, igt_evict)
 selftest(align64, igt_align64)
 selftest(align32, igt_align32)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index f9d6a25a8006..48e576531ead 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1025,6 +1025,195 @@ static int igt_evict(void *ignored)
 	return ret;
 }
 
+static int igt_evict_range(void *ignored)
+{
+	u32 lcg_state = random_seed;
+	const int size = 8192;
+	const int range_size = size / 2;
+	const int range_start = size / 4;
+	const int range_end = range_start + range_size;
+	struct drm_mm mm;
+	struct evict_node {
+		struct drm_mm_node node;
+		struct list_head link;
+	} *nodes;
+	struct drm_mm_node *node, *next;
+	int *order, n, m;
+	int ret;
+
+	ret = -ENOMEM;
+	nodes = vzalloc(size * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	order = drm_random_order(size, &lcg_state);
+	if (!order)
+		goto err_nodes;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, size);
+	for (n = 0; n < size; n++) {
+		int err;
+
+		err = drm_mm_insert_node(&mm, &nodes[n].node, 1, 0,
+					 DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+	}
+
+	for (n = 1; n < size; n <<= 1) {
+		const int nsize = range_size / 2;
+		LIST_HEAD(evict_list);
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int err;
+
+		drm_mm_init_scan_with_range(&mm, nsize, n, 0,
+					    range_start, range_end);
+		drm_random_reorder(order, size, &lcg_state);
+		for (m = 0; m < size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d\n",
+			       nsize, n);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node_in_range(&mm, &tmp, nsize, n,
+						  range_start, range_end,
+						  DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
+			       nsize, n);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if (tmp.start < range_start || tmp.start + tmp.size > range_end) {
+			pr_err("Inserted node did not fit into target range, start=%llx+%llx range=[%d, %d]\n",
+			       tmp.start, tmp.size, range_start, range_end);
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d], start=%llx, hole-follows?=%d\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	drm_for_each_prime(n, range_size) {
+		LIST_HEAD(evict_list);
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int nsize = (range_size - n + 1) / 2;
+		int err;
+
+		drm_mm_init_scan_with_range(&mm, nsize, n, 0,
+					    range_start, range_end);
+		drm_random_reorder(order, size, &lcg_state);
+		for (m = 0; m < size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d (prime)\n",
+			       nsize, n);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node_in_range(&mm, &tmp, nsize, n,
+						  range_start, range_end,
+						  DRM_MM_SEARCH_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime)\n",
+			       nsize, n);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if (tmp.start < range_start || tmp.start + tmp.size > range_end) {
+			pr_err("Inserted node did not fit into target range, start=%llx+%llx range=[%d, %d]\n",
+			       tmp.start, tmp.size, range_start, range_end);
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx, hole-follows?=%d\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 15/34] drm: kselftest for drm_mm and top-down allocation
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (13 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 14/34] drm: kselftest for drm_mm and range restricted eviction Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14 11:33   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 16/34] drm: kselftest for drm_mm and top-down alignment Chris Wilson
                   ` (19 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Check that if we request top-down allocation from drm_mm_insert_node()
we receive the next available hole from the top.
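
A top-down request goes through the same insertion entry point, just with
different search/create flags (a minimal sketch; mm, node and size are
placeholders):

	/* take the topmost suitable hole rather than the lowest one */
	err = drm_mm_insert_node_generic(&mm, &node, size, 0 /* align */,
					 0 /* color */,
					 DRM_MM_SEARCH_BELOW,
					 DRM_MM_CREATE_TOP);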

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |  1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 97 ++++++++++++++++++++++++++++
 2 files changed, 98 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 0b60b1da39a1..357fe430e36e 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(topdown, igt_topdown)
 selftest(evict_range, igt_evict_range)
 selftest(evict, igt_evict)
 selftest(align64, igt_align64)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 48e576531ead..22e7904cdaea 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1214,6 +1214,103 @@ static int igt_evict_range(void *ignored)
 	return ret;
 }
 
+static int igt_topdown(void *ignored)
+{
+	u32 lcg_state = random_seed;
+	const int size = 8192;
+	unsigned long *bitmap;
+	struct drm_mm mm;
+	struct drm_mm_node *nodes, *node, *next;
+	int *order, n, m, o = 0;
+	int ret;
+
+	ret = -ENOMEM;
+	nodes = vzalloc(size * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	bitmap = kzalloc(size / BITS_PER_LONG * sizeof(unsigned long),
+			 GFP_TEMPORARY);
+	if (!bitmap)
+		goto err_nodes;
+
+	order = drm_random_order(size, &lcg_state);
+	if (!order)
+		goto err_bitmap;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, size);
+	for (n = 0; n < size; n++) {
+		int err;
+
+		err = drm_mm_insert_node_generic(&mm, &nodes[n], 1, 0, 0,
+						 DRM_MM_SEARCH_BELOW,
+						 DRM_MM_CREATE_TOP);
+		if (err) {
+			pr_err("insert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+
+		if (nodes[n].hole_follows) {
+			pr_err("hole after topdown insert %d, start=%llx\n",
+			       n, nodes[n].start);
+			goto out;
+		}
+	}
+
+	drm_for_each_prime(n, size) {
+		for (m = 0; m < n; m++) {
+			node = &nodes[order[(o + m) % size]];
+			drm_mm_remove_node(node);
+			__set_bit(node->start, bitmap);
+		}
+
+		for (m = 0; m < n; m++) {
+			int err, last;
+
+			node = &nodes[order[(o + m) % size]];
+			err = drm_mm_insert_node_generic(&mm, node, 1, 0, 0,
+							 DRM_MM_SEARCH_BELOW,
+							 DRM_MM_CREATE_TOP);
+			if (err) {
+				pr_err("insert failed, step %d/%d\n", m, n);
+				ret = err;
+				goto out;
+			}
+
+			if (node->hole_follows) {
+				pr_err("hole after topdown insert %d/%d, start=%llx\n",
+				       m, n, node->start);
+				goto out;
+			}
+
+			last = find_last_bit(bitmap, size);
+			if (node->start != last) {
+				pr_err("node %d/%d not inserted into upmost hole, expected %d, found %lld\n",
+				       m, n, last, node->start);
+				goto out;
+			}
+			__clear_bit(last, bitmap);
+		}
+
+		o += n;
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_bitmap:
+	kfree(bitmap);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 16/34] drm: kselftest for drm_mm and top-down alignment
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (14 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 15/34] drm: kselftest for drm_mm and top-down allocation Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14 11:51   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 17/34] drm: kselftest for drm_mm and color adjustment Chris Wilson
                   ` (18 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Check that if we request top-down allocation with a particular alignment
from drm_mm_insert_node(), the start of the node matches our request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |  1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 92 ++++++++++++++++++++++++++++
 2 files changed, 93 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 357fe430e36e..a7cceeb21356 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(topdown_align, igt_topdown_align)
 selftest(topdown, igt_topdown)
 selftest(evict_range, igt_evict_range)
 selftest(evict, igt_evict)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 22e7904cdaea..078511c76efb 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1311,6 +1311,98 @@ static int igt_topdown(void *ignored)
 	return ret;
 }
 
+static int igt_topdown_align(void *ignored)
+{
+	struct drm_mm mm;
+	struct drm_mm_node tmp, resv;
+	int ret = -EINVAL;
+	int n, m, err;
+
+	drm_mm_init(&mm, 0, ~0ull);
+	memset(&tmp, 0, sizeof(tmp));
+	memset(&resv, 0, sizeof(resv));
+
+	for (m = 0; m < 32; m++) {
+		u64 end = ~0ull;
+
+		if (m) {
+			resv.size = BIT_ULL(m);
+			end -= resv.size;
+			resv.start = end;
+
+			err = drm_mm_reserve_node(&mm, &resv);
+			if (err) {
+				pr_err("reservation of sentinel node failed\n");
+				ret = err;
+				goto out;
+			}
+		}
+
+		for (n = 0; n < 63 - m; n++) {
+			u64 align = BIT_ULL(n);
+
+			err = drm_mm_insert_node_generic(&mm, &tmp, 1, align, 0,
+							 DRM_MM_SEARCH_BELOW,
+							 DRM_MM_CREATE_TOP);
+			drm_mm_remove_node(&tmp);
+			if (err) {
+				pr_err("insert failed, ret=%d\n", err);
+				ret = err;
+				goto out;
+			}
+
+			if (tmp.start & (align - 1)) {
+				pr_err("insert alignment failed, alignment=%llx, start=%llx\n",
+				       align, tmp.start);
+				goto out;
+			}
+
+			if (tmp.start < end - align) {
+				pr_err("topdown insert failed, start=%llx, align=%llx, end=%llx\n",
+				       tmp.start, align, end);
+				goto out;
+			}
+		}
+
+		drm_for_each_prime(n, min(8192ull, end - 1)) {
+			u64 rem;
+
+			err = drm_mm_insert_node_generic(&mm, &tmp, 1, n, 0,
+							 DRM_MM_SEARCH_BELOW,
+							 DRM_MM_CREATE_TOP);
+			drm_mm_remove_node(&tmp);
+			if (err) {
+				pr_err("insert failed, ret=%d\n", err);
+				ret = err;
+				goto out;
+			}
+
+			div64_u64_rem(tmp.start, n, &rem);
+			if (rem) {
+				pr_err("insert alignment failed, alignment=%d, start=%llx (offset %d)\n",
+				       n, tmp.start, (int)rem);
+				goto out;
+			}
+
+			if (tmp.start < end - n) {
+				pr_err("topdown insert failed, start=%llx, align=%d, end=%llx\n",
+				       tmp.start, n, end);
+				goto out;
+			}
+		}
+
+		if (resv.allocated)
+			drm_mm_remove_node(&resv);
+	}
+
+	ret = 0;
+out:
+	if (resv.allocated)
+		drm_mm_remove_node(&resv);
+	drm_mm_takedown(&mm);
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 17/34] drm: kselftest for drm_mm and color adjustment
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (15 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 16/34] drm: kselftest for drm_mm and top-down alignment Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15 11:04   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 18/34] drm: kselftest for drm_mm and color eviction Chris Wilson
                   ` (17 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Check that after applying the driver's color adjustment, fitting of the
node and its alignment are still correct.
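
The coloring machinery under test is the driver-supplied color_adjust
hook: every node carries an opaque color, and the callback may shrink the
candidate hole when it abuts neighbours of a different color. A minimal
sketch of installing such a hook (the body mirrors the no_color_touching()
helper added below; the hook name is illustrative):

	static void my_color_adjust(struct drm_mm_node *node,
				    unsigned long color,
				    u64 *start, u64 *end)
	{
		/* node precedes the hole; keep one unit clear of a different color */
		if (node->allocated && node->color != color)
			++*start;

		/* and likewise for the node following the hole */
		node = list_next_entry(node, node_list);
		if (node->allocated && node->color != color)
			--*end;
	}

	...
	mm.color_adjust = my_color_adjust;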

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 183 +++++++++++++++++++++++++++
 2 files changed, 184 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index a7cceeb21356..408083b54177 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(color, igt_color)
 selftest(topdown_align, igt_topdown_align)
 selftest(topdown, igt_topdown)
 selftest(evict_range, igt_evict_range)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 078511c76efb..0fe5344fe789 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1403,6 +1403,189 @@ static int igt_topdown_align(void *ignored)
 	return ret;
 }
 
+static void no_color_touching(struct drm_mm_node *node,
+			      unsigned long color,
+			      u64 *start,
+			      u64 *end)
+{
+	if (node->allocated && node->color != color)
+		++*start;
+
+	node = list_next_entry(node, node_list);
+	if (node->allocated && node->color != color)
+		--*end;
+}
+
+static int igt_color(void *ignored)
+{
+	const int count = 4096;
+	struct drm_mm mm;
+	struct drm_mm_node *node, *nn;
+	const struct modes {
+		const char *name;
+		unsigned int search;
+		unsigned int create;
+	} modes[] = {
+		{ "default", DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT },
+		{ "best", DRM_MM_SEARCH_BEST, DRM_MM_CREATE_DEFAULT },
+		{ "top-down", DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP },
+	};
+	int ret = -EINVAL;
+	int n, m;
+
+	drm_mm_init(&mm, 0, ~0ull);
+
+	for (n = 1; n <= count; n++) {
+		int err;
+
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (!node) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		err = drm_mm_insert_node_generic(&mm, node, n, 0, n,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d\n", n);
+			kfree(node);
+			ret = err;
+			goto out;
+		}
+	}
+
+	drm_mm_for_each_node_safe(node, nn, &mm) {
+		if (node->color != node->size) {
+			pr_err("invalid color stored: expected %lld, found %ld\n",
+			       node->size, node->color);
+
+			goto out;
+		}
+
+		drm_mm_remove_node(node);
+		kfree(node);
+	}
+
+	/* Now, let's start experimenting with applying a color callback */
+	mm.color_adjust = no_color_touching;
+	for (m = 0; m < ARRAY_SIZE(modes); m++) {
+		u64 last;
+		int err;
+
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (!node) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		node->size = 1 + 2*count;
+		node->color = node->size;
+
+		err = drm_mm_reserve_node(&mm, node);
+		if (err) {
+			pr_err("initial reserve failed!\n");
+			goto out;
+		}
+
+		last = node->start + node->size;
+
+		for (n = 1; n <= count; n++) {
+			int rem;
+
+			node = kzalloc(sizeof(*node), GFP_KERNEL);
+			if (!node) {
+				ret = -ENOMEM;
+				goto out;
+			}
+
+			node->start = last;
+			node->size = n + count;
+			node->color = node->size;
+
+			err = drm_mm_reserve_node(&mm, node);
+			if (err != -ENOSPC) {
+				pr_err("reserve %d did not report color overlap! err=%d\n",
+				       n, err);
+				goto out;
+			}
+
+			node->start += n + 1;
+			rem = node->start;
+			rem %= n + count;
+			node->start += n + count - rem;
+
+			err = drm_mm_reserve_node(&mm, node);
+			if (err) {
+				pr_err("reserve %d failed, err=%d\n", n, err);
+				goto out;
+			}
+
+			last = node->start + node->size;
+		}
+
+		for (n = 1; n <= count; n++) {
+			node = kzalloc(sizeof(*node), GFP_KERNEL);
+			if (!node) {
+				ret = -ENOMEM;
+				goto out;
+			}
+
+			err = drm_mm_insert_node_generic(&mm, node, n, n, n,
+							 modes[m].search,
+							 modes[m].create);
+			if (err) {
+				pr_err("%s insert failed, step %d, err=%d\n",
+				       modes[m].name, n, err);
+				kfree(node);
+				ret = err;
+				goto out;
+			}
+		}
+
+		drm_mm_for_each_node_safe(node, nn, &mm) {
+			u64 rem;
+
+			if (node->color != node->size) {
+				pr_err("%s invalid color stored: expected %lld, found %ld\n",
+				       modes[m].name, node->size, node->color);
+
+				goto out;
+			}
+
+			if (!node->hole_follows &&
+			    list_next_entry(node, node_list)->allocated) {
+				pr_err("%s colors abut; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
+				       modes[m].name,
+				       node->color, node->start, node->size,
+				       list_next_entry(node, node_list)->color,
+				       list_next_entry(node, node_list)->start,
+				       list_next_entry(node, node_list)->size);
+				goto out;
+			}
+
+			div64_u64_rem(node->start, node->size, &rem);
+			if (rem) {
+				pr_err("%s colored node misaligned, start=%llx expected alignment=%lld [rem=%lld]\n",
+				       modes[m].name, node->start, node->size, rem);
+				goto out;
+			}
+
+			drm_mm_remove_node(node);
+			kfree(node);
+		}
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, nn, &mm) {
+		drm_mm_remove_node(node);
+		kfree(node);
+	}
+	drm_mm_takedown(&mm);
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 18/34] drm: kselftest for drm_mm and color eviction
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (16 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 17/34] drm: kselftest for drm_mm and color adjustment Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-12 11:53 ` [PATCH 19/34] drm: kselftest for drm_mm and restricted " Chris Wilson
                   ` (16 subsequent siblings)
  34 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Check that after applying the driver's color adjustment, eviction
scanning finds a suitable hole.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 198 +++++++++++++++++++++++++++
 2 files changed, 199 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 408083b54177..cdb7c72ecadf 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(color_evict, igt_color_evict)
 selftest(color, igt_color)
 selftest(topdown_align, igt_topdown_align)
 selftest(topdown, igt_topdown)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 0fe5344fe789..663db5408b3a 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1586,6 +1586,204 @@ static int igt_color(void *ignored)
 	return ret;
 }
 
+static int igt_color_evict(void *ignored)
+{
+	u32 lcg_state = random_seed;
+	const int total_size = 8192;
+	unsigned long color = 0;
+	struct drm_mm mm;
+	struct evict_node {
+		struct drm_mm_node node;
+		struct list_head link;
+	} *nodes;
+	struct drm_mm_node *node, *next;
+	int *order, n, m;
+	int ret;
+
+	ret = -ENOMEM;
+	nodes = vzalloc(total_size * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	order = drm_random_order(total_size, &lcg_state);
+	if (!order)
+		goto err_nodes;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, 2*total_size - 1);
+	mm.color_adjust = no_color_touching;
+	for (n = 0; n < total_size; n++) {
+		int err;
+
+		err = drm_mm_insert_node_generic(&mm, &nodes[n].node,
+						 1, 0, color++,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+	}
+
+	for (n = 1; n <= total_size; n <<= 1) {
+		const int nsize = total_size - 1;
+		unsigned int c = color++;
+		LIST_HEAD(evict_list);
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int err;
+
+		drm_mm_init_scan(&mm, nsize, n, c);
+		drm_random_reorder(order, total_size, &lcg_state);
+		for (m = 0; m < total_size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d, color=%d\n",
+			       nsize, n, c);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, c,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%d, err=%d\n",
+			       nsize, n, c, err);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize) {
+			pr_err("Inserted did not fit the eviction hole: size=%lld [%d], align=%d [rem=%d], start=%llx\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		if (!tmp.hole_follows &&
+		    list_next_entry(&tmp, node_list)->allocated) {
+			pr_err("colors abut; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
+			       tmp.color, tmp.start, tmp.size,
+			       list_next_entry(&tmp, node_list)->color,
+			       list_next_entry(&tmp, node_list)->start,
+			       list_next_entry(&tmp, node_list)->size);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	drm_for_each_prime(n, total_size) {
+		LIST_HEAD(evict_list);
+		unsigned int c = color++;
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int nsize = (total_size - n + 1) / 2;
+		int err;
+
+		drm_mm_init_scan(&mm, nsize, n, c);
+		drm_random_reorder(order, total_size, &lcg_state);
+		for (m = 0; m < total_size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d (prime), color=%d\n",
+			       nsize, n, c);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, c,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime), color=%d, err=%d\n",
+			       nsize, n, c, err);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize) {
+			pr_err("Inserted did not fit the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		if (!tmp.hole_follows &&
+		    list_next_entry(&tmp, node_list)->allocated) {
+			pr_err("colors abut; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
+			       tmp.color, tmp.start, tmp.size,
+			       list_next_entry(&tmp, node_list)->color,
+			       list_next_entry(&tmp, node_list)->start,
+			       list_next_entry(&tmp, node_list)->size);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	ret = 0;
+out:
+	if (ret)
+		drm_mm_debug_table(&mm, __func__);
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 19/34] drm: kselftest for drm_mm and restricted color eviction
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (17 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 18/34] drm: kselftest for drm_mm and color eviction Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-12 11:53 ` [PATCH 20/34] drm/i915: Build DRM range manager selftests for CI Chris Wilson
                   ` (15 subsequent siblings)
  34 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Check that after applying the driver's color adjustment, restricted
eviction scanning finds a suitable hole.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 205 +++++++++++++++++++++++++++
 2 files changed, 206 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index cdb7c72ecadf..fb9d704e7943 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -5,6 +5,7 @@
  *
  * Tests are executed in reverse order by igt/drm_mm
  */
+selftest(color_evict_range, igt_color_evict_range)
 selftest(color_evict, igt_color_evict)
 selftest(color, igt_color)
 selftest(topdown_align, igt_topdown_align)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 663db5408b3a..02f10d9d1396 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1784,6 +1784,211 @@ static int igt_color_evict(void *ignored)
 	return ret;
 }
 
+static int igt_color_evict_range(void *ignored)
+{
+	u32 lcg_state = random_seed;
+	const int total_size = 8192;
+	const int range_size = total_size / 2;
+	const int range_start = total_size / 4;
+	const int range_end = range_start + range_size;
+	unsigned long color = 0;
+	struct drm_mm mm;
+	struct evict_node {
+		struct drm_mm_node node;
+		struct list_head link;
+	} *nodes;
+	struct drm_mm_node *node, *next;
+	int *order, n, m;
+	int ret;
+
+	ret = -ENOMEM;
+	nodes = vzalloc(total_size * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	order = drm_random_order(total_size, &lcg_state);
+	if (!order)
+		goto err_nodes;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, 2*total_size - 1);
+	mm.color_adjust = no_color_touching;
+	for (n = 0; n < total_size; n++) {
+		int err;
+
+		err = drm_mm_insert_node_generic(&mm, &nodes[n].node,
+						 1, 0, color++,
+						 DRM_MM_SEARCH_DEFAULT,
+						 DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("insert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+	}
+
+	for (n = 1; n <= range_size; n <<= 1) {
+		const int nsize = range_size / 2;
+		unsigned int c = color++;
+		LIST_HEAD(evict_list);
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int err;
+
+		drm_mm_init_scan_with_range(&mm, nsize, n, c,
+					    range_start, range_end);
+		drm_random_reorder(order, total_size, &lcg_state);
+		for (m = 0; m < total_size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d, color=%d\n",
+			       nsize, n, c);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node_in_range_generic(&mm, &tmp, nsize, n, c,
+							  range_start, range_end,
+							  DRM_MM_SEARCH_DEFAULT,
+							  DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%d, err=%d\n",
+			       nsize, n, c, err);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize) {
+			pr_err("Inserted did not fit the eviction hole: size=%lld [%d], align=%d [rem=%d], start=%llx\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		if (!tmp.hole_follows &&
+		    list_next_entry(&tmp, node_list)->allocated) {
+			pr_err("colors abut; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
+			       tmp.color, tmp.start, tmp.size,
+			       list_next_entry(&tmp, node_list)->color,
+			       list_next_entry(&tmp, node_list)->start,
+			       list_next_entry(&tmp, node_list)->size);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	drm_for_each_prime(n, range_size) {
+		LIST_HEAD(evict_list);
+		unsigned int c = color++;
+		struct evict_node *e, *en;
+		struct drm_mm_node tmp;
+		int nsize = (range_size - n + 1) / 2;
+		int err;
+
+		drm_mm_init_scan_with_range(&mm, nsize, n, c,
+					    range_start, range_end);
+		drm_random_reorder(order, total_size, &lcg_state);
+		for (m = 0; m < total_size; m++) {
+			e = &nodes[order[m]];
+			list_add(&e->link, &evict_list);
+			if (drm_mm_scan_add_block(&e->node))
+				break;
+		}
+		list_for_each_entry_safe(e, en, &evict_list, link) {
+			if (!drm_mm_scan_remove_block(&e->node))
+				list_del(&e->link);
+		}
+		if (list_empty(&evict_list)) {
+			pr_err("Failed to find eviction: size=%d, align=%d (prime), color=%d\n",
+			       nsize, n, c);
+			goto out;
+		}
+
+		list_for_each_entry(e, &evict_list, link)
+			drm_mm_remove_node(&e->node);
+
+		memset(&tmp, 0, sizeof(tmp));
+		err = drm_mm_insert_node_in_range_generic(&mm, &tmp, nsize, n, c,
+							  range_start, range_end,
+							  DRM_MM_SEARCH_DEFAULT,
+							  DRM_MM_CREATE_DEFAULT);
+		if (err) {
+			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime), color=%d, err=%d\n",
+			       nsize, n, c, err);
+			show_scan(&mm);
+			show_holes(&mm, 3);
+			ret = err;
+			goto out;
+		}
+
+		if ((int)tmp.start % n || tmp.size != nsize) {
+			pr_err("Inserted did not fit the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx\n",
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start);
+
+			drm_mm_remove_node(&tmp);
+			goto out;
+		}
+
+		if (!tmp.hole_follows &&
+		    list_next_entry(&tmp, node_list)->allocated) {
+			pr_err("colors abut; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
+			       tmp.color, tmp.start, tmp.size,
+			       list_next_entry(&tmp, node_list)->color,
+			       list_next_entry(&tmp, node_list)->start,
+			       list_next_entry(&tmp, node_list)->size);
+			goto out;
+		}
+
+		drm_mm_remove_node(&tmp);
+		list_for_each_entry(e, &evict_list, link) {
+			err = drm_mm_reserve_node(&mm, &e->node);
+			if (err) {
+				pr_err("Failed to reinsert node after eviction: start=%llx\n",
+				       e->node.start);
+				ret = err;
+				goto out;
+			}
+		}
+	}
+
+	ret = 0;
+out:
+	if (ret)
+		drm_mm_debug_table(&mm, __func__);
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 20/34] drm/i915: Build DRM range manager selftests for CI
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (18 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 19/34] drm: kselftest for drm_mm and restricted " Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15 12:31   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 21/34] drm: Promote drm_mm alignment to u64 Chris Wilson
                   ` (14 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Build the struct drm_mm selftests so that we can trivially run them
within our CI.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/Kconfig.debug | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 597648c7a645..598551dbf62c 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -24,6 +24,7 @@ config DRM_I915_DEBUG
         select X86_MSR # used by igt/pm_rpm
         select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
         select DRM_DEBUG_MM if DRM=y
+	select DRM_DEBUG_MM_SELFTEST
 	select DRM_I915_SW_FENCE_DEBUG_OBJECTS
         default n
         help
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 21/34] drm: Promote drm_mm alignment to u64
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (19 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 20/34] drm/i915: Build DRM range manager selftests for CI Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-12 12:39   ` Christian König
  2016-12-12 11:53 ` [PATCH 22/34] drm: Constify the drm_mm API Chris Wilson
                   ` (13 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

In places (e.g. i915.ko), the alignment is exported to userspace as a u64
and there now exists hardware for which we can indeed utilize a u64
alignment. As such, we need to keep 64-bit integers throughout when
handling alignment.
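
Concretely, the remainder was previously computed with do_div(), whose
divisor is only 32 bits wide, so a u64 alignment would be silently
truncated. The replacement pattern used throughout (sketched here on the
hole-fitting path) is:

	u64 rem;

	div64_u64_rem(adj_start, alignment, &rem); /* full 64-bit divisor */
	if (rem)
		adj_start += alignment - rem; /* round up to the alignment */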

Testcase: igt/drm_mm/align64
Testcase: igt/gem_exec_alignment
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/drm_mm.c                | 37 +++++++++++++++------------------
 drivers/gpu/drm/selftests/test-drm_mm.c |  2 +-
 include/drm/drm_mm.h                    | 16 +++++++-------
 3 files changed, 26 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 6e0735539545..621a5792a0dd 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -93,12 +93,12 @@
 
 static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
 						u64 size,
-						unsigned alignment,
+						u64 alignment,
 						unsigned long color,
 						enum drm_mm_search_flags flags);
 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
 						u64 size,
-						unsigned alignment,
+						u64 alignment,
 						unsigned long color,
 						u64 start,
 						u64 end,
@@ -227,7 +227,7 @@ static void drm_mm_interval_tree_add_node(struct drm_mm_node *hole_node,
 
 static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 				 struct drm_mm_node *node,
-				 u64 size, unsigned alignment,
+				 u64 size, u64 alignment,
 				 unsigned long color,
 				 enum drm_mm_allocator_flags flags)
 {
@@ -246,10 +246,9 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 		adj_start = adj_end - size;
 
 	if (alignment) {
-		u64 tmp = adj_start;
-		unsigned rem;
+		u64 rem;
 
-		rem = do_div(tmp, alignment);
+		div64_u64_rem(adj_start, alignment, &rem);
 		if (rem) {
 			if (flags & DRM_MM_CREATE_TOP)
 				adj_start -= rem;
@@ -376,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node);
  * 0 on success, -ENOSPC if there's no suitable hole.
  */
 int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
-			       u64 size, unsigned alignment,
+			       u64 size, u64 alignment,
 			       unsigned long color,
 			       enum drm_mm_search_flags sflags,
 			       enum drm_mm_allocator_flags aflags)
@@ -398,7 +397,7 @@ EXPORT_SYMBOL(drm_mm_insert_node_generic);
 
 static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
 				       struct drm_mm_node *node,
-				       u64 size, unsigned alignment,
+				       u64 size, u64 alignment,
 				       unsigned long color,
 				       u64 start, u64 end,
 				       enum drm_mm_allocator_flags flags)
@@ -423,10 +422,9 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
 		adj_start = adj_end - size;
 
 	if (alignment) {
-		u64 tmp = adj_start;
-		unsigned rem;
+		u64 rem;
 
-		rem = do_div(tmp, alignment);
+		div64_u64_rem(adj_start, alignment, &rem);
 		if (rem) {
 			if (flags & DRM_MM_CREATE_TOP)
 				adj_start -= rem;
@@ -482,7 +480,7 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
  * 0 on success, -ENOSPC if there's no suitable hole.
  */
 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
-					u64 size, unsigned alignment,
+					u64 size, u64 alignment,
 					unsigned long color,
 					u64 start, u64 end,
 					enum drm_mm_search_flags sflags,
@@ -549,16 +547,15 @@ void drm_mm_remove_node(struct drm_mm_node *node)
 }
 EXPORT_SYMBOL(drm_mm_remove_node);
 
-static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment)
+static int check_free_hole(u64 start, u64 end, u64 size, u64 alignment)
 {
 	if (end - start < size)
 		return 0;
 
 	if (alignment) {
-		u64 tmp = start;
-		unsigned rem;
+		u64 rem;
 
-		rem = do_div(tmp, alignment);
+		div64_u64_rem(start, alignment, &rem);
 		if (rem)
 			start += alignment - rem;
 	}
@@ -568,7 +565,7 @@ static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment)
 
 static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
 						      u64 size,
-						      unsigned alignment,
+						      u64 alignment,
 						      unsigned long color,
 						      enum drm_mm_search_flags flags)
 {
@@ -610,7 +607,7 @@ static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
 
 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
 							u64 size,
-							unsigned alignment,
+							u64 alignment,
 							unsigned long color,
 							u64 start,
 							u64 end,
@@ -728,7 +725,7 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  */
 void drm_mm_init_scan(struct drm_mm *mm,
 		      u64 size,
-		      unsigned alignment,
+		      u64 alignment,
 		      unsigned long color)
 {
 	mm->scan_color = color;
@@ -761,7 +758,7 @@ EXPORT_SYMBOL(drm_mm_init_scan);
  */
 void drm_mm_init_scan_with_range(struct drm_mm *mm,
 				 u64 size,
-				 unsigned alignment,
+				 u64 alignment,
 				 unsigned long color,
 				 u64 start,
 				 u64 end)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 02f10d9d1396..cad42919f15f 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -775,7 +775,7 @@ static int igt_align64(void *ignored)
 
 static void show_scan(const struct drm_mm *scan)
 {
-	pr_info("scan: hit [%llx, %llx], size=%lld, align=%d, color=%ld\n",
+	pr_info("scan: hit [%llx, %llx], size=%lld, align=%lld, color=%ld\n",
 		scan->scan_hit_start, scan->scan_hit_end,
 		scan->scan_size, scan->scan_alignment, scan->scan_color);
 }
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 8faa28ad97b3..e3322f92633e 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -92,12 +92,12 @@ struct drm_mm {
 	struct rb_root interval_tree;
 
 	unsigned int scan_check_range : 1;
-	unsigned scan_alignment;
+	unsigned int scanned_blocks;
 	unsigned long scan_color;
+	u64 scan_alignment;
 	u64 scan_size;
 	u64 scan_hit_start;
 	u64 scan_hit_end;
-	unsigned scanned_blocks;
 	u64 scan_start;
 	u64 scan_end;
 	struct drm_mm_node *prev_scanned_node;
@@ -242,7 +242,7 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node);
 int drm_mm_insert_node_generic(struct drm_mm *mm,
 			       struct drm_mm_node *node,
 			       u64 size,
-			       unsigned alignment,
+			       u64 alignment,
 			       unsigned long color,
 			       enum drm_mm_search_flags sflags,
 			       enum drm_mm_allocator_flags aflags);
@@ -265,7 +265,7 @@ int drm_mm_insert_node_generic(struct drm_mm *mm,
 static inline int drm_mm_insert_node(struct drm_mm *mm,
 				     struct drm_mm_node *node,
 				     u64 size,
-				     unsigned alignment,
+				     u64 alignment,
 				     enum drm_mm_search_flags flags)
 {
 	return drm_mm_insert_node_generic(mm, node, size, alignment, 0, flags,
@@ -275,7 +275,7 @@ static inline int drm_mm_insert_node(struct drm_mm *mm,
 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
 					struct drm_mm_node *node,
 					u64 size,
-					unsigned alignment,
+					u64 alignment,
 					unsigned long color,
 					u64 start,
 					u64 end,
@@ -302,7 +302,7 @@ int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
 static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
 					      struct drm_mm_node *node,
 					      u64 size,
-					      unsigned alignment,
+					      u64 alignment,
 					      u64 start,
 					      u64 end,
 					      enum drm_mm_search_flags flags)
@@ -344,11 +344,11 @@ __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
 
 void drm_mm_init_scan(struct drm_mm *mm,
 		      u64 size,
-		      unsigned alignment,
+		      u64 alignment,
 		      unsigned long color);
 void drm_mm_init_scan_with_range(struct drm_mm *mm,
 				 u64 size,
-				 unsigned alignment,
+				 u64 alignment,
 				 unsigned long color,
 				 u64 start,
 				 u64 end);
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 22/34] drm: Constify the drm_mm API
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (20 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 21/34] drm: Promote drm_mm alignment to u64 Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-12 11:53 ` [PATCH 23/34] drm: Simplify drm_mm_clean() Chris Wilson
                   ` (12 subsequent siblings)
  34 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Mark up the pointers as constant through the API where appropriate.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c                | 20 ++++++++++----------
 drivers/gpu/drm/i915/i915_gem_gtt.c     |  2 +-
 drivers/gpu/drm/selftests/test-drm_mm.c |  2 +-
 include/drm/drm_mm.h                    | 25 ++++++++++++-------------
 4 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 621a5792a0dd..381b4cf75da5 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -878,9 +878,9 @@ EXPORT_SYMBOL(drm_mm_scan_remove_block);
  * True if the allocator is completely free, false if there's still a node
  * allocated in it.
  */
-bool drm_mm_clean(struct drm_mm * mm)
+bool drm_mm_clean(const struct drm_mm *mm)
 {
-	struct list_head *head = __drm_mm_nodes(mm);
+	const struct list_head *head = __drm_mm_nodes(mm);
 
 	return (head->next->next == head);
 }
@@ -894,7 +894,7 @@ EXPORT_SYMBOL(drm_mm_clean);
  *
  * Note that @mm must be cleared to 0 before calling this function.
  */
-void drm_mm_init(struct drm_mm * mm, u64 start, u64 size)
+void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)
 {
 	INIT_LIST_HEAD(&mm->hole_stack);
 	mm->scanned_blocks = 0;
@@ -933,8 +933,8 @@ void drm_mm_takedown(struct drm_mm *mm)
 }
 EXPORT_SYMBOL(drm_mm_takedown);
 
-static u64 drm_mm_debug_hole(struct drm_mm_node *entry,
-				     const char *prefix)
+static u64 drm_mm_debug_hole(const struct drm_mm_node *entry,
+			     const char *prefix)
 {
 	u64 hole_start, hole_end, hole_size;
 
@@ -955,9 +955,9 @@ static u64 drm_mm_debug_hole(struct drm_mm_node *entry,
  * @mm: drm_mm allocator to dump
  * @prefix: prefix to use for dumping to dmesg
  */
-void drm_mm_debug_table(struct drm_mm *mm, const char *prefix)
+void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix)
 {
-	struct drm_mm_node *entry;
+	const struct drm_mm_node *entry;
 	u64 total_used = 0, total_free = 0, total = 0;
 
 	total_free += drm_mm_debug_hole(&mm->head_node, prefix);
@@ -976,7 +976,7 @@ void drm_mm_debug_table(struct drm_mm *mm, const char *prefix)
 EXPORT_SYMBOL(drm_mm_debug_table);
 
 #if defined(CONFIG_DEBUG_FS)
-static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry)
+static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry)
 {
 	u64 hole_start, hole_end, hole_size;
 
@@ -997,9 +997,9 @@ static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry)
  * @m: seq_file to dump to
  * @mm: drm_mm allocator to dump
  */
-int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm)
+int drm_mm_dump_table(struct seq_file *m, const struct drm_mm *mm)
 {
-	struct drm_mm_node *entry;
+	const struct drm_mm_node *entry;
 	u64 total_used = 0, total_free = 0, total = 0;
 
 	total_free += drm_mm_dump_hole(m, &mm->head_node);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index b5d4a6357d8a..2083c899ab78 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2715,7 +2715,7 @@ void i915_gem_gtt_finish_pages(struct drm_i915_gem_object *obj,
 	dma_unmap_sg(kdev, pages->sgl, pages->nents, PCI_DMA_BIDIRECTIONAL);
 }
 
-static void i915_gtt_color_adjust(struct drm_mm_node *node,
+static void i915_gtt_color_adjust(const struct drm_mm_node *node,
 				  unsigned long color,
 				  u64 *start,
 				  u64 *end)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index cad42919f15f..4758062fee17 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1403,7 +1403,7 @@ static int igt_topdown_align(void *ignored)
 	return ret;
 }
 
-static void no_color_touching(struct drm_mm_node *node,
+static void no_color_touching(const struct drm_mm_node *node,
 			      unsigned long color,
 			      u64 *start,
 			      u64 *end)
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index e3322f92633e..66140af0b5c9 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -102,7 +102,8 @@ struct drm_mm {
 	u64 scan_end;
 	struct drm_mm_node *prev_scanned_node;
 
-	void (*color_adjust)(struct drm_mm_node *node, unsigned long color,
+	void (*color_adjust)(const struct drm_mm_node *node,
+			     unsigned long color,
 			     u64 *start, u64 *end);
 };
 
@@ -116,7 +117,7 @@ struct drm_mm {
  * Returns:
  * True if the @node is allocated.
  */
-static inline bool drm_mm_node_allocated(struct drm_mm_node *node)
+static inline bool drm_mm_node_allocated(const struct drm_mm_node *node)
 {
 	return node->allocated;
 }
@@ -131,12 +132,12 @@ static inline bool drm_mm_node_allocated(struct drm_mm_node *node)
  * Returns:
  * True if the @mm is initialized.
  */
-static inline bool drm_mm_initialized(struct drm_mm *mm)
+static inline bool drm_mm_initialized(const struct drm_mm *mm)
 {
 	return mm->hole_stack.next;
 }
 
-static inline u64 __drm_mm_hole_node_start(struct drm_mm_node *hole_node)
+static inline u64 __drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
 {
 	return hole_node->start + hole_node->size;
 }
@@ -152,13 +153,13 @@ static inline u64 __drm_mm_hole_node_start(struct drm_mm_node *hole_node)
  * Returns:
  * Start of the subsequent hole.
  */
-static inline u64 drm_mm_hole_node_start(struct drm_mm_node *hole_node)
+static inline u64 drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
 {
 	BUG_ON(!hole_node->hole_follows);
 	return __drm_mm_hole_node_start(hole_node);
 }
 
-static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node)
+static inline u64 __drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
 {
 	return list_next_entry(hole_node, node_list)->start;
 }
@@ -174,7 +175,7 @@ static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node)
  * Returns:
  * End of the subsequent hole.
  */
-static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node)
+static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
 {
 	return __drm_mm_hole_node_end(hole_node);
 }
@@ -314,11 +315,9 @@ static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
 
 void drm_mm_remove_node(struct drm_mm_node *node);
 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new);
-void drm_mm_init(struct drm_mm *mm,
-		 u64 start,
-		 u64 size);
+void drm_mm_init(struct drm_mm *mm, u64 start, u64 size);
 void drm_mm_takedown(struct drm_mm *mm);
-bool drm_mm_clean(struct drm_mm *mm);
+bool drm_mm_clean(const struct drm_mm *mm);
 
 struct drm_mm_node *
 __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
@@ -355,9 +354,9 @@ void drm_mm_init_scan_with_range(struct drm_mm *mm,
 bool drm_mm_scan_add_block(struct drm_mm_node *node);
 bool drm_mm_scan_remove_block(struct drm_mm_node *node);
 
-void drm_mm_debug_table(struct drm_mm *mm, const char *prefix);
+void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix);
 #ifdef CONFIG_DEBUG_FS
-int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm);
+int drm_mm_dump_table(struct seq_file *m, const struct drm_mm *mm);
 #endif
 
 #endif
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 23/34] drm: Simplify drm_mm_clean()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (21 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 22/34] drm: Constify the drm_mm API Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 12:30   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 24/34] drm: Compile time enabling for asserts in drm_mm Chris Wilson
                   ` (11 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

To test whether there are any nodes allocated within the range manager,
we merely have to ask whether the node_list is empty.
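
For illustration only (not part of this patch), a hypothetical caller-side
use; with the helper now inline it reduces to a simple list_empty() test:

	/* Hypothetical teardown path: only walk the nodes if any remain. */
	if (!drm_mm_clean(mm))
		example_evict_all(mm);	/* invented helper */
	drm_mm_takedown(mm);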

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c | 19 +------------------
 include/drm/drm_mm.h     | 14 +++++++++++++-
 2 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 381b4cf75da5..7479b908dd08 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -871,22 +871,6 @@ bool drm_mm_scan_remove_block(struct drm_mm_node *node)
 EXPORT_SYMBOL(drm_mm_scan_remove_block);
 
 /**
- * drm_mm_clean - checks whether an allocator is clean
- * @mm: drm_mm allocator to check
- *
- * Returns:
- * True if the allocator is completely free, false if there's still a node
- * allocated in it.
- */
-bool drm_mm_clean(const struct drm_mm *mm)
-{
-	const struct list_head *head = __drm_mm_nodes(mm);
-
-	return (head->next->next == head);
-}
-EXPORT_SYMBOL(drm_mm_clean);
-
-/**
  * drm_mm_init - initialize a drm-mm allocator
  * @mm: the drm_mm structure to initialize
  * @start: start of the range managed by @mm
@@ -926,10 +910,9 @@ EXPORT_SYMBOL(drm_mm_init);
  */
 void drm_mm_takedown(struct drm_mm *mm)
 {
-	if (WARN(!list_empty(__drm_mm_nodes(mm)),
+	if (WARN(!drm_mm_clean(mm),
 		 "Memory manager not clean during takedown.\n"))
 		show_leaks(mm);
-
 }
 EXPORT_SYMBOL(drm_mm_takedown);
 
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 66140af0b5c9..4e65a69c3dc4 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -317,7 +317,19 @@ void drm_mm_remove_node(struct drm_mm_node *node);
 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new);
 void drm_mm_init(struct drm_mm *mm, u64 start, u64 size);
 void drm_mm_takedown(struct drm_mm *mm);
-bool drm_mm_clean(const struct drm_mm *mm);
+
+/**
+ * drm_mm_clean - checks whether an allocator is clean
+ * @mm: drm_mm allocator to check
+ *
+ * Returns:
+ * True if the allocator is completely free, false if there's still a node
+ * allocated in it.
+ */
+static inline bool drm_mm_clean(const struct drm_mm *mm)
+{
+	return list_empty(__drm_mm_nodes(mm));
+}
 
 struct drm_mm_node *
 __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 24/34] drm: Compile time enabling for asserts in drm_mm
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (22 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 23/34] drm: Simplify drm_mm_clean() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 15:04   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 25/34] drm: Extract struct drm_mm_scan from struct drm_mm Chris Wilson
                   ` (10 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Use CONFIG_DRM_DEBUG_MM to conditionally enable the internal consistency
and validation checks using BUG_ON. Ideally these paths should all be
exercised by CI selftests (with the asserts enabled).
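
A minimal sketch of the intended use (the DRM_MM_BUG_ON() macro itself is
added in the diff below; the helper function here is invented): with
CONFIG_DRM_DEBUG_MM=n the check generates no code, but the expression is
still type-checked via BUILD_BUG_ON_INVALID().

	/* Hypothetical driver-side assertion: fatal only in debug builds. */
	static void example_unbind(struct drm_mm_node *node)
	{
		DRM_MM_BUG_ON(!drm_mm_node_allocated(node));
		drm_mm_remove_node(node);
	}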

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c | 45 +++++++++++++++++++++++----------------------
 include/drm/drm_mm.h     |  8 +++++++-
 2 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 7479b908dd08..38b680b68e47 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -237,7 +237,7 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 	u64 adj_start = hole_start;
 	u64 adj_end = hole_end;
 
-	BUG_ON(node->allocated);
+	DRM_MM_BUG_ON(node->allocated);
 
 	if (mm->color_adjust)
 		mm->color_adjust(hole_node, color, &adj_start, &adj_end);
@@ -257,8 +257,8 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 		}
 	}
 
-	BUG_ON(adj_start < hole_start);
-	BUG_ON(adj_end > hole_end);
+	DRM_MM_BUG_ON(adj_start < hole_start);
+	DRM_MM_BUG_ON(adj_end > hole_end);
 
 	if (adj_start == hole_start) {
 		hole_node->hole_follows = 0;
@@ -275,7 +275,7 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 
 	drm_mm_interval_tree_add_node(hole_node, node);
 
-	BUG_ON(node->start + node->size > adj_end);
+	DRM_MM_BUG_ON(node->start + node->size > adj_end);
 
 	node->hole_follows = 0;
 	if (__drm_mm_hole_node_start(node) < hole_end) {
@@ -408,7 +408,7 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
 	u64 adj_start = hole_start;
 	u64 adj_end = hole_end;
 
-	BUG_ON(!hole_node->hole_follows || node->allocated);
+	DRM_MM_BUG_ON(!hole_node->hole_follows || node->allocated);
 
 	if (adj_start < start)
 		adj_start = start;
@@ -448,10 +448,10 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
 
 	drm_mm_interval_tree_add_node(hole_node, node);
 
-	BUG_ON(node->start < start);
-	BUG_ON(node->start < adj_start);
-	BUG_ON(node->start + node->size > adj_end);
-	BUG_ON(node->start + node->size > end);
+	DRM_MM_BUG_ON(node->start < start);
+	DRM_MM_BUG_ON(node->start < adj_start);
+	DRM_MM_BUG_ON(node->start + node->size > adj_end);
+	DRM_MM_BUG_ON(node->start + node->size > end);
 
 	node->hole_follows = 0;
 	if (__drm_mm_hole_node_start(node) < hole_end) {
@@ -517,22 +517,21 @@ void drm_mm_remove_node(struct drm_mm_node *node)
 	struct drm_mm *mm = node->mm;
 	struct drm_mm_node *prev_node;
 
-	if (WARN_ON(!node->allocated))
-		return;
-
-	BUG_ON(node->scanned_block || node->scanned_prev_free
-				   || node->scanned_next_free);
+	DRM_MM_BUG_ON(!node->allocated);
+	DRM_MM_BUG_ON(node->scanned_block ||
+		      node->scanned_prev_free ||
+		      node->scanned_next_free);
 
 	prev_node =
 	    list_entry(node->node_list.prev, struct drm_mm_node, node_list);
 
 	if (node->hole_follows) {
-		BUG_ON(__drm_mm_hole_node_start(node) ==
-		       __drm_mm_hole_node_end(node));
+		DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) ==
+			      __drm_mm_hole_node_end(node));
 		list_del(&node->hole_stack);
 	} else
-		BUG_ON(__drm_mm_hole_node_start(node) !=
-		       __drm_mm_hole_node_end(node));
+		DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) !=
+			      __drm_mm_hole_node_end(node));
 
 
 	if (!prev_node->hole_follows) {
@@ -575,7 +574,7 @@ static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
 	u64 adj_end;
 	u64 best_size;
 
-	BUG_ON(mm->scanned_blocks);
+	DRM_MM_BUG_ON(mm->scanned_blocks);
 
 	best = NULL;
 	best_size = ~0UL;
@@ -619,7 +618,7 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
 	u64 adj_end;
 	u64 best_size;
 
-	BUG_ON(mm->scanned_blocks);
+	DRM_MM_BUG_ON(mm->scanned_blocks);
 
 	best = NULL;
 	best_size = ~0UL;
@@ -665,6 +664,8 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
  */
 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
 {
+	DRM_MM_BUG_ON(!old->allocated);
+
 	list_replace(&old->node_list, &new->node_list);
 	list_replace(&old->hole_stack, &new->hole_stack);
 	rb_replace_node(&old->rb, &new->rb, &old->mm->interval_tree);
@@ -795,7 +796,7 @@ bool drm_mm_scan_add_block(struct drm_mm_node *node)
 
 	mm->scanned_blocks++;
 
-	BUG_ON(node->scanned_block);
+	DRM_MM_BUG_ON(node->scanned_block);
 	node->scanned_block = 1;
 
 	prev_node = list_entry(node->node_list.prev, struct drm_mm_node,
@@ -856,7 +857,7 @@ bool drm_mm_scan_remove_block(struct drm_mm_node *node)
 
 	mm->scanned_blocks--;
 
-	BUG_ON(!node->scanned_block);
+	DRM_MM_BUG_ON(!node->scanned_block);
 	node->scanned_block = 0;
 
 	prev_node = list_entry(node->node_list.prev, struct drm_mm_node,
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 4e65a69c3dc4..1fa43c0f5936 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -48,6 +48,12 @@
 #include <linux/stackdepot.h>
 #endif
 
+#ifdef CONFIG_DRM_DEBUG_MM
+#define DRM_MM_BUG_ON(expr) BUG_ON(expr)
+#else
+#define DRM_MM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
+#endif
+
 enum drm_mm_search_flags {
 	DRM_MM_SEARCH_DEFAULT =		0,
 	DRM_MM_SEARCH_BEST =		1 << 0,
@@ -155,7 +161,7 @@ static inline u64 __drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
  */
 static inline u64 drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
 {
-	BUG_ON(!hole_node->hole_follows);
+	DRM_MM_BUG_ON(!hole_node->hole_follows);
 	return __drm_mm_hole_node_start(hole_node);
 }
 
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 25/34] drm: Extract struct drm_mm_scan from struct drm_mm
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (23 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 24/34] drm: Compile time enabling for asserts in drm_mm Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 15:17   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block() Chris Wilson
                   ` (9 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

The scan state occupies a large proportion of struct drm_mm, is rarely
used and only holds temporary state. That makes it suitable for moving
into its own struct and onto the callers' stack.
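
A hedged sketch of the resulting caller pattern (struct example_obj, the
lru list and the link names are invented; the real conversions are in the
diff below):

	struct drm_mm_scan scan;	/* now lives on the caller's stack */
	struct example_obj *obj, *next;	/* hypothetical: embeds a drm_mm_node */
	LIST_HEAD(evict_list);
	bool found = false;

	drm_mm_scan_init(&scan, &mm, size, alignment, color);
	list_for_each_entry(obj, &lru, lru_link) {
		list_add(&obj->evict_link, &evict_list);
		if (drm_mm_scan_add_block(&scan, &obj->node)) {
			found = true;
			break;
		}
	}

	/* Put the nodes back; remove_block() reports which of them overlap
	 * the hole that was found and so must actually be evicted. */
	list_for_each_entry_safe(obj, next, &evict_list, evict_link)
		if (!drm_mm_scan_remove_block(&scan, &obj->node))
			list_del(&obj->evict_link);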

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c                | 128 ++++++++++++++++++--------------
 drivers/gpu/drm/etnaviv/etnaviv_mmu.c   |   7 +-
 drivers/gpu/drm/i915/i915_gem_evict.c   |  20 +++--
 drivers/gpu/drm/selftests/test-drm_mm.c |  85 +++++++++++----------
 include/drm/drm_mm.h                    |  43 +++++++----
 5 files changed, 166 insertions(+), 117 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 38b680b68e47..2a636e838bad 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -574,7 +574,7 @@ static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
 	u64 adj_end;
 	u64 best_size;
 
-	DRM_MM_BUG_ON(mm->scanned_blocks);
+	DRM_MM_BUG_ON(mm->scan_active);
 
 	best = NULL;
 	best_size = ~0UL;
@@ -618,7 +618,7 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
 	u64 adj_end;
 	u64 best_size;
 
-	DRM_MM_BUG_ON(mm->scanned_blocks);
+	DRM_MM_BUG_ON(mm->scan_active);
 
 	best = NULL;
 	best_size = ~0UL;
@@ -693,7 +693,7 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  *
  * The DRM range allocator supports this use-case through the scanning
  * interfaces. First a scan operation needs to be initialized with
- * drm_mm_init_scan() or drm_mm_init_scan_with_range(). The the driver adds
+ * drm_mm_scan_init() or drm_mm_scan_init_with_range(). Then the driver adds
  * objects to the roaster (probably by walking an LRU list, but this can be
  * freely implemented) until a suitable hole is found or there's no further
  * evitable object.
@@ -710,7 +710,8 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  */
 
 /**
- * drm_mm_init_scan - initialize lru scanning
+ * drm_mm_scan_init - initialize lru scanning
+ * @scan: scan state
  * @mm: drm_mm to scan
  * @size: size of the allocation
  * @alignment: alignment of the allocation
@@ -724,24 +725,33 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  * As long as the scan list is non-empty, no other operations than
  * adding/removing nodes to/from the scan list are allowed.
  */
-void drm_mm_init_scan(struct drm_mm *mm,
+void drm_mm_scan_init(struct drm_mm_scan *scan,
+		      struct drm_mm *mm,
 		      u64 size,
 		      u64 alignment,
 		      unsigned long color)
 {
-	mm->scan_color = color;
-	mm->scan_alignment = alignment;
-	mm->scan_size = size;
-	mm->scanned_blocks = 0;
-	mm->scan_hit_start = 0;
-	mm->scan_hit_end = 0;
-	mm->scan_check_range = 0;
-	mm->prev_scanned_node = NULL;
+	DRM_MM_BUG_ON(size == 0);
+	DRM_MM_BUG_ON(mm->scan_active);
+
+	scan->mm = mm;
+
+	scan->color = color;
+	scan->alignment = alignment;
+	scan->size = size;
+
+	scan->check_range = 0;
+
+	scan->hit_start = 0;
+	scan->hit_end = 0;
+
+	scan->prev_scanned_node = NULL;
 }
-EXPORT_SYMBOL(drm_mm_init_scan);
+EXPORT_SYMBOL(drm_mm_scan_init);
 
 /**
- * drm_mm_init_scan - initialize range-restricted lru scanning
+ * drm_mm_scan_init_with_range - initialize range-restricted lru scanning
+ * @scan: scan state
  * @mm: drm_mm to scan
  * @size: size of the allocation
  * @alignment: alignment of the allocation
@@ -757,25 +767,34 @@ EXPORT_SYMBOL(drm_mm_init_scan);
  * As long as the scan list is non-empty, no other operations than
  * adding/removing nodes to/from the scan list are allowed.
  */
-void drm_mm_init_scan_with_range(struct drm_mm *mm,
+void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
+				 struct drm_mm *mm,
 				 u64 size,
 				 u64 alignment,
 				 unsigned long color,
 				 u64 start,
 				 u64 end)
 {
-	mm->scan_color = color;
-	mm->scan_alignment = alignment;
-	mm->scan_size = size;
-	mm->scanned_blocks = 0;
-	mm->scan_hit_start = 0;
-	mm->scan_hit_end = 0;
-	mm->scan_start = start;
-	mm->scan_end = end;
-	mm->scan_check_range = 1;
-	mm->prev_scanned_node = NULL;
+	DRM_MM_BUG_ON(size == 0);
+	DRM_MM_BUG_ON(mm->scan_active);
+
+	scan->mm = mm;
+
+	scan->color = color;
+	scan->alignment = alignment;
+	scan->size = size;
+
+	DRM_MM_BUG_ON(end <= start);
+	scan->range_start = start;
+	scan->range_end = end;
+	scan->check_range = 1;
+
+	scan->hit_start = 0;
+	scan->hit_end = 0;
+
+	scan->prev_scanned_node = NULL;
 }
-EXPORT_SYMBOL(drm_mm_init_scan_with_range);
+EXPORT_SYMBOL(drm_mm_scan_init_with_range);
 
 /**
  * drm_mm_scan_add_block - add a node to the scan list
@@ -787,46 +806,46 @@ EXPORT_SYMBOL(drm_mm_init_scan_with_range);
  * Returns:
  * True if a hole has been found, false otherwise.
  */
-bool drm_mm_scan_add_block(struct drm_mm_node *node)
+bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
+			   struct drm_mm_node *node)
 {
-	struct drm_mm *mm = node->mm;
+	struct drm_mm *mm = scan->mm;
 	struct drm_mm_node *prev_node;
 	u64 hole_start, hole_end;
 	u64 adj_start, adj_end;
 
-	mm->scanned_blocks++;
-
+	DRM_MM_BUG_ON(node->mm != mm);
+	DRM_MM_BUG_ON(!node->allocated);
 	DRM_MM_BUG_ON(node->scanned_block);
 	node->scanned_block = 1;
+	mm->scan_active++;
 
-	prev_node = list_entry(node->node_list.prev, struct drm_mm_node,
-			       node_list);
+	prev_node = list_prev_entry(node, node_list);
 
 	node->scanned_preceeds_hole = prev_node->hole_follows;
 	prev_node->hole_follows = 1;
 	list_del(&node->node_list);
 	node->node_list.prev = &prev_node->node_list;
-	node->node_list.next = &mm->prev_scanned_node->node_list;
-	mm->prev_scanned_node = node;
+	node->node_list.next = &scan->prev_scanned_node->node_list;
+	scan->prev_scanned_node = node;
 
 	adj_start = hole_start = drm_mm_hole_node_start(prev_node);
 	adj_end = hole_end = drm_mm_hole_node_end(prev_node);
 
-	if (mm->scan_check_range) {
-		if (adj_start < mm->scan_start)
-			adj_start = mm->scan_start;
-		if (adj_end > mm->scan_end)
-			adj_end = mm->scan_end;
+	if (scan->check_range) {
+		if (adj_start < scan->range_start)
+			adj_start = scan->range_start;
+		if (adj_end > scan->range_end)
+			adj_end = scan->range_end;
 	}
 
 	if (mm->color_adjust)
-		mm->color_adjust(prev_node, mm->scan_color,
-				 &adj_start, &adj_end);
+		mm->color_adjust(prev_node, scan->color, &adj_start, &adj_end);
 
 	if (check_free_hole(adj_start, adj_end,
-			    mm->scan_size, mm->scan_alignment)) {
-		mm->scan_hit_start = hole_start;
-		mm->scan_hit_end = hole_end;
+			    scan->size, scan->alignment)) {
+		scan->hit_start = hole_start;
+		scan->hit_end = hole_end;
 		return true;
 	}
 
@@ -850,24 +869,25 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
  * True if this block should be evicted, false otherwise. Will always
  * return false when no hole has been found.
  */
-bool drm_mm_scan_remove_block(struct drm_mm_node *node)
+bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
+			      struct drm_mm_node *node)
 {
-	struct drm_mm *mm = node->mm;
 	struct drm_mm_node *prev_node;
 
-	mm->scanned_blocks--;
-
+	DRM_MM_BUG_ON(node->mm != scan->mm);
 	DRM_MM_BUG_ON(!node->scanned_block);
 	node->scanned_block = 0;
 
-	prev_node = list_entry(node->node_list.prev, struct drm_mm_node,
-			       node_list);
+	DRM_MM_BUG_ON(!node->mm->scan_active);
+	node->mm->scan_active--;
+
+	prev_node = list_prev_entry(node, node_list);
 
 	prev_node->hole_follows = node->scanned_preceeds_hole;
 	list_add(&node->node_list, &prev_node->node_list);
 
-	 return (drm_mm_hole_node_end(node) > mm->scan_hit_start &&
-		 node->start < mm->scan_hit_end);
+	return (drm_mm_hole_node_end(node) > scan->hit_start &&
+		node->start < scan->hit_end);
 }
 EXPORT_SYMBOL(drm_mm_scan_remove_block);
 
@@ -882,7 +902,7 @@ EXPORT_SYMBOL(drm_mm_scan_remove_block);
 void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)
 {
 	INIT_LIST_HEAD(&mm->hole_stack);
-	mm->scanned_blocks = 0;
+	mm->scan_active = 0;
 
 	/* Clever trick to avoid a special case in the free hole tracking. */
 	INIT_LIST_HEAD(&mm->head_node.node_list);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index 169ac96e8f08..fe1e886dcabb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -113,6 +113,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 
 	while (1) {
 		struct etnaviv_vram_mapping *m, *n;
+		struct drm_mm_scan scan;
 		struct list_head list;
 		bool found;
 
@@ -134,7 +135,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 		}
 
 		/* Try to retire some entries */
-		drm_mm_init_scan(&mmu->mm, size, 0, 0);
+		drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0);
 
 		found = 0;
 		INIT_LIST_HEAD(&list);
@@ -151,7 +152,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 				continue;
 
 			list_add(&free->scan_node, &list);
-			if (drm_mm_scan_add_block(&free->vram_node)) {
+			if (drm_mm_scan_add_block(&scan, &free->vram_node)) {
 				found = true;
 				break;
 			}
@@ -171,7 +172,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 		 * can leave the block pinned.
 		 */
 		list_for_each_entry_safe(m, n, &list, scan_node)
-			if (!drm_mm_scan_remove_block(&m->vram_node))
+			if (!drm_mm_scan_remove_block(&scan, &m->vram_node))
 				list_del_init(&m->scan_node);
 
 		/*
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index fa40100146ea..096038ef0033 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -51,7 +51,10 @@ static bool ggtt_is_idle(struct drm_i915_private *dev_priv)
 }
 
 static bool
-mark_free(struct i915_vma *vma, unsigned int flags, struct list_head *unwind)
+mark_free(struct drm_mm_scan *scan,
+	  struct i915_vma *vma,
+	  unsigned int flags,
+	  struct list_head *unwind)
 {
 	if (i915_vma_is_pinned(vma))
 		return false;
@@ -63,7 +66,7 @@ mark_free(struct i915_vma *vma, unsigned int flags, struct list_head *unwind)
 		return false;
 
 	list_add(&vma->exec_list, unwind);
-	return drm_mm_scan_add_block(&vma->node);
+	return drm_mm_scan_add_block(scan, &vma->node);
 }
 
 /**
@@ -97,6 +100,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 			 unsigned flags)
 {
 	struct drm_i915_private *dev_priv = vm->i915;
+	struct drm_mm_scan scan;
 	struct list_head eviction_list;
 	struct list_head *phases[] = {
 		&vm->inactive_list,
@@ -123,11 +127,12 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 * object on the TAIL.
 	 */
 	if (start != 0 || end != vm->total) {
-		drm_mm_init_scan_with_range(&vm->mm, min_size,
+		drm_mm_scan_init_with_range(&scan, &vm->mm, min_size,
 					    alignment, cache_level,
 					    start, end);
 	} else
-		drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level);
+		drm_mm_scan_init(&scan, &vm->mm, min_size,
+				 alignment, cache_level);
 
 	if (flags & PIN_NONBLOCK)
 		phases[1] = NULL;
@@ -137,13 +142,13 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	phase = phases;
 	do {
 		list_for_each_entry(vma, *phase, vm_link)
-			if (mark_free(vma, flags, &eviction_list))
+			if (mark_free(&scan, vma, flags, &eviction_list))
 				goto found;
 	} while (*++phase);
 
 	/* Nothing found, clean up and bail out! */
 	list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
-		ret = drm_mm_scan_remove_block(&vma->node);
+		ret = drm_mm_scan_remove_block(&scan, &vma->node);
 		BUG_ON(ret);
 
 		INIT_LIST_HEAD(&vma->exec_list);
@@ -192,7 +197,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 * of any of our objects, thus corrupting the list).
 	 */
 	list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
-		if (drm_mm_scan_remove_block(&vma->node))
+		if (drm_mm_scan_remove_block(&scan, &vma->node))
 			__i915_vma_pin(vma);
 		else
 			list_del_init(&vma->exec_list);
@@ -209,6 +214,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 		if (ret == 0)
 			ret = i915_vma_unbind(vma);
 	}
+
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 4758062fee17..1180033f8a65 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -773,11 +773,11 @@ static int igt_align64(void *ignored)
 	return igt_align_pot(64);
 }
 
-static void show_scan(const struct drm_mm *scan)
+static void show_scan(const struct drm_mm_scan *scan)
 {
 	pr_info("scan: hit [%llx, %llx], size=%lld, align=%lld, color=%ld\n",
-		scan->scan_hit_start, scan->scan_hit_end,
-		scan->scan_size, scan->scan_alignment, scan->scan_color);
+		scan->hit_start, scan->hit_end,
+		scan->size, scan->alignment, scan->color);
 }
 
 static void show_holes(const struct drm_mm *mm, int count)
@@ -852,15 +852,16 @@ static int igt_evict(void *ignored)
 	if (1) {
 		LIST_HEAD(evict_list);
 		struct evict_node *e;
+		struct drm_mm_scan scan;
 
-		drm_mm_init_scan(&mm, 1, 0, 0);
+		drm_mm_scan_init(&scan, &mm, 1, 0, 0);
 		for (m = 0; m < size; m++) {
 			e = &nodes[m];
 			list_add(&e->link, &evict_list);
-			drm_mm_scan_add_block(&e->node);
+			drm_mm_scan_add_block(&scan, &e->node);
 		}
 		list_for_each_entry(e, &evict_list, link)
-			drm_mm_scan_remove_block(&e->node);
+			drm_mm_scan_remove_block(&scan, &e->node);
 
 		for (m = 0; m < size; m++) {
 			e = &nodes[m];
@@ -897,19 +898,20 @@ static int igt_evict(void *ignored)
 		const int nsize = size / 2;
 		LIST_HEAD(evict_list);
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int err;
 
-		drm_mm_init_scan(&mm, nsize, n, 0);
+		drm_mm_scan_init(&scan, &mm, nsize, n, 0);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -927,7 +929,7 @@ static int igt_evict(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
 			       nsize, n);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -956,20 +958,21 @@ static int igt_evict(void *ignored)
 	drm_for_each_prime(n, size) {
 		LIST_HEAD(evict_list);
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int nsize = (size - n + 1) / 2;
 		int err;
 
-		drm_mm_init_scan(&mm, nsize, n, 0);
+		drm_mm_scan_init(&scan, &mm, nsize, n, 0);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -987,7 +990,7 @@ static int igt_evict(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime)\n",
 			       nsize, n);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -1068,20 +1071,21 @@ static int igt_evict_range(void *ignored)
 		const int nsize = range_size / 2;
 		LIST_HEAD(evict_list);
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int err;
 
-		drm_mm_init_scan_with_range(&mm, nsize, n, 0,
+		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, 0,
 					    range_start, range_end);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -1100,7 +1104,7 @@ static int igt_evict_range(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
 			       nsize, n);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -1136,21 +1140,22 @@ static int igt_evict_range(void *ignored)
 	drm_for_each_prime(n, range_size) {
 		LIST_HEAD(evict_list);
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int nsize = (range_size - n + 1) / 2;
 		int err;
 
-		drm_mm_init_scan_with_range(&mm, nsize, n, 0,
+		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, 0,
 					    range_start, range_end);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -1169,7 +1174,7 @@ static int igt_evict_range(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime)\n",
 			       nsize, n);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -1631,19 +1636,20 @@ static int igt_color_evict(void *ignored)
 		unsigned int c = color++;
 		LIST_HEAD(evict_list);
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int err;
 
-		drm_mm_init_scan(&mm, nsize, n, c);
+		drm_mm_scan_init(&scan, &mm, nsize, n, c);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -1662,7 +1668,7 @@ static int igt_color_evict(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%d, err=%d\n",
 			       nsize, n, c, err);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -1702,20 +1708,21 @@ static int igt_color_evict(void *ignored)
 		LIST_HEAD(evict_list);
 		unsigned int c = color++;
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int nsize = (total_size - n + 1) / 2;
 		int err;
 
-		drm_mm_init_scan(&mm, nsize, n, c);
+		drm_mm_scan_init(&scan, &mm, nsize, n, c);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -1734,7 +1741,7 @@ static int igt_color_evict(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime), color=%d, err=%d\n",
 			       nsize, n, c, err);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -1832,20 +1839,21 @@ static int igt_color_evict_range(void *ignored)
 		unsigned int c = color++;
 		LIST_HEAD(evict_list);
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int err;
 
-		drm_mm_init_scan_with_range(&mm, nsize, n, c,
+		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, c,
 					    range_start, range_end);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -1865,7 +1873,7 @@ static int igt_color_evict_range(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%d, err=%d\n",
 			       nsize, n, c, err);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
@@ -1905,21 +1913,22 @@ static int igt_color_evict_range(void *ignored)
 		LIST_HEAD(evict_list);
 		unsigned int c = color++;
 		struct evict_node *e, *en;
+		struct drm_mm_scan scan;
 		struct drm_mm_node tmp;
 		int nsize = (range_size - n + 1) / 2;
 		int err;
 
-		drm_mm_init_scan_with_range(&mm, nsize, n, c,
+		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, c,
 					    range_start, range_end);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
 			list_add(&e->link, &evict_list);
-			if (drm_mm_scan_add_block(&e->node))
+			if (drm_mm_scan_add_block(&scan, &e->node))
 				break;
 		}
 		list_for_each_entry_safe(e, en, &evict_list, link) {
-			if (!drm_mm_scan_remove_block(&e->node))
+			if (!drm_mm_scan_remove_block(&scan, &e->node))
 				list_del(&e->link);
 		}
 		if (list_empty(&evict_list)) {
@@ -1939,7 +1948,7 @@ static int igt_color_evict_range(void *ignored)
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime), color=%d, err=%d\n",
 			       nsize, n, c, err);
-			show_scan(&mm);
+			show_scan(&scan);
 			show_holes(&mm, 3);
 			ret = err;
 			goto out;
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 1fa43c0f5936..571370352fe7 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -97,20 +97,29 @@ struct drm_mm {
 	/* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */
 	struct rb_root interval_tree;
 
-	unsigned int scan_check_range : 1;
-	unsigned int scanned_blocks;
-	unsigned long scan_color;
-	u64 scan_alignment;
-	u64 scan_size;
-	u64 scan_hit_start;
-	u64 scan_hit_end;
-	u64 scan_start;
-	u64 scan_end;
-	struct drm_mm_node *prev_scanned_node;
-
 	void (*color_adjust)(const struct drm_mm_node *node,
 			     unsigned long color,
 			     u64 *start, u64 *end);
+
+	unsigned long scan_active;
+};
+
+struct drm_mm_scan {
+	struct drm_mm *mm;
+
+	u64 size;
+	u64 alignment;
+
+	u64 range_start;
+	u64 range_end;
+
+	u64 hit_start;
+	u64 hit_end;
+
+	struct drm_mm_node *prev_scanned_node;
+
+	unsigned long color;
+	bool check_range : 1;
 };
 
 /**
@@ -359,18 +368,22 @@ __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
 	     node__ && node__->start < (end__);				\
 	     node__ = list_next_entry(node__, node_list))
 
-void drm_mm_init_scan(struct drm_mm *mm,
+void drm_mm_scan_init(struct drm_mm_scan *scan,
+		      struct drm_mm *mm,
 		      u64 size,
 		      u64 alignment,
 		      unsigned long color);
-void drm_mm_init_scan_with_range(struct drm_mm *mm,
+void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
+				 struct drm_mm *mm,
 				 u64 size,
 				 u64 alignment,
 				 unsigned long color,
 				 u64 start,
 				 u64 end);
-bool drm_mm_scan_add_block(struct drm_mm_node *node);
-bool drm_mm_scan_remove_block(struct drm_mm_node *node);
+bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
+			   struct drm_mm_node *node);
+bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
+			      struct drm_mm_node *node);
 
 void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix);
 #ifdef CONFIG_DEBUG_FS
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (24 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 25/34] drm: Extract struct drm_mm_scan from struct drm_mm Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 15:23   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 27/34] drm: Unconditionally do the range check " Chris Wilson
                   ` (8 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

When reading the code, I found it more useful to acknowledge that we are
building up the hole than to know the relationship between this node and
the previous node.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 2a636e838bad..684e610f404a 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -810,7 +810,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 			   struct drm_mm_node *node)
 {
 	struct drm_mm *mm = scan->mm;
-	struct drm_mm_node *prev_node;
+	struct drm_mm_node *hole;
 	u64 hole_start, hole_end;
 	u64 adj_start, adj_end;
 
@@ -820,17 +820,17 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	node->scanned_block = 1;
 	mm->scan_active++;
 
-	prev_node = list_prev_entry(node, node_list);
+	hole = list_prev_entry(node, node_list);
 
-	node->scanned_preceeds_hole = prev_node->hole_follows;
-	prev_node->hole_follows = 1;
+	node->scanned_preceeds_hole = hole->hole_follows;
+	hole->hole_follows = 1;
 	list_del(&node->node_list);
-	node->node_list.prev = &prev_node->node_list;
+	node->node_list.prev = &hole->node_list;
 	node->node_list.next = &scan->prev_scanned_node->node_list;
 	scan->prev_scanned_node = node;
 
-	adj_start = hole_start = drm_mm_hole_node_start(prev_node);
-	adj_end = hole_end = drm_mm_hole_node_end(prev_node);
+	adj_start = hole_start = drm_mm_hole_node_start(hole);
+	adj_end = hole_end = drm_mm_hole_node_end(hole);
 
 	if (scan->check_range) {
 		if (adj_start < scan->range_start)
@@ -840,7 +840,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	}
 
 	if (mm->color_adjust)
-		mm->color_adjust(prev_node, scan->color, &adj_start, &adj_end);
+		mm->color_adjust(hole, scan->color, &adj_start, &adj_end);
 
 	if (check_free_hole(adj_start, adj_end,
 			    scan->size, scan->alignment)) {
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 27/34] drm: Unconditionally do the range check in drm_mm_scan_add_block()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (25 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 15:28   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 28/34] drm: Fix application of color vs range restriction when scanning drm_mm Chris Wilson
                   ` (7 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

Doing the check is trivial (low cost in comparison to overall eviction)
and helps simplify the code.
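
In other words (sketch of the resulting per-block check; for the default
drm_mm_scan_init() range of [0, U64_MAX] both clamps degenerate to no-ops):

	adj_start = max(hole_start, scan->range_start);	/* max(x, 0) == x */
	adj_end = min(hole_end, scan->range_end);	/* min(x, U64_MAX) == x */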

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c              | 53 +++--------------------------------
 drivers/gpu/drm/i915/i915_gem_evict.c | 10 ++-----
 include/drm/drm_mm.h                  | 33 ++++++++++++++++++----
 3 files changed, 34 insertions(+), 62 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 684e610f404a..6f7850c0e400 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -710,46 +710,6 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  */
 
 /**
- * drm_mm_scan_init - initialize lru scanning
- * @scan: scan state
- * @mm: drm_mm to scan
- * @size: size of the allocation
- * @alignment: alignment of the allocation
- * @color: opaque tag value to use for the allocation
- *
- * This simply sets up the scanning routines with the parameters for the desired
- * hole. Note that there's no need to specify allocation flags, since they only
- * change the place a node is allocated from within a suitable hole.
- *
- * Warning:
- * As long as the scan list is non-empty, no other operations than
- * adding/removing nodes to/from the scan list are allowed.
- */
-void drm_mm_scan_init(struct drm_mm_scan *scan,
-		      struct drm_mm *mm,
-		      u64 size,
-		      u64 alignment,
-		      unsigned long color)
-{
-	DRM_MM_BUG_ON(size == 0);
-	DRM_MM_BUG_ON(mm->scan_active);
-
-	scan->mm = mm;
-
-	scan->color = color;
-	scan->alignment = alignment;
-	scan->size = size;
-
-	scan->check_range = 0;
-
-	scan->hit_start = 0;
-	scan->hit_end = 0;
-
-	scan->prev_scanned_node = NULL;
-}
-EXPORT_SYMBOL(drm_mm_scan_init);
-
-/**
  * drm_mm_scan_init_with_range - initialize range-restricted lru scanning
  * @scan: scan state
  * @mm: drm_mm to scan
@@ -787,7 +747,6 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 	DRM_MM_BUG_ON(end <= start);
 	scan->range_start = start;
 	scan->range_end = end;
-	scan->check_range = 1;
 
 	scan->hit_start = 0;
 	scan->hit_end = 0;
@@ -829,15 +788,11 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	node->node_list.next = &scan->prev_scanned_node->node_list;
 	scan->prev_scanned_node = node;
 
-	adj_start = hole_start = drm_mm_hole_node_start(hole);
-	adj_end = hole_end = drm_mm_hole_node_end(hole);
+	hole_start = drm_mm_hole_node_start(hole);
+	hole_end = drm_mm_hole_node_end(hole);
 
-	if (scan->check_range) {
-		if (adj_start < scan->range_start)
-			adj_start = scan->range_start;
-		if (adj_end > scan->range_end)
-			adj_end = scan->range_end;
-	}
+	adj_start = max(hole_start, scan->range_start);
+	adj_end = min(hole_end, scan->range_end);
 
 	if (mm->color_adjust)
 		mm->color_adjust(hole, scan->color, &adj_start, &adj_end);
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 096038ef0033..5415f888c63f 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -126,13 +126,9 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 * On each list, the oldest objects lie at the HEAD with the freshest
 	 * object on the TAIL.
 	 */
-	if (start != 0 || end != vm->total) {
-		drm_mm_scan_init_with_range(&scan, &vm->mm, min_size,
-					    alignment, cache_level,
-					    start, end);
-	} else
-		drm_mm_scan_init(&scan, &vm->mm, min_size,
-				 alignment, cache_level);
+	drm_mm_scan_init_with_range(&scan, &vm->mm,
+				    min_size, alignment, cache_level,
+				    start, end);
 
 	if (flags & PIN_NONBLOCK)
 		phases[1] = NULL;
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 571370352fe7..22d00515fc57 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -119,7 +119,6 @@ struct drm_mm_scan {
 	struct drm_mm_node *prev_scanned_node;
 
 	unsigned long color;
-	bool check_range : 1;
 };
 
 /**
@@ -368,11 +367,6 @@ __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
 	     node__ && node__->start < (end__);				\
 	     node__ = list_next_entry(node__, node_list))
 
-void drm_mm_scan_init(struct drm_mm_scan *scan,
-		      struct drm_mm *mm,
-		      u64 size,
-		      u64 alignment,
-		      unsigned long color);
 void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 				 struct drm_mm *mm,
 				 u64 size,
@@ -380,6 +374,33 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 				 unsigned long color,
 				 u64 start,
 				 u64 end);
+
+/**
+ * drm_mm_scan_init - initialize lru scanning
+ * @scan: scan state
+ * @mm: drm_mm to scan
+ * @size: size of the allocation
+ * @alignment: alignment of the allocation
+ * @color: opaque tag value to use for the allocation
+ *
+ * This simply sets up the scanning routines with the parameters for the desired
+ * hole. Note that there's no need to specify allocation flags, since they only
+ * change the place a node is allocated from within a suitable hole.
+ *
+ * Warning:
+ * As long as the scan list is non-empty, no other operations than
+ * adding/removing nodes to/from the scan list are allowed.
+ */
+static inline void drm_mm_scan_init(struct drm_mm_scan *scan,
+				    struct drm_mm *mm,
+				    u64 size,
+				    u64 alignment,
+				    unsigned long color)
+{
+	drm_mm_scan_init_with_range(scan, mm, size, alignment, color,
+				    0, U64_MAX);
+}
+
 bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 			   struct drm_mm_node *node);
 bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 28/34] drm: Fix application of color vs range restriction when scanning drm_mm
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (26 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 27/34] drm: Unconditionally do the range check " Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-12 14:57   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 29/34] drm: Compute tight evictions for drm_mm_scan Chris Wilson
                   ` (6 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx

The range restriction should be applied after the color adjustment;
otherwise we hand the artificially restricted hole to color_adjust() and
the adjustment is no longer made against the hole's real neighbours.
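
As a worked illustration with invented numbers: take a scan range of
[0, 0x1000], a coalesced hole of [0, 0x2000] and a color_adjust() that pads
0x100 off any end of the hole that touches a differently-coloured
neighbour. Adjusting the real hole first gives [0x100, 0x1f00], which the
range then clamps to [0x100, 0x1000]. Clamping first would instead present
color_adjust() with the artificial hole [0, 0x1000], so the padding would
be applied at the range boundary, where there is no neighbour, rather than
at the hole's true edges.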

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 6f7850c0e400..a1467a43eda2 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -771,6 +771,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	struct drm_mm *mm = scan->mm;
 	struct drm_mm_node *hole;
 	u64 hole_start, hole_end;
+	u64 col_start, col_end;
 	u64 adj_start, adj_end;
 
 	DRM_MM_BUG_ON(node->mm != mm);
@@ -788,14 +789,16 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	node->node_list.next = &scan->prev_scanned_node->node_list;
 	scan->prev_scanned_node = node;
 
-	hole_start = drm_mm_hole_node_start(hole);
-	hole_end = drm_mm_hole_node_end(hole);
-
-	adj_start = max(hole_start, scan->range_start);
-	adj_end = min(hole_end, scan->range_end);
+	hole_start = __drm_mm_hole_node_start(hole);
+	hole_end = __drm_mm_hole_node_end(hole);
 
+	col_start = hole_start;
+	col_end = hole_end;
 	if (mm->color_adjust)
-		mm->color_adjust(hole, scan->color, &adj_start, &adj_end);
+		mm->color_adjust(hole, scan->color, &col_start, &col_end);
+
+	adj_start = max(col_start, scan->range_start);
+	adj_end = min(col_end, scan->range_end);
 
 	if (check_free_hole(adj_start, adj_end,
 			    scan->size, scan->alignment)) {
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 29/34] drm: Compute tight evictions for drm_mm_scan
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (27 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 28/34] drm: Fix application of color vs range restriction when scanning drm_mm Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 15:48   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 30/34] drm: Optimise power-of-two alignments in drm_mm_scan_add_block() Chris Wilson
                   ` (5 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Compute the minimal required hole during the scan and mark for eviction
only those nodes that overlap it. This reduces the number of nodes we need
to evict to the bare minimum.
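
For example (hypothetical layout): if the scan coalesces a 64K hole while
looking for a 16K allocation, scan->hit_start/hit_end now describe just the
16K window (adjusted for alignment) rather than the whole 64K hole, so
drm_mm_scan_remove_block() flags only the node(s) overlapping that window
for eviction and the remainder are simply reinserted. (A color_adjust
callback still forces the conservative full-hole hit, as noted in the code
below.)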

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c                | 60 +++++++++++++++++++++++++++------
 drivers/gpu/drm/etnaviv/etnaviv_mmu.c   |  2 +-
 drivers/gpu/drm/i915/i915_gem_evict.c   |  3 +-
 drivers/gpu/drm/selftests/test-drm_mm.c | 22 +++++++-----
 include/drm/drm_mm.h                    | 22 ++++++------
 5 files changed, 78 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index a1467a43eda2..eec0a46f5b38 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -718,10 +718,10 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  * @color: opaque tag value to use for the allocation
  * @start: start of the allowed range for the allocation
  * @end: end of the allowed range for the allocation
+ * @flags: flags to specify how the allocation will be performed afterwards
  *
  * This simply sets up the scanning routines with the parameters for the desired
- * hole. Note that there's no need to specify allocation flags, since they only
- * change the place a node is allocated from within a suitable hole.
+ * hole.
  *
  * Warning:
  * As long as the scan list is non-empty, no other operations than
@@ -733,7 +733,8 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 				 u64 alignment,
 				 unsigned long color,
 				 u64 start,
-				 u64 end)
+				 u64 end,
+				 unsigned int flags)
 {
 	DRM_MM_BUG_ON(size == 0);
 	DRM_MM_BUG_ON(mm->scan_active);
@@ -743,6 +744,7 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 	scan->color = color;
 	scan->alignment = alignment;
 	scan->size = size;
+	scan->flags = flags;
 
 	DRM_MM_BUG_ON(end <= start);
 	scan->range_start = start;
@@ -777,7 +779,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	DRM_MM_BUG_ON(node->mm != mm);
 	DRM_MM_BUG_ON(!node->allocated);
 	DRM_MM_BUG_ON(node->scanned_block);
-	node->scanned_block = 1;
+	node->scanned_block = true;
 	mm->scan_active++;
 
 	hole = list_prev_entry(node, node_list);
@@ -799,15 +801,53 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 
 	adj_start = max(col_start, scan->range_start);
 	adj_end = min(col_end, scan->range_end);
+	if (adj_end <= adj_start || adj_end - adj_start < scan->size)
+		return false;
+
+	if (scan->flags == DRM_MM_CREATE_TOP)
+		adj_start = adj_end - scan->size;
+
+	if (scan->alignment) {
+		u64 rem;
+
+		div64_u64_rem(adj_start, scan->alignment, &rem);
+		if (rem) {
+			adj_start -= rem;
+			if (scan->flags != DRM_MM_CREATE_TOP)
+				adj_start += scan->alignment;
+			if (adj_start < max(col_start, scan->range_start) ||
+			    max(col_end, scan->range_end) - adj_start < scan->size)
+				return false;
+
+			if (adj_end <= adj_start ||
+			    adj_end - adj_start < scan->size)
+				return false;
+		}
+	}
 
-	if (check_free_hole(adj_start, adj_end,
-			    scan->size, scan->alignment)) {
+	if (mm->color_adjust) {
+		/* If allocations need adjusting due to neighbouring colours,
+		 * we do not have enough information to decide if we need
+		 * to evict nodes on either side of [adj_start, adj_end].
+		 * What almost works is
+		 * hit_start = adj_start + (hole_start - col_start);
+		 * hit_end = adj_start + scan->size + (hole_end - col_end);
+		 * but because the decision is only made on the final hole,
+		 * we may underestimate the required adjustments for an
+		 * interior allocation.
+		 */
 		scan->hit_start = hole_start;
 		scan->hit_end = hole_end;
-		return true;
+	} else {
+		scan->hit_start = adj_start;
+		scan->hit_end = adj_start + scan->size;
 	}
 
-	return false;
+	DRM_MM_BUG_ON(scan->hit_start >= scan->hit_end);
+	DRM_MM_BUG_ON(scan->hit_start < hole_start);
+	DRM_MM_BUG_ON(scan->hit_end > hole_end);
+
+	return true;
 }
 EXPORT_SYMBOL(drm_mm_scan_add_block);
 
@@ -834,7 +874,7 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
 
 	DRM_MM_BUG_ON(node->mm != scan->mm);
 	DRM_MM_BUG_ON(!node->scanned_block);
-	node->scanned_block = 0;
+	node->scanned_block = false;
 
 	DRM_MM_BUG_ON(!node->mm->scan_active);
 	node->mm->scan_active--;
@@ -844,7 +884,7 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
 	prev_node->hole_follows = node->scanned_preceeds_hole;
 	list_add(&node->node_list, &prev_node->node_list);
 
-	return (drm_mm_hole_node_end(node) > scan->hit_start &&
+	return (node->start + node->size > scan->hit_start &&
 		node->start < scan->hit_end);
 }
 EXPORT_SYMBOL(drm_mm_scan_remove_block);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index fe1e886dcabb..2dae3169ce48 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -135,7 +135,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 		}
 
 		/* Try to retire some entries */
-		drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0);
+		drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0, 0);
 
 		found = 0;
 		INIT_LIST_HEAD(&list);
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 5415f888c63f..89da9225043f 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -128,7 +128,8 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 */
 	drm_mm_scan_init_with_range(&scan, &vm->mm,
 				    min_size, alignment, cache_level,
-				    start, end);
+				    start, end,
+				    flags & PIN_HIGH ? DRM_MM_CREATE_TOP : 0);
 
 	if (flags & PIN_NONBLOCK)
 		phases[1] = NULL;
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 1180033f8a65..09ead31a094d 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -854,7 +854,7 @@ static int igt_evict(void *ignored)
 		struct evict_node *e;
 		struct drm_mm_scan scan;
 
-		drm_mm_scan_init(&scan, &mm, 1, 0, 0);
+		drm_mm_scan_init(&scan, &mm, 1, 0, 0, 0);
 		for (m = 0; m < size; m++) {
 			e = &nodes[m];
 			list_add(&e->link, &evict_list);
@@ -902,7 +902,7 @@ static int igt_evict(void *ignored)
 		struct drm_mm_node tmp;
 		int err;
 
-		drm_mm_scan_init(&scan, &mm, nsize, n, 0);
+		drm_mm_scan_init(&scan, &mm, nsize, n, 0, 0);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
@@ -963,7 +963,7 @@ static int igt_evict(void *ignored)
 		int nsize = (size - n + 1) / 2;
 		int err;
 
-		drm_mm_scan_init(&scan, &mm, nsize, n, 0);
+		drm_mm_scan_init(&scan, &mm, nsize, n, 0, 0);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
@@ -1076,7 +1076,8 @@ static int igt_evict_range(void *ignored)
 		int err;
 
 		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, 0,
-					    range_start, range_end);
+					    range_start, range_end,
+					    0);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
@@ -1146,7 +1147,8 @@ static int igt_evict_range(void *ignored)
 		int err;
 
 		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, 0,
-					    range_start, range_end);
+					    range_start, range_end,
+					    0);
 		drm_random_reorder(order, size, &lcg_state);
 		for (m = 0; m < size; m++) {
 			e = &nodes[order[m]];
@@ -1640,7 +1642,7 @@ static int igt_color_evict(void *ignored)
 		struct drm_mm_node tmp;
 		int err;
 
-		drm_mm_scan_init(&scan, &mm, nsize, n, c);
+		drm_mm_scan_init(&scan, &mm, nsize, n, c, 0);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
@@ -1713,7 +1715,7 @@ static int igt_color_evict(void *ignored)
 		int nsize = (total_size - n + 1) / 2;
 		int err;
 
-		drm_mm_scan_init(&scan, &mm, nsize, n, c);
+		drm_mm_scan_init(&scan, &mm, nsize, n, c, 0);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
@@ -1844,7 +1846,8 @@ static int igt_color_evict_range(void *ignored)
 		int err;
 
 		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, c,
-					    range_start, range_end);
+					    range_start, range_end,
+					    0);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
@@ -1919,7 +1922,8 @@ static int igt_color_evict_range(void *ignored)
 		int err;
 
 		drm_mm_scan_init_with_range(&scan, &mm, nsize, n, c,
-					    range_start, range_end);
+					    range_start, range_end,
+					    0);
 		drm_random_reorder(order, total_size, &lcg_state);
 		for (m = 0; m < total_size; m++) {
 			e = &nodes[order[m]];
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 22d00515fc57..5e8350b91fcf 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -119,6 +119,7 @@ struct drm_mm_scan {
 	struct drm_mm_node *prev_scanned_node;
 
 	unsigned long color;
+	unsigned int flags;
 };
 
 /**
@@ -369,11 +370,9 @@ __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
 
 void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 				 struct drm_mm *mm,
-				 u64 size,
-				 u64 alignment,
-				 unsigned long color,
-				 u64 start,
-				 u64 end);
+				 u64 size, u64 alignment, unsigned long color,
+				 u64 start, u64 end,
+				 unsigned int flags);
 
 /**
  * drm_mm_scan_init - initialize lru scanning
@@ -382,10 +381,10 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
  * @size: size of the allocation
  * @alignment: alignment of the allocation
  * @color: opaque tag value to use for the allocation
+ * @flags: flags to specify how the allocation will be performed afterwards
  *
  * This simply sets up the scanning routines with the parameters for the desired
- * hole. Note that there's no need to specify allocation flags, since they only
- * change the place a node is allocated from within a suitable hole.
+ * hole.
  *
  * Warning:
  * As long as the scan list is non-empty, no other operations than
@@ -395,10 +394,13 @@ static inline void drm_mm_scan_init(struct drm_mm_scan *scan,
 				    struct drm_mm *mm,
 				    u64 size,
 				    u64 alignment,
-				    unsigned long color)
+				    unsigned long color,
+				    unsigned int flags)
 {
-	drm_mm_scan_init_with_range(scan, mm, size, alignment, color,
-				    0, U64_MAX);
+	drm_mm_scan_init_with_range(scan, mm,
+				    size, alignment, color,
+				    0, U64_MAX,
+				    flags);
 }
 
 bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
-- 
2.11.0


* [PATCH 30/34] drm: Optimise power-of-two alignments in drm_mm_scan_add_block()
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (28 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 29/34] drm: Compute tight evictions for drm_mm_scan Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 15:55   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 31/34] drm: Simplify drm_mm scan-list manipulation Chris Wilson
                   ` (4 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

For power-of-two alignments, we can avoid the 64-bit divide and compute
the remainder with a simple bitwise AND of a precomputed mask instead.
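
As a minimal sketch of the equivalence (alignment here is assumed to be a
non-zero power of two, so alignment - 1 is a valid mask):

	u64 rem;

	/* Generic path: 64-bit division to find the misalignment. */
	div64_u64_rem(adj_start, alignment, &rem);

	/* Power-of-two path: the bits below the alignment are exactly the
	 * remainder, so a mask precomputed at scan init time suffices. */
	if (is_power_of_2(alignment))
		rem = adj_start & (alignment - 1);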

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c | 9 ++++++++-
 include/drm/drm_mm.h     | 1 +
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index eec0a46f5b38..7245483f1111 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -741,8 +741,12 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 
 	scan->mm = mm;
 
+	if (alignment <= 1)
+		alignment = 0;
+
 	scan->color = color;
 	scan->alignment = alignment;
+	scan->alignment_mask = is_power_of_2(alignment) ? alignment - 1 : 0;
 	scan->size = size;
 	scan->flags = flags;
 
@@ -810,7 +814,10 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	if (scan->alignment) {
 		u64 rem;
 
-		div64_u64_rem(adj_start, scan->alignment, &rem);
+		if (scan->alignment_mask)
+			rem = adj_start & scan->alignment_mask;
+		else
+			div64_u64_rem(adj_start, scan->alignment, &rem);
 		if (rem) {
 			adj_start -= rem;
 			if (scan->flags != DRM_MM_CREATE_TOP)
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 5e8350b91fcf..90d607e31301 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -109,6 +109,7 @@ struct drm_mm_scan {
 
 	u64 size;
 	u64 alignment;
+	u64 alignment_mask;
 
 	u64 range_start;
 	u64 range_end;
-- 
2.11.0


* [PATCH 31/34] drm: Simplify drm_mm scan-list manipulation
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (29 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 30/34] drm: Optimise power-of-two alignments in drm_mm_scan_add_block() Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14  8:27   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 32/34] drm: Apply tight eviction scanning to color_adjust Chris Wilson
                   ` (3 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Since we mandate that drm_mm_scan_remove_block() is called in the strict
reverse order of drm_mm_scan_add_block(), we can further simplify the
list manipulations when generating the temporary scan-hole.
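
Roughly, the simplified bookkeeping relies only on the node's stale list
pointers surviving between the paired calls (a sketch of the idea, not
the full functions):

	/* drm_mm_scan_add_block(): unlink the node so that the preceding
	 * hole appears to span the node's range; the node keeps its stale
	 * prev/next pointers for the matching remove. */
	hole = list_prev_entry(node, node_list);
	__list_del_entry(&node->node_list);

	/* drm_mm_scan_remove_block(): because removals come in strict
	 * reverse order, the stale prev pointer still names the right
	 * neighbour and list_add() restores the original position. */
	prev_node = list_prev_entry(node, node_list);
	list_add(&node->node_list, &prev_node->node_list);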

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c | 22 +++++-----------------
 include/drm/drm_mm.h     |  7 +------
 2 files changed, 6 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 7245483f1111..ba1a244fe6e5 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -518,9 +518,7 @@ void drm_mm_remove_node(struct drm_mm_node *node)
 	struct drm_mm_node *prev_node;
 
 	DRM_MM_BUG_ON(!node->allocated);
-	DRM_MM_BUG_ON(node->scanned_block ||
-		      node->scanned_prev_free ||
-		      node->scanned_next_free);
+	DRM_MM_BUG_ON(node->scanned_block);
 
 	prev_node =
 	    list_entry(node->node_list.prev, struct drm_mm_node, node_list);
@@ -756,8 +754,6 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
 
 	scan->hit_start = 0;
 	scan->hit_end = 0;
-
-	scan->prev_scanned_node = NULL;
 }
 EXPORT_SYMBOL(drm_mm_scan_init_with_range);
 
@@ -787,13 +783,8 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	mm->scan_active++;
 
 	hole = list_prev_entry(node, node_list);
-
-	node->scanned_preceeds_hole = hole->hole_follows;
-	hole->hole_follows = 1;
-	list_del(&node->node_list);
-	node->node_list.prev = &hole->node_list;
-	node->node_list.next = &scan->prev_scanned_node->node_list;
-	scan->prev_scanned_node = node;
+	DRM_MM_BUG_ON(list_next_entry(hole, node_list) != node);
+	__list_del_entry(&node->node_list);
 
 	hole_start = __drm_mm_hole_node_start(hole);
 	hole_end = __drm_mm_hole_node_end(hole);
@@ -887,8 +878,8 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
 	node->mm->scan_active--;
 
 	prev_node = list_prev_entry(node, node_list);
-
-	prev_node->hole_follows = node->scanned_preceeds_hole;
+	DRM_MM_BUG_ON(list_next_entry(prev_node, node_list) !=
+		      list_next_entry(node, node_list));
 	list_add(&node->node_list, &prev_node->node_list);
 
 	return (node->start + node->size > scan->hit_start &&
@@ -913,9 +904,6 @@ void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)
 	INIT_LIST_HEAD(&mm->head_node.node_list);
 	mm->head_node.allocated = 0;
 	mm->head_node.hole_follows = 1;
-	mm->head_node.scanned_block = 0;
-	mm->head_node.scanned_prev_free = 0;
-	mm->head_node.scanned_next_free = 0;
 	mm->head_node.mm = mm;
 	mm->head_node.start = start + size;
 	mm->head_node.size = start - mm->head_node.start;
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 90d607e31301..4ff76d0ab849 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -73,11 +73,8 @@ struct drm_mm_node {
 	struct list_head hole_stack;
 	struct rb_node rb;
 	unsigned hole_follows : 1;
-	unsigned scanned_block : 1;
-	unsigned scanned_prev_free : 1;
-	unsigned scanned_next_free : 1;
-	unsigned scanned_preceeds_hole : 1;
 	unsigned allocated : 1;
+	bool scanned_block : 1;
 	unsigned long color;
 	u64 start;
 	u64 size;
@@ -117,8 +114,6 @@ struct drm_mm_scan {
 	u64 hit_start;
 	u64 hit_end;
 
-	struct drm_mm_node *prev_scanned_node;
-
 	unsigned long color;
 	unsigned int flags;
 };
-- 
2.11.0


* [PATCH 32/34] drm: Apply tight eviction scanning to color_adjust
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (30 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 31/34] drm: Simplify drm_mm scan-list manipulation Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-13 16:03   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 33/34] drm: Fix drm_mm search and insertion Chris Wilson
                   ` (2 subsequent siblings)
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Using mm->color_adjust makes the eviction scanner much trickier since we
don't know the actual neighbours of the target hole until after it is
created (i.e. after scanning is complete). To work out whether we need to
evict the neighbours because they impact upon the hole, we have to check
the resulting hole once the scan has finished - requiring an extra step
from the user of the eviction scanner when they apply color_adjust.
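
That extra step is the drm_mm_scan_color_evict() loop added to the
callers below; a condensed sketch of the pattern (evict_node and
evict_list are illustrative):

	struct drm_mm_node *node;
	struct evict_node *e;

	/* After unwinding the scan and evicting the selected nodes, ask
	 * whether a differently coloured neighbour of the new hole must
	 * also be evicted to satisfy color_adjust. */
	while ((node = drm_mm_scan_color_evict(&scan))) {
		e = container_of(node, struct evict_node, node);
		drm_mm_remove_node(&e->node);
		list_add(&e->link, &evict_list);
	}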

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_mm.c                | 46 +++++++++++++++++++++------------
 drivers/gpu/drm/i915/i915_gem_evict.c   |  6 +++++
 drivers/gpu/drm/selftests/test-drm_mm.c | 20 ++++++++++++++
 include/drm/drm_mm.h                    |  1 +
 4 files changed, 56 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index ba1a244fe6e5..b061066066d8 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -823,23 +823,8 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 		}
 	}
 
-	if (mm->color_adjust) {
-		/* If allocations need adjusting due to neighbouring colours,
-		 * we do not have enough information to decide if we need
-		 * to evict nodes on either side of [adj_start, adj_end].
-		 * What almost works is
-		 * hit_start = adj_start + (hole_start - col_start);
-		 * hit_end = adj_start + scan->size + (hole_end - col_end);
-		 * but because the decision is only made on the final hole,
-		 * we may underestimate the required adjustments for an
-		 * interior allocation.
-		 */
-		scan->hit_start = hole_start;
-		scan->hit_end = hole_end;
-	} else {
-		scan->hit_start = adj_start;
-		scan->hit_end = adj_start + scan->size;
-	}
+	scan->hit_start = adj_start;
+	scan->hit_end = adj_start + scan->size;
 
 	DRM_MM_BUG_ON(scan->hit_start >= scan->hit_end);
 	DRM_MM_BUG_ON(scan->hit_start < hole_start);
@@ -887,6 +872,33 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
 }
 EXPORT_SYMBOL(drm_mm_scan_remove_block);
 
+struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan)
+{
+	struct drm_mm *mm = scan->mm;
+	struct drm_mm_node *hole;
+	u64 hole_start, hole_end;
+
+	if (!mm->color_adjust)
+		return NULL;
+
+	hole = list_first_entry(&mm->hole_stack, typeof(*hole), hole_stack);
+
+	hole_start = __drm_mm_hole_node_start(hole);
+	hole_end = __drm_mm_hole_node_end(hole);
+
+	DRM_MM_BUG_ON(hole_start > scan->hit_start);
+	DRM_MM_BUG_ON(hole_end < scan->hit_end);
+
+	mm->color_adjust(hole, scan->color, &hole_start, &hole_end);
+	if (hole_start > scan->hit_start)
+		return hole;
+	if (hole_end < scan->hit_end)
+		return list_next_entry(hole, node_list);
+
+	return NULL;
+}
+EXPORT_SYMBOL(drm_mm_scan_color_evict);
+
 /**
  * drm_mm_init - initialize a drm-mm allocator
  * @mm: the drm_mm structure to initialize
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 89da9225043f..e4987e354311 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -108,6 +108,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 		NULL,
 	}, **phase;
 	struct i915_vma *vma, *next;
+	struct drm_mm_node *node;
 	int ret;
 
 	lockdep_assert_held(&vm->i915->drm.struct_mutex);
@@ -212,6 +213,11 @@ i915_gem_evict_something(struct i915_address_space *vm,
 			ret = i915_vma_unbind(vma);
 	}
 
+	while (ret == 0 && (node = drm_mm_scan_color_evict(&scan))) {
+		vma = container_of(node, struct i915_vma, node);
+		ret = i915_vma_unbind(vma);
+	}
+
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 09ead31a094d..73353f87f46a 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1662,6 +1662,11 @@ static int igt_color_evict(void *ignored)
 
 		list_for_each_entry(e, &evict_list, link)
 			drm_mm_remove_node(&e->node);
+		while ((node = drm_mm_scan_color_evict(&scan))) {
+			e = container_of(node, struct evict_node, node);
+			drm_mm_remove_node(&e->node);
+			list_add(&e->link, &evict_list);
+		}
 
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, c,
@@ -1735,6 +1740,11 @@ static int igt_color_evict(void *ignored)
 
 		list_for_each_entry(e, &evict_list, link)
 			drm_mm_remove_node(&e->node);
+		while ((node = drm_mm_scan_color_evict(&scan))) {
+			e = container_of(node, struct evict_node, node);
+			drm_mm_remove_node(&e->node);
+			list_add(&e->link, &evict_list);
+		}
 
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, c,
@@ -1867,6 +1877,11 @@ static int igt_color_evict_range(void *ignored)
 
 		list_for_each_entry(e, &evict_list, link)
 			drm_mm_remove_node(&e->node);
+		while ((node = drm_mm_scan_color_evict(&scan))) {
+			e = container_of(node, struct evict_node, node);
+			drm_mm_remove_node(&e->node);
+			list_add(&e->link, &evict_list);
+		}
 
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_in_range_generic(&mm, &tmp, nsize, n, c,
@@ -1943,6 +1958,11 @@ static int igt_color_evict_range(void *ignored)
 
 		list_for_each_entry(e, &evict_list, link)
 			drm_mm_remove_node(&e->node);
+		while ((node = drm_mm_scan_color_evict(&scan))) {
+			e = container_of(node, struct evict_node, node);
+			drm_mm_remove_node(&e->node);
+			list_add(&e->link, &evict_list);
+		}
 
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_in_range_generic(&mm, &tmp, nsize, n, c,
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 4ff76d0ab849..884166b91e90 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -403,6 +403,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 			   struct drm_mm_node *node);
 bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
 			      struct drm_mm_node *node);
+struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan);
 
 void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix);
 #ifdef CONFIG_DEBUG_FS
-- 
2.11.0


* [PATCH 33/34] drm: Fix drm_mm search and insertion
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (31 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 32/34] drm: Apply tight eviction scanning to color_adjust Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-15 12:28   ` Joonas Lahtinen
  2016-12-12 11:53 ` [PATCH 34/34] drm: kselftest for drm_mm and bottom-up allocation Chris Wilson
  2016-12-12 12:15 ` ✓ Fi.CI.BAT: success for series starting with [01/34] drm/i915: Use the MRU stack search after evicting Patchwork
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

The drm_mm range manager claimed to support top-down insertion, but it
was neither searching for the top-most hole that could fit the
allocation request nor fitting the request to the hole correctly.

In order to search the range efficiently, we create a secondary index for
the holes, keyed either on their size or on their address. This index
allows us to efficiently find the smallest hole that fits, or the hole at
the bottom or top of the range, whilst the hole stack is retained to
rapidly service evictions.
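
With the search and allocation flags collapsed into a single insertion
mode, a top-down allocation under the new interface looks roughly like
this (mm, node and the surrounding parameters are the caller's own;
error handling trimmed):

	err = drm_mm_insert_node_in_range_generic(mm, node, size, alignment,
						  color, start, end,
						  DRM_MM_INSERT_HIGH);
	if (err == -ENOSPC) {
		/* ... run the eviction scan to open up a hole ... */
		err = drm_mm_insert_node_in_range_generic(mm, node, size,
							  alignment, color,
							  start, end,
							  DRM_MM_INSERT_EVICT);
	}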

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c  |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c |  14 +-
 drivers/gpu/drm/armada/armada_gem.c          |   4 +-
 drivers/gpu/drm/drm_mm.c                     | 558 +++++++++++----------------
 drivers/gpu/drm/drm_vma_manager.c            |   3 +-
 drivers/gpu/drm/etnaviv/etnaviv_mmu.c        |   9 +-
 drivers/gpu/drm/i915/gvt/aperture_gm.c       |  11 +-
 drivers/gpu/drm/i915/i915_gem.c              |   3 +-
 drivers/gpu/drm/i915/i915_gem_evict.c        |   9 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c   |   3 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c          |   4 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c       |   3 +-
 drivers/gpu/drm/i915/i915_vma.c              |  25 +-
 drivers/gpu/drm/msm/msm_gem.c                |   3 +-
 drivers/gpu/drm/msm/msm_gem_vma.c            |   3 +-
 drivers/gpu/drm/selftests/test-drm_mm.c      | 137 +++----
 drivers/gpu/drm/sis/sis_mm.c                 |   6 +-
 drivers/gpu/drm/tegra/gem.c                  |   4 +-
 drivers/gpu/drm/ttm/ttm_bo_manager.c         |  15 +-
 drivers/gpu/drm/vc4/vc4_crtc.c               |   2 +-
 drivers/gpu/drm/vc4/vc4_hvs.c                |   3 +-
 drivers/gpu/drm/vc4/vc4_plane.c              |   6 +-
 drivers/gpu/drm/via/via_mm.c                 |   4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c       |  10 +-
 include/drm/drm_mm.h                         | 113 +++---
 25 files changed, 412 insertions(+), 552 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 00f46b0e076d..ce4f06ea0be2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -97,8 +97,7 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
 {
 	struct amdgpu_gtt_mgr *mgr = man->priv;
 	struct drm_mm_node *node = mem->mm_node;
-	enum drm_mm_search_flags sflags = DRM_MM_SEARCH_BEST;
-	enum drm_mm_allocator_flags aflags = DRM_MM_CREATE_DEFAULT;
+	unsigned int flags;
 	unsigned long fpfn, lpfn;
 	int r;
 
@@ -115,15 +114,14 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
 	else
 		lpfn = man->size;
 
-	if (place && place->flags & TTM_PL_FLAG_TOPDOWN) {
-		sflags = DRM_MM_SEARCH_BELOW;
-		aflags = DRM_MM_CREATE_TOP;
-	}
+	flags = 0;
+	if (place && place->flags & TTM_PL_FLAG_TOPDOWN)
+		flags = DRM_MM_INSERT_HIGH;
 
 	spin_lock(&mgr->lock);
 	r = drm_mm_insert_node_in_range_generic(&mgr->mm, node, mem->num_pages,
 						mem->page_alignment, 0,
-						fpfn, lpfn, sflags, aflags);
+						fpfn, lpfn, flags);
 	spin_unlock(&mgr->lock);
 
 	if (!r) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index d710226a0fff..3278d53c7473 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -97,8 +97,7 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
 	struct amdgpu_vram_mgr *mgr = man->priv;
 	struct drm_mm *mm = &mgr->mm;
 	struct drm_mm_node *nodes;
-	enum drm_mm_search_flags sflags = DRM_MM_SEARCH_DEFAULT;
-	enum drm_mm_allocator_flags aflags = DRM_MM_CREATE_DEFAULT;
+	unsigned int flags;
 	unsigned long lpfn, num_nodes, pages_per_node, pages_left;
 	unsigned i;
 	int r;
@@ -121,10 +120,9 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
 	if (!nodes)
 		return -ENOMEM;
 
-	if (place->flags & TTM_PL_FLAG_TOPDOWN) {
-		sflags = DRM_MM_SEARCH_BELOW;
-		aflags = DRM_MM_CREATE_TOP;
-	}
+	flags = 0;
+	if (place->flags & TTM_PL_FLAG_TOPDOWN)
+		flags = DRM_MM_INSERT_HIGH;
 
 	pages_left = mem->num_pages;
 
@@ -135,13 +133,11 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
 
 		if (pages == pages_per_node)
 			alignment = pages_per_node;
-		else
-			sflags |= DRM_MM_SEARCH_BEST;
 
 		r = drm_mm_insert_node_in_range_generic(mm, &nodes[i], pages,
 							alignment, 0,
 							place->fpfn, lpfn,
-							sflags, aflags);
+							flags);
 		if (unlikely(r))
 			goto error;
 
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
index 768087ddb046..ddf11ffba7a1 100644
--- a/drivers/gpu/drm/armada/armada_gem.c
+++ b/drivers/gpu/drm/armada/armada_gem.c
@@ -149,8 +149,8 @@ armada_gem_linear_back(struct drm_device *dev, struct armada_gem_object *obj)
 			return -ENOSPC;
 
 		mutex_lock(&priv->linear_lock);
-		ret = drm_mm_insert_node(&priv->linear, node, size, align,
-					 DRM_MM_SEARCH_DEFAULT);
+		ret = drm_mm_insert_node_generic(&priv->linear,
+						 node, size, align, 0, 0);
 		mutex_unlock(&priv->linear_lock);
 		if (ret) {
 			kfree(node);
diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index b061066066d8..3c3f09d88f87 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -91,19 +91,6 @@
  * some basic allocator dumpers for debugging.
  */
 
-static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
-						u64 size,
-						u64 alignment,
-						unsigned long color,
-						enum drm_mm_search_flags flags);
-static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
-						u64 size,
-						u64 alignment,
-						unsigned long color,
-						u64 start,
-						u64 end,
-						enum drm_mm_search_flags flags);
-
 #ifdef CONFIG_DRM_DEBUG_MM
 #include <linux/stackdepot.h>
 
@@ -225,65 +212,46 @@ static void drm_mm_interval_tree_add_node(struct drm_mm_node *hole_node,
 			    &drm_mm_interval_tree_augment);
 }
 
-static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
-				 struct drm_mm_node *node,
-				 u64 size, u64 alignment,
-				 unsigned long color,
-				 enum drm_mm_allocator_flags flags)
+#define RB_INSERT(root, member, expr) do { \
+	struct rb_node **link = &root.rb_node, *rb = NULL; \
+	u64 x = expr(node); \
+	while (*link) { \
+		rb = *link; \
+		if (x < expr(rb_entry(rb, struct drm_mm_node, member))) \
+			link = &rb->rb_left; \
+		else \
+			link = &rb->rb_right; \
+	} \
+	rb_link_node(&node->member, rb, link); \
+	rb_insert_color(&node->member, &root); \
+} while (0)
+
+#define HOLE_SIZE(NODE) ((NODE)->hole_size)
+#define HOLE_ADDR(NODE) (__drm_mm_hole_node_start(NODE))
+
+static void add_hole(struct drm_mm_node *node)
 {
-	struct drm_mm *mm = hole_node->mm;
-	u64 hole_start = drm_mm_hole_node_start(hole_node);
-	u64 hole_end = drm_mm_hole_node_end(hole_node);
-	u64 adj_start = hole_start;
-	u64 adj_end = hole_end;
-
-	DRM_MM_BUG_ON(node->allocated);
-
-	if (mm->color_adjust)
-		mm->color_adjust(hole_node, color, &adj_start, &adj_end);
-
-	if (flags & DRM_MM_CREATE_TOP)
-		adj_start = adj_end - size;
-
-	if (alignment) {
-		u64 rem;
-
-		div64_u64_rem(adj_start, alignment, &rem);
-		if (rem) {
-			if (flags & DRM_MM_CREATE_TOP)
-				adj_start -= rem;
-			else
-				adj_start += alignment - rem;
-		}
-	}
-
-	DRM_MM_BUG_ON(adj_start < hole_start);
-	DRM_MM_BUG_ON(adj_end > hole_end);
-
-	if (adj_start == hole_start) {
-		hole_node->hole_follows = 0;
-		list_del(&hole_node->hole_stack);
-	}
-
-	node->start = adj_start;
-	node->size = size;
-	node->mm = mm;
-	node->color = color;
-	node->allocated = 1;
+	struct drm_mm *mm = node->mm;
 
-	list_add(&node->node_list, &hole_node->node_list);
+	node->hole_size =
+		__drm_mm_hole_node_end(node) - __drm_mm_hole_node_start(node);
+	DRM_MM_BUG_ON(!node->hole_size);
 
-	drm_mm_interval_tree_add_node(hole_node, node);
+	RB_INSERT(mm->holes_size, rb_hole_size, HOLE_SIZE);
+	RB_INSERT(mm->holes_addr, rb_hole_addr, HOLE_ADDR);
 
-	DRM_MM_BUG_ON(node->start + node->size > adj_end);
+	list_add(&node->hole_stack, &mm->hole_stack);
+}
 
-	node->hole_follows = 0;
-	if (__drm_mm_hole_node_start(node) < hole_end) {
-		list_add(&node->hole_stack, &mm->hole_stack);
-		node->hole_follows = 1;
-	}
+static void rm_hole(struct drm_mm_node *node)
+{
+	if (!node->hole_size)
+		return;
 
-	save_stack(node);
+	list_del(&node->hole_stack);
+	rb_erase(&node->rb_hole_size, &node->mm->holes_size);
+	rb_erase(&node->rb_hole_addr, &node->mm->holes_addr);
+	node->hole_size = 0;
 }
 
 /**
@@ -322,8 +290,8 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
 		hole = list_entry(__drm_mm_nodes(mm), typeof(*hole), node_list);
 	}
 
-	hole = list_last_entry(&hole->node_list, typeof(*hole), node_list);
-	if (!hole->hole_follows)
+	hole = list_prev_entry(hole, node_list);
+	if (!hole->hole_size)
 		return -ENOSPC;
 
 	adj_start = hole_start = __drm_mm_hole_node_start(hole);
@@ -336,22 +304,17 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
 		return -ENOSPC;
 
 	node->mm = mm;
-	node->allocated = 1;
 
 	list_add(&node->node_list, &hole->node_list);
-
 	drm_mm_interval_tree_add_node(hole, node);
+	node->allocated = true;
+	node->hole_size = 0;
 
-	if (node->start == hole_start) {
-		hole->hole_follows = 0;
-		list_del(&hole->hole_stack);
-	}
-
-	node->hole_follows = 0;
-	if (end != hole_end) {
-		list_add(&node->hole_stack, &mm->hole_stack);
-		node->hole_follows = 1;
-	}
+	rm_hole(hole);
+	if (node->start > hole_start)
+		add_hole(hole);
+	if (end < hole_end)
+		add_hole(node);
 
 	save_stack(node);
 
@@ -359,104 +322,93 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
 }
 EXPORT_SYMBOL(drm_mm_reserve_node);
 
-/**
- * drm_mm_insert_node_generic - search for space and insert @node
- * @mm: drm_mm to allocate from
- * @node: preallocate node to insert
- * @size: size of the allocation
- * @alignment: alignment of the allocation
- * @color: opaque tag value to use for this node
- * @sflags: flags to fine-tune the allocation search
- * @aflags: flags to fine-tune the allocation behavior
- *
- * The preallocated node must be cleared to 0.
- *
- * Returns:
- * 0 on success, -ENOSPC if there's no suitable hole.
- */
-int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
-			       u64 size, u64 alignment,
-			       unsigned long color,
-			       enum drm_mm_search_flags sflags,
-			       enum drm_mm_allocator_flags aflags)
+static inline struct drm_mm_node *rb_hole_size_to_node(struct rb_node *rb)
 {
-	struct drm_mm_node *hole_node;
-
-	if (WARN_ON(size == 0))
-		return -EINVAL;
+	return rb ? rb_entry(rb, struct drm_mm_node, rb_hole_size) : NULL;
+}
 
-	hole_node = drm_mm_search_free_generic(mm, size, alignment,
-					       color, sflags);
-	if (!hole_node)
-		return -ENOSPC;
+static inline struct drm_mm_node *rb_hole_addr_to_node(struct rb_node *rb)
+{
+	return rb ? rb_entry(rb, struct drm_mm_node, rb_hole_addr) : NULL;
+}
 
-	drm_mm_insert_helper(hole_node, node, size, alignment, color, aflags);
-	return 0;
+static inline u64 rb_hole_size(struct rb_node *rb)
+{
+	return rb_entry(rb, struct drm_mm_node, rb_hole_size)->hole_size;
 }
-EXPORT_SYMBOL(drm_mm_insert_node_generic);
-
-static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
-				       struct drm_mm_node *node,
-				       u64 size, u64 alignment,
-				       unsigned long color,
-				       u64 start, u64 end,
-				       enum drm_mm_allocator_flags flags)
+
+static struct drm_mm_node *best_hole(struct drm_mm *mm, u64 size)
 {
-	struct drm_mm *mm = hole_node->mm;
-	u64 hole_start = drm_mm_hole_node_start(hole_node);
-	u64 hole_end = drm_mm_hole_node_end(hole_node);
-	u64 adj_start = hole_start;
-	u64 adj_end = hole_end;
+	struct rb_node *best = NULL;
+	struct rb_node **link = &mm->holes_size.rb_node;
+	while (*link) {
+		struct rb_node *rb = *link;
+		if (size <= rb_hole_size(rb))
+			link = &rb->rb_left, best = rb;
+		else
+			link = &rb->rb_right;
+	}
+	return rb_hole_size_to_node(best);
+}
 
-	DRM_MM_BUG_ON(!hole_node->hole_follows || node->allocated);
+static struct drm_mm_node *low_hole(struct drm_mm *mm, u64 addr)
+{
+	struct drm_mm_node *node = NULL;
+	struct rb_node **link = &mm->holes_addr.rb_node;
+	while (*link) {
+		node = rb_hole_addr_to_node(*link);
+		if (addr == __drm_mm_hole_node_start(node))
+			return node;
 
-	if (adj_start < start)
-		adj_start = start;
-	if (adj_end > end)
-		adj_end = end;
+		if (addr < __drm_mm_hole_node_start(node))
+			link = &node->rb_hole_addr.rb_left;
+		else
+			link = &node->rb_hole_addr.rb_right;
+	}
+	return node;
+}
 
-	if (mm->color_adjust)
-		mm->color_adjust(hole_node, color, &adj_start, &adj_end);
+static struct drm_mm_node *
+first_hole(struct drm_mm *mm, u64 start, u64 end, u64 size, unsigned int flags)
+{
+	if (RB_EMPTY_ROOT(&mm->holes_size))
+		return NULL;
 
-	if (flags & DRM_MM_CREATE_TOP)
-		adj_start = adj_end - size;
+	switch (flags) {
+	default:
+	case DRM_MM_INSERT_BEST:
+		return best_hole(mm, size);
 
-	if (alignment) {
-		u64 rem;
+	case DRM_MM_INSERT_LOW:
+		return low_hole(mm, start);
 
-		div64_u64_rem(adj_start, alignment, &rem);
-		if (rem) {
-			if (flags & DRM_MM_CREATE_TOP)
-				adj_start -= rem;
-			else
-				adj_start += alignment - rem;
-		}
-	}
+	case DRM_MM_INSERT_HIGH:
+		return rb_hole_addr_to_node(rb_last(&mm->holes_addr));
 
-	if (adj_start == hole_start) {
-		hole_node->hole_follows = 0;
-		list_del(&hole_node->hole_stack);
+	case DRM_MM_INSERT_EVICT:
+		return list_first_entry_or_null(&mm->hole_stack,
+						struct drm_mm_node,
+						hole_stack);
 	}
+}
 
-	node->start = adj_start;
-	node->size = size;
-	node->mm = mm;
-	node->color = color;
-	node->allocated = 1;
-
-	list_add(&node->node_list, &hole_node->node_list);
+static struct drm_mm_node *
+next_hole(struct drm_mm *mm, struct drm_mm_node *node, unsigned int flags)
+{
+	switch (flags) {
+	default:
+	case DRM_MM_INSERT_BEST:
+		return rb_hole_size_to_node(rb_next(&node->rb_hole_size));
 
-	drm_mm_interval_tree_add_node(hole_node, node);
+	case DRM_MM_INSERT_LOW:
+		return rb_hole_addr_to_node(rb_next(&node->rb_hole_addr));
 
-	DRM_MM_BUG_ON(node->start < start);
-	DRM_MM_BUG_ON(node->start < adj_start);
-	DRM_MM_BUG_ON(node->start + node->size > adj_end);
-	DRM_MM_BUG_ON(node->start + node->size > end);
+	case DRM_MM_INSERT_HIGH:
+		return rb_hole_addr_to_node(rb_prev(&node->rb_hole_addr));
 
-	node->hole_follows = 0;
-	if (__drm_mm_hole_node_start(node) < hole_end) {
-		list_add(&node->hole_stack, &mm->hole_stack);
-		node->hole_follows = 1;
+	case DRM_MM_INSERT_EVICT:
+		node = list_next_entry(node, hole_stack);
+		return &node->hole_stack == &mm->hole_stack ? NULL : node;
 	}
 
 	save_stack(node);
@@ -479,177 +431,127 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
  * Returns:
  * 0 on success, -ENOSPC if there's no suitable hole.
  */
-int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
+int drm_mm_insert_node_in_range_generic(struct drm_mm * const mm,
+					struct drm_mm_node * const node,
 					u64 size, u64 alignment,
 					unsigned long color,
 					u64 start, u64 end,
-					enum drm_mm_search_flags sflags,
-					enum drm_mm_allocator_flags aflags)
+					unsigned int flags)
 {
-	struct drm_mm_node *hole_node;
+	struct drm_mm_node *hole;
+	u64 alignment_mask;
 
 	if (WARN_ON(size == 0))
 		return -EINVAL;
 
-	hole_node = drm_mm_search_free_in_range_generic(mm,
-							size, alignment, color,
-							start, end, sflags);
-	if (!hole_node)
+	if (end - start < size)
 		return -ENOSPC;
 
-	drm_mm_insert_helper_range(hole_node, node,
-				   size, alignment, color,
-				   start, end, aflags);
-	return 0;
-}
-EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
-
-/**
- * drm_mm_remove_node - Remove a memory node from the allocator.
- * @node: drm_mm_node to remove
- *
- * This just removes a node from its drm_mm allocator. The node does not need to
- * be cleared again before it can be re-inserted into this or any other drm_mm
- * allocator. It is a bug to call this function on a un-allocated node.
- */
-void drm_mm_remove_node(struct drm_mm_node *node)
-{
-	struct drm_mm *mm = node->mm;
-	struct drm_mm_node *prev_node;
-
-	DRM_MM_BUG_ON(!node->allocated);
-	DRM_MM_BUG_ON(node->scanned_block);
-
-	prev_node =
-	    list_entry(node->node_list.prev, struct drm_mm_node, node_list);
-
-	if (node->hole_follows) {
-		DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) ==
-			      __drm_mm_hole_node_end(node));
-		list_del(&node->hole_stack);
-	} else
-		DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) !=
-			      __drm_mm_hole_node_end(node));
-
+	if (alignment <= 1)
+		alignment = 0;
 
-	if (!prev_node->hole_follows) {
-		prev_node->hole_follows = 1;
-		list_add(&prev_node->hole_stack, &mm->hole_stack);
-	} else
-		list_move(&prev_node->hole_stack, &mm->hole_stack);
+	alignment_mask = is_power_of_2(alignment) ? alignment - 1 : 0;
+	for (hole = first_hole(mm, start, end, size, flags); hole;
+	     hole = next_hole(mm, hole, flags)) {
+		u64 hole_start = __drm_mm_hole_node_start(hole);
+		u64 hole_end = hole_start + hole->hole_size;
+		u64 adj_start, adj_end;
+		u64 col_start, col_end;
 
-	drm_mm_interval_tree_remove(node, &mm->interval_tree);
-	list_del(&node->node_list);
-	node->allocated = 0;
-}
-EXPORT_SYMBOL(drm_mm_remove_node);
+		if (flags == DRM_MM_INSERT_LOW && hole_start >= end)
+			break;
 
-static int check_free_hole(u64 start, u64 end, u64 size, u64 alignment)
-{
-	if (end - start < size)
-		return 0;
+		if (flags == DRM_MM_INSERT_HIGH && hole_end <= start)
+			break;
 
-	if (alignment) {
-		u64 rem;
+		col_start = hole_start;
+		col_end = hole_end;
+		if (mm->color_adjust)
+			mm->color_adjust(hole, color, &col_start, &col_end);
 
-		div64_u64_rem(start, alignment, &rem);
-		if (rem)
-			start += alignment - rem;
-	}
+		adj_start = max(col_start, start);
+		adj_end = min(col_end, end);
 
-	return end >= start + size;
-}
+		if (adj_end <= adj_start || adj_end - adj_start < size)
+			continue;
 
-static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
-						      u64 size,
-						      u64 alignment,
-						      unsigned long color,
-						      enum drm_mm_search_flags flags)
-{
-	struct drm_mm_node *entry;
-	struct drm_mm_node *best;
-	u64 adj_start;
-	u64 adj_end;
-	u64 best_size;
+		if (flags == DRM_MM_INSERT_HIGH)
+			adj_start = adj_end - size;
 
-	DRM_MM_BUG_ON(mm->scan_active);
+		if (alignment) {
+			u64 rem;
 
-	best = NULL;
-	best_size = ~0UL;
+			if (alignment_mask)
+				rem = adj_start & alignment_mask;
+			else
+				div64_u64_rem(adj_start, alignment, &rem);
+			if (rem) {
+				adj_start -= rem;
+				if (flags != DRM_MM_INSERT_HIGH)
+					adj_start += alignment;
 
-	__drm_mm_for_each_hole(entry, mm, adj_start, adj_end,
-			       flags & DRM_MM_SEARCH_BELOW) {
-		u64 hole_size = adj_end - adj_start;
+				if (adj_start < col_start ||
+				    col_end - adj_start < size)
+					continue;
 
-		if (mm->color_adjust) {
-			mm->color_adjust(entry, color, &adj_start, &adj_end);
-			if (adj_end <= adj_start)
-				continue;
+				if (adj_end <= adj_start ||
+				    adj_end - adj_start < size)
+					continue;
+			}
 		}
 
-		if (!check_free_hole(adj_start, adj_end, size, alignment))
-			continue;
+		node->mm = mm;
+		node->size = size;
+		node->start = adj_start;
+		node->color = color;
+		node->hole_size = 0;
 
-		if (!(flags & DRM_MM_SEARCH_BEST))
-			return entry;
+		list_add(&node->node_list, &hole->node_list);
+		drm_mm_interval_tree_add_node(hole, node);
+		node->allocated = true;
 
-		if (hole_size < best_size) {
-			best = entry;
-			best_size = hole_size;
-		}
+		rm_hole(hole);
+		if (adj_start > hole_start)
+			add_hole(hole);
+		if (adj_start + size < hole_end)
+			add_hole(node);
+
+		save_stack(node);
+		return 0;
 	}
 
-	return best;
+	return -ENOSPC;
 }
+EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
 
-static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
-							u64 size,
-							u64 alignment,
-							unsigned long color,
-							u64 start,
-							u64 end,
-							enum drm_mm_search_flags flags)
+/**
+ * drm_mm_remove_node - Remove a memory node from the allocator.
+ * @node: drm_mm_node to remove
+ *
+ * This just removes a node from its drm_mm allocator. The node does not need to
+ * be cleared again before it can be re-inserted into this or any other drm_mm
+ * allocator. It is a bug to call this function on a un-allocated node.
+ */
+void drm_mm_remove_node(struct drm_mm_node *node)
 {
-	struct drm_mm_node *entry;
-	struct drm_mm_node *best;
-	u64 adj_start;
-	u64 adj_end;
-	u64 best_size;
-
-	DRM_MM_BUG_ON(mm->scan_active);
-
-	best = NULL;
-	best_size = ~0UL;
-
-	__drm_mm_for_each_hole(entry, mm, adj_start, adj_end,
-			       flags & DRM_MM_SEARCH_BELOW) {
-		u64 hole_size = adj_end - adj_start;
-
-		if (adj_start < start)
-			adj_start = start;
-		if (adj_end > end)
-			adj_end = end;
+	struct drm_mm *mm = node->mm;
+	struct drm_mm_node *prev_node;
 
-		if (mm->color_adjust) {
-			mm->color_adjust(entry, color, &adj_start, &adj_end);
-			if (adj_end <= adj_start)
-				continue;
-		}
+	DRM_MM_BUG_ON(!node->allocated);
+	DRM_MM_BUG_ON(node->scanned_block);
 
-		if (!check_free_hole(adj_start, adj_end, size, alignment))
-			continue;
+	prev_node = list_prev_entry(node, node_list);
 
-		if (!(flags & DRM_MM_SEARCH_BEST))
-			return entry;
+	rm_hole(node);
 
-		if (hole_size < best_size) {
-			best = entry;
-			best_size = hole_size;
-		}
-	}
+	drm_mm_interval_tree_remove(node, &mm->interval_tree);
+	list_del(&node->node_list);
+	node->allocated = false;
 
-	return best;
+	rm_hole(prev_node);
+	add_hole(prev_node);
 }
+EXPORT_SYMBOL(drm_mm_remove_node);
 
 /**
  * drm_mm_replace_node - move an allocation from @old to @new
@@ -664,18 +566,23 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
 {
 	DRM_MM_BUG_ON(!old->allocated);
 
+	*new = *old;
+
 	list_replace(&old->node_list, &new->node_list);
-	list_replace(&old->hole_stack, &new->hole_stack);
 	rb_replace_node(&old->rb, &new->rb, &old->mm->interval_tree);
-	new->hole_follows = old->hole_follows;
-	new->mm = old->mm;
-	new->start = old->start;
-	new->size = old->size;
-	new->color = old->color;
-	new->__subtree_last = old->__subtree_last;
-
-	old->allocated = 0;
-	new->allocated = 1;
+
+	if (old->hole_size) {
+		list_replace(&old->hole_stack, &new->hole_stack);
+		rb_replace_node(&old->rb_hole_size,
+				&new->rb_hole_size,
+				&old->mm->holes_size);
+		rb_replace_node(&old->rb_hole_addr,
+				&new->rb_hole_addr,
+				&old->mm->holes_addr);
+	}
+
+	old->allocated = false;
+	new->allocated = true;
 }
 EXPORT_SYMBOL(drm_mm_replace_node);
 
@@ -799,7 +706,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 	if (adj_end <= adj_start || adj_end - adj_start < scan->size)
 		return false;
 
-	if (scan->flags == DRM_MM_CREATE_TOP)
+	if (scan->flags == DRM_MM_INSERT_HIGH)
 		adj_start = adj_end - scan->size;
 
 	if (scan->alignment) {
@@ -811,7 +718,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
 			div64_u64_rem(adj_start, scan->alignment, &rem);
 		if (rem) {
 			adj_start -= rem;
-			if (scan->flags != DRM_MM_CREATE_TOP)
+			if (scan->flags != DRM_MM_INSERT_HIGH)
 				adj_start += scan->alignment;
 			if (adj_start < max(col_start, scan->range_start) ||
 			    max(col_end, scan->range_end) - adj_start < scan->size)
@@ -909,21 +816,22 @@ EXPORT_SYMBOL(drm_mm_scan_color_evict);
  */
 void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)
 {
+	mm->color_adjust = NULL;
+
 	INIT_LIST_HEAD(&mm->hole_stack);
-	mm->scan_active = 0;
+	mm->interval_tree = RB_ROOT;
+	mm->holes_size = RB_ROOT;
+	mm->holes_addr = RB_ROOT;
 
 	/* Clever trick to avoid a special case in the free hole tracking. */
 	INIT_LIST_HEAD(&mm->head_node.node_list);
-	mm->head_node.allocated = 0;
-	mm->head_node.hole_follows = 1;
+	mm->head_node.allocated = false;
 	mm->head_node.mm = mm;
 	mm->head_node.start = start + size;
-	mm->head_node.size = start - mm->head_node.start;
-	list_add_tail(&mm->head_node.hole_stack, &mm->hole_stack);
+	mm->head_node.size = -size;
+	add_hole(&mm->head_node);
 
-	mm->interval_tree = RB_ROOT;
-
-	mm->color_adjust = NULL;
+	mm->scan_active = 0;
 }
 EXPORT_SYMBOL(drm_mm_init);
 
@@ -945,18 +853,16 @@ EXPORT_SYMBOL(drm_mm_takedown);
 static u64 drm_mm_debug_hole(const struct drm_mm_node *entry,
 			     const char *prefix)
 {
-	u64 hole_start, hole_end, hole_size;
+	u64 start, size;
 
-	if (entry->hole_follows) {
-		hole_start = drm_mm_hole_node_start(entry);
-		hole_end = drm_mm_hole_node_end(entry);
-		hole_size = hole_end - hole_start;
-		pr_debug("%s %#llx-%#llx: %llu: free\n", prefix, hole_start,
-			 hole_end, hole_size);
-		return hole_size;
+	size = entry->hole_size;
+	if (size) {
+		start = drm_mm_hole_node_start(entry);
+		pr_debug("%s %#llx-%#llx: %llu: free\n",
+			 prefix, start, start + size, size);
 	}
 
-	return 0;
+	return size;
 }
 
 /**
@@ -989,7 +895,7 @@ static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry)
 {
 	u64 hole_start, hole_end, hole_size;
 
-	if (entry->hole_follows) {
+	if (entry->hole_size) {
 		hole_start = drm_mm_hole_node_start(entry);
 		hole_end = drm_mm_hole_node_end(entry);
 		hole_size = hole_end - hole_start;
diff --git a/drivers/gpu/drm/drm_vma_manager.c b/drivers/gpu/drm/drm_vma_manager.c
index 20cc33d1bfc1..d9100b565198 100644
--- a/drivers/gpu/drm/drm_vma_manager.c
+++ b/drivers/gpu/drm/drm_vma_manager.c
@@ -212,8 +212,7 @@ int drm_vma_offset_add(struct drm_vma_offset_manager *mgr,
 		goto out_unlock;
 	}
 
-	ret = drm_mm_insert_node(&mgr->vm_addr_space_mm, &node->vm_node,
-				 pages, 0, DRM_MM_SEARCH_DEFAULT);
+	ret = drm_mm_insert_node(&mgr->vm_addr_space_mm, &node->vm_node, pages);
 	if (ret)
 		goto out_unlock;
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index 2dae3169ce48..2052fe990e26 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -107,6 +107,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 				   struct drm_mm_node *node, size_t size)
 {
 	struct etnaviv_vram_mapping *free = NULL;
+	unsigned int flags = DRM_MM_INSERT_LOW;
 	int ret;
 
 	lockdep_assert_held(&mmu->lock);
@@ -117,9 +118,9 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 		struct list_head list;
 		bool found;
 
-		ret = drm_mm_insert_node_in_range(&mmu->mm, node,
-			size, 0, mmu->last_iova, ~0UL,
-			DRM_MM_SEARCH_DEFAULT);
+		ret = drm_mm_insert_node_in_range(&mmu->mm, node, size, 0,
+						  mmu->last_iova, U64_MAX,
+						  flags);
 
 		if (ret != -ENOSPC)
 			break;
@@ -187,6 +188,8 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
 			list_del_init(&m->scan_node);
 		}
 
+		flags = DRM_MM_INSERT_EVICT;
+
 		/*
 		 * We removed enough mappings so that the new allocation will
 		 * succeed.  Ensure that the MMU will be flushed before the
diff --git a/drivers/gpu/drm/i915/gvt/aperture_gm.c b/drivers/gpu/drm/i915/gvt/aperture_gm.c
index 7d33b607bc89..beccc4396a8a 100644
--- a/drivers/gpu/drm/i915/gvt/aperture_gm.c
+++ b/drivers/gpu/drm/i915/gvt/aperture_gm.c
@@ -48,22 +48,20 @@ static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
 {
 	struct intel_gvt *gvt = vgpu->gvt;
 	struct drm_i915_private *dev_priv = gvt->dev_priv;
-	u32 alloc_flag, search_flag;
 	u64 start, end, size;
+	unsigned int flags;
 	struct drm_mm_node *node;
 	int retried = 0;
 	int ret;
 
 	if (high_gm) {
-		search_flag = DRM_MM_SEARCH_BELOW;
-		alloc_flag = DRM_MM_CREATE_TOP;
+		flags = DRM_MM_INSERT_HIGH;
 		node = &vgpu->gm.high_gm_node;
 		size = vgpu_hidden_sz(vgpu);
 		start = gvt_hidden_gmadr_base(gvt);
 		end = gvt_hidden_gmadr_end(gvt);
 	} else {
-		search_flag = DRM_MM_SEARCH_DEFAULT;
-		alloc_flag = DRM_MM_CREATE_DEFAULT;
+		flags = DRM_MM_INSERT_LOW;
 		node = &vgpu->gm.low_gm_node;
 		size = vgpu_aperture_sz(vgpu);
 		start = gvt_aperture_gmadr_base(gvt);
@@ -75,8 +73,7 @@ static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
 	ret = drm_mm_insert_node_in_range_generic(&dev_priv->ggtt.base.mm,
 						  node, size, 4096,
 						  I915_COLOR_UNEVICTABLE,
-						  start, end, search_flag,
-						  alloc_flag);
+						  start, end, flags);
 	if (ret) {
 		ret = i915_gem_evict_something(&dev_priv->ggtt.base,
 					       size, 4096,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 36183945e61a..ab92cab44bb0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -73,8 +73,7 @@ insert_mappable_node(struct i915_ggtt *ggtt,
 						   size, 0,
 						   I915_COLOR_UNEVICTABLE,
 						   0, ggtt->mappable_end,
-						   DRM_MM_SEARCH_DEFAULT,
-						   DRM_MM_CREATE_DEFAULT);
+						   DRM_MM_INSERT_LOW);
 }
 
 static void
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index e4987e354311..49a2c492e003 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -109,6 +109,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	}, **phase;
 	struct i915_vma *vma, *next;
 	struct drm_mm_node *node;
+	unsigned int mmflags;
 	int ret;
 
 	lockdep_assert_held(&vm->i915->drm.struct_mutex);
@@ -127,10 +128,14 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 * On each list, the oldest objects lie at the HEAD with the freshest
 	 * object on the TAIL.
 	 */
+	mmflags = 0;
+	if (flags & PIN_HIGH)
+		mmflags = DRM_MM_INSERT_HIGH;
+	if (flags & PIN_MAPPABLE)
+		mmflags = DRM_MM_INSERT_LOW;
 	drm_mm_scan_init_with_range(&scan, &vm->mm,
 				    min_size, alignment, cache_level,
-				    start, end,
-				    flags & PIN_HIGH ? DRM_MM_CREATE_TOP : 0);
+				    start, end, mmflags);
 
 	if (flags & PIN_NONBLOCK)
 		phases[1] = NULL;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index d665a33229bd..530f7b49ebc3 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -440,8 +440,7 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
 				(&ggtt->base.mm, &cache->node,
 				 4096, 0, I915_COLOR_UNEVICTABLE,
 				 0, ggtt->mappable_end,
-				 DRM_MM_SEARCH_DEFAULT,
-				 DRM_MM_CREATE_DEFAULT);
+				 DRM_MM_INSERT_LOW);
 			if (ret) /* no inactive aperture space, use cpu reloc */
 				return NULL;
 		} else {
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 2083c899ab78..0005a5e7fcd4 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2074,7 +2074,7 @@ static int gen6_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt)
 						  GEN6_PD_SIZE, GEN6_PD_ALIGN,
 						  I915_COLOR_UNEVICTABLE,
 						  0, ggtt->base.total,
-						  DRM_MM_TOPDOWN);
+						  DRM_MM_INSERT_HIGH);
 	if (ret == -ENOSPC && !retried) {
 		ret = i915_gem_evict_something(&ggtt->base,
 					       GEN6_PD_SIZE, GEN6_PD_ALIGN,
@@ -2755,7 +2755,7 @@ int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
 						  4096, 0,
 						  I915_COLOR_UNEVICTABLE,
 						  0, ggtt->mappable_end,
-						  0, 0);
+						  0);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index efc0e748ef89..37eb514ecec8 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -63,8 +63,7 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv,
 
 	mutex_lock(&dev_priv->mm.stolen_lock);
 	ret = drm_mm_insert_node_in_range(&dev_priv->mm.stolen, node, size,
-					  alignment, start, end,
-					  DRM_MM_SEARCH_DEFAULT);
+					  alignment, start, end, 0);
 	mutex_unlock(&dev_priv->mm.stolen_lock);
 
 	return ret;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index ca62a3371d94..97e8f2a8ab03 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -322,11 +322,11 @@ bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long cache_level)
 	GEM_BUG_ON(list_empty(&node->node_list));
 
 	other = list_prev_entry(node, node_list);
-	if (color_differs(other, cache_level) && !other->hole_follows)
+	if (color_differs(other, cache_level) && !other->hole_size)
 		return false;
 
 	other = list_next_entry(node, node_list);
-	if (color_differs(other, cache_level) && !node->hole_follows)
+	if (color_differs(other, cache_level) && !node->hole_size)
 		return false;
 
 	return true;
@@ -410,15 +410,13 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 				goto err_unpin;
 		}
 	} else {
-		u32 search_flag, alloc_flag;
-
-		if (flags & PIN_HIGH) {
-			search_flag = DRM_MM_SEARCH_BELOW;
-			alloc_flag = DRM_MM_CREATE_TOP;
-		} else {
-			search_flag = DRM_MM_SEARCH_DEFAULT;
-			alloc_flag = DRM_MM_CREATE_DEFAULT;
-		}
+		unsigned int mmflags;
+
+		mmflags = 0;
+		if (flags & PIN_HIGH)
+			mmflags = DRM_MM_INSERT_HIGH;
+		if (flags & PIN_MAPPABLE)
+			mmflags = DRM_MM_INSERT_LOW;
 
 		/* We only allocate in PAGE_SIZE/GTT_PAGE_SIZE (4096) chunks,
 		 * so we know that we always have a minimum alignment of 4096.
@@ -435,15 +433,14 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 							  size, alignment,
 							  obj->cache_level,
 							  start, end,
-							  search_flag,
-							  alloc_flag);
+							  mmflags);
 		if (ret) {
 			ret = i915_gem_evict_something(vma->vm, size, alignment,
 						       obj->cache_level,
 						       start, end,
 						       flags);
 			if (ret == 0) {
-				search_flag = DRM_MM_SEARCH_DEFAULT;
+				mmflags = DRM_MM_INSERT_EVICT;
 				goto search_free;
 			}
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index cd06cfd94687..412669062cb7 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -54,8 +54,7 @@ static struct page **get_pages_vram(struct drm_gem_object *obj,
 	if (!p)
 		return ERR_PTR(-ENOMEM);
 
-	ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node,
-			npages, 0, DRM_MM_SEARCH_DEFAULT);
+	ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages);
 	if (ret) {
 		drm_free_large(p);
 		return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index a311d26ccb21..b654eca7636a 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -45,8 +45,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
 	if (WARN_ON(drm_mm_node_allocated(&vma->node)))
 		return 0;
 
-	ret = drm_mm_insert_node(&aspace->mm, &vma->node, npages,
-			0, DRM_MM_SEARCH_DEFAULT);
+	ret = drm_mm_insert_node(&aspace->mm, &vma->node, npages);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 73353f87f46a..f2f802c60e4c 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -268,8 +268,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 		int err;
 
 		node = memset(replace ? &tmp : &nodes[n], 0, sizeof(*node));
-		err = drm_mm_insert_node(&mm, node, size, 0,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node(&mm, node, size);
 		if (err) {
 			pr_err("insert failed, step %d, start %llu\n",
 			       n, nodes[n].start);
@@ -303,8 +302,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 		struct drm_mm_node tmp;
 
 		memset(&tmp, 0, sizeof(tmp));
-		if (!drm_mm_insert_node(&mm, &tmp, size, 0,
-					DRM_MM_SEARCH_DEFAULT)) {
+		if (!drm_mm_insert_node(&mm, &tmp, size)) {
 			drm_mm_remove_node(&tmp);
 			pr_err("impossible insert succeeded, step %d, start %llu\n",
 			       n, tmp.start);
@@ -326,7 +324,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 			goto out;
 		}
 
-		if (node->hole_follows) {
+		if (node->hole_size) {
 			pr_err("node %d is followed by a hole!\n", n);
 			goto out;
 		}
@@ -349,8 +347,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 		int err;
 
 		drm_mm_remove_node(&nodes[n]);
-		err = drm_mm_insert_node(&mm, &nodes[n], size, 0,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node(&mm, &nodes[n], size);
 		if (err) {
 			pr_err("reinsert failed, step %d\n", n);
 			ret = err;
@@ -377,8 +374,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 			int err;
 
 			node = &nodes[order[(o + m) % count]];
-			err = drm_mm_insert_node(&mm, node, size, 0,
-						 DRM_MM_SEARCH_DEFAULT);
+			err = drm_mm_insert_node(&mm, node, size);
 			if (err) {
 				pr_err("insert failed, step %d, start %llu\n",
 				       n, node->start);
@@ -393,8 +389,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 			struct drm_mm_node tmp;
 
 			memset(&tmp, 0, sizeof(tmp));
-			if (!drm_mm_insert_node(&mm, &tmp, size, 0,
-						DRM_MM_SEARCH_DEFAULT)) {
+			if (!drm_mm_insert_node(&mm, &tmp, size)) {
 				drm_mm_remove_node(&tmp);
 				pr_err("impossible insert succeeded, start %llu\n",
 				       tmp.start);
@@ -416,7 +411,7 @@ static int __igt_insert(int count, u64 size, bool replace)
 				goto out;
 			}
 
-			if (node->hole_follows) {
+			if (node->hole_size) {
 				pr_err("node %d is followed by a hole!\n", m);
 				goto out;
 			}
@@ -505,8 +500,7 @@ static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
 	}
 
 	for (n = 0; n < count; n++) {
-		err = drm_mm_insert_node(&mm, &nodes[n], size, 0,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node(&mm, &nodes[n], size);
 		if (err) {
 			pr_err("insert failed, step %d, start %llu\n",
 			       n, nodes[n].start);
@@ -523,7 +517,7 @@ static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
 		if (!drm_mm_insert_node_in_range(&mm, &tmp,
 						 size, 0,
 						 start, end,
-						 DRM_MM_SEARCH_DEFAULT)) {
+						 0)) {
 			drm_mm_remove_node(&tmp);
 			pr_err("impossible insert succeeded, step %d, start %llu\n",
 			       n, tmp.start);
@@ -553,7 +547,7 @@ static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
 			goto out;
 		}
 
-		if (node->hole_follows) {
+		if (node->hole_size) {
 			pr_err("node %d is followed by a hole!\n", n);
 			goto out;
 		}
@@ -568,7 +562,7 @@ static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
 		drm_mm_remove_node(&nodes[n]);
 		err = drm_mm_insert_node_in_range(&mm, &nodes[n], size, 0,
 						  start, end,
-						  DRM_MM_SEARCH_DEFAULT);
+						  0);
 		if (err) {
 			pr_err("reinsert failed, step %d\n", n);
 			ret = err;
@@ -589,7 +583,7 @@ static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
 	for (n = start_n; n <= end_n; n++) {
 		err = drm_mm_insert_node_in_range(&mm, &nodes[n], size, 0,
 						  start, end,
-						  DRM_MM_SEARCH_DEFAULT);
+						  0);
 		if (err) {
 			pr_err("reinsert failed, step %d\n", n);
 			ret = err;
@@ -619,7 +613,7 @@ static int __igt_insert_range(int count, u64 size, u64 start, u64 end)
 			goto out;
 		}
 
-		if (node->hole_follows) {
+		if (node->hole_size) {
 			pr_err("node %d is followed by a hole!\n", n);
 			goto out;
 		}
@@ -689,9 +683,7 @@ static int igt_align(void *ignored)
 		}
 
 		size = drm_next_prime_number(prime);
-		err = drm_mm_insert_node_generic(&mm, node, size, prime, 0,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+		err = drm_mm_insert_node_generic(&mm, node, size, prime, 0, 0);
 		if (err) {
 			pr_err("insert failed with alignment=%d", prime);
 			ret = err;
@@ -736,9 +728,7 @@ static int igt_align_pot(int max)
 
 		align = BIT_ULL(bit);
 		size = BIT_ULL(bit-1) + 1;
-		err = drm_mm_insert_node_generic(&mm, node, size, align, 0,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+		err = drm_mm_insert_node_generic(&mm, node, size, align, 0, 0);
 		if (err) {
 			pr_err("insert failed with alignment=%llx [%d]",
 			       align, bit);
@@ -839,8 +829,7 @@ static int igt_evict(void *ignored)
 	for (n = 0; n < size; n++) {
 		int err;
 
-		err = drm_mm_insert_node(&mm, &nodes[n].node, 1, 0,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node(&mm, &nodes[n].node, 1);
 		if (err) {
 			pr_err("insert failed, step %d\n", n);
 			ret = err;
@@ -924,8 +913,8 @@ static int igt_evict(void *ignored)
 			drm_mm_remove_node(&e->node);
 
 		memset(&tmp, 0, sizeof(tmp));
-		err = drm_mm_insert_node(&mm, &tmp, nsize, n,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, 0,
+						 DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
 			       nsize, n);
@@ -935,9 +924,9 @@ static int igt_evict(void *ignored)
 			goto out;
 		}
 
-		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_size) {
 			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d], start=%llx, hole-follows?=%d\n",
-			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, !!tmp.hole_size);
 
 			drm_mm_remove_node(&tmp);
 			goto out;
@@ -985,8 +974,8 @@ static int igt_evict(void *ignored)
 			drm_mm_remove_node(&e->node);
 
 		memset(&tmp, 0, sizeof(tmp));
-		err = drm_mm_insert_node(&mm, &tmp, nsize, n,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, 0,
+						 DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime)\n",
 			       nsize, n);
@@ -996,9 +985,9 @@ static int igt_evict(void *ignored)
 			goto out;
 		}
 
-		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_size) {
 			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx, hole-follows?=%d\n",
-			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, !!tmp.hole_size);
 
 			drm_mm_remove_node(&tmp);
 			goto out;
@@ -1058,8 +1047,7 @@ static int igt_evict_range(void *ignored)
 	for (n = 0; n < size; n++) {
 		int err;
 
-		err = drm_mm_insert_node(&mm, &nodes[n].node, 1, 0,
-					 DRM_MM_SEARCH_DEFAULT);
+		err = drm_mm_insert_node(&mm, &nodes[n].node, 1);
 		if (err) {
 			pr_err("insert failed, step %d\n", n);
 			ret = err;
@@ -1101,7 +1089,7 @@ static int igt_evict_range(void *ignored)
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_in_range(&mm, &tmp, nsize, n,
 						  range_start, range_end,
-						  DRM_MM_SEARCH_DEFAULT);
+						  DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d\n",
 			       nsize, n);
@@ -1118,9 +1106,9 @@ static int igt_evict_range(void *ignored)
 			goto out;
 		}
 
-		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_size) {
 			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d], start=%llx, hole-follows?=%d\n",
-			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, !!tmp.hole_size);
 
 			drm_mm_remove_node(&tmp);
 			goto out;
@@ -1172,7 +1160,7 @@ static int igt_evict_range(void *ignored)
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_in_range(&mm, &tmp, nsize, n,
 						  range_start, range_end,
-						  DRM_MM_SEARCH_DEFAULT);
+						  DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime)\n",
 			       nsize, n);
@@ -1189,9 +1177,9 @@ static int igt_evict_range(void *ignored)
 			goto out;
 		}
 
-		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
+		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_size) {
 			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx, hole-follows?=%d\n",
-			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
+			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, !!tmp.hole_size);
 
 			drm_mm_remove_node(&tmp);
 			goto out;
@@ -1251,15 +1239,14 @@ static int igt_topdown(void *ignored)
 		int err;
 
 		err = drm_mm_insert_node_generic(&mm, &nodes[n], 1, 0, 0,
-						 DRM_MM_SEARCH_BELOW,
-						 DRM_MM_CREATE_TOP);
+						 DRM_MM_INSERT_HIGH);
 		if (err) {
 			pr_err("insert failed, step %d\n", n);
 			ret = err;
 			goto out;
 		}
 
-		if (nodes[n].hole_follows) {
+		if (nodes[n].hole_size) {
 			pr_err("hole after topdown insert %d, start=%llx\n",
 			       n, nodes[n].start);
 			goto out;
@@ -1278,15 +1265,14 @@ static int igt_topdown(void *ignored)
 
 			node = &nodes[order[(o + m) % size]];
 			err = drm_mm_insert_node_generic(&mm, node, 1, 0, 0,
-							 DRM_MM_SEARCH_BELOW,
-							 DRM_MM_CREATE_TOP);
+							 DRM_MM_INSERT_HIGH);
 			if (err) {
 				pr_err("insert failed, step %d/%d\n", m, n);
 				ret = err;
 				goto out;
 			}
 
-			if (node->hole_follows) {
+			if (node->hole_size) {
 				pr_err("hole after topdown insert %d/%d, start=%llx\n",
 				       m, n, node->start);
 				goto out;
@@ -1349,8 +1335,7 @@ static int igt_topdown_align(void *ignored)
 			u64 align = BIT_ULL(n);
 
 			err = drm_mm_insert_node_generic(&mm, &tmp, 1, align, 0,
-							 DRM_MM_SEARCH_BELOW,
-							 DRM_MM_CREATE_TOP);
+							 DRM_MM_INSERT_HIGH);
 			drm_mm_remove_node(&tmp);
 			if (err) {
 				pr_err("insert failed, ret=%d\n", err);
@@ -1375,8 +1360,7 @@ static int igt_topdown_align(void *ignored)
 			u64 rem;
 
 			err = drm_mm_insert_node_generic(&mm, &tmp, 1, n, 0,
-							 DRM_MM_SEARCH_BELOW,
-							 DRM_MM_CREATE_TOP);
+							 DRM_MM_INSERT_HIGH);
 			drm_mm_remove_node(&tmp);
 			if (err) {
 				pr_err("insert failed, ret=%d\n", err);
@@ -1430,12 +1414,12 @@ static int igt_color(void *ignored)
 	struct drm_mm_node *node, *nn;
 	const struct modes {
 		const char *name;
-		unsigned int search;
-		unsigned int create;
+		unsigned int flags;
 	} modes[] = {
-		{ "default", DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT },
-		{ "best", DRM_MM_SEARCH_BEST, DRM_MM_CREATE_DEFAULT },
-		{ "top-down", DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP },
+		{ "best", 0 },
+		{ "evict", DRM_MM_INSERT_EVICT },
+		{ "top-down", DRM_MM_INSERT_HIGH },
+		{ "bottom-up", DRM_MM_INSERT_LOW },
 	};
 	int ret = -EINVAL;
 	int n, m;
@@ -1451,9 +1435,7 @@ static int igt_color(void *ignored)
 			goto out;
 		}
 
-		err = drm_mm_insert_node_generic(&mm, node, n, 0, n,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+		err = drm_mm_insert_node_generic(&mm, node, n, 0, n, 0);
 		if (err) {
 			pr_err("insert failed, step %d\n", n);
 			kfree(node);
@@ -1539,8 +1521,7 @@ static int igt_color(void *ignored)
 			}
 
 			err = drm_mm_insert_node_generic(&mm, node, n, n, n,
-							 modes[m].search,
-							 modes[m].create);
+							 modes[m].flags);
 			if (err) {
 				pr_err("%s insert failed, step %d, err=%d\n",
 				       modes[m].name, n, err);
@@ -1560,7 +1541,7 @@ static int igt_color(void *ignored)
 				goto out;
 			}
 
-			if (!node->hole_follows &&
+			if (!node->hole_size &&
 			    list_next_entry(node, node_list)->allocated) {
 				pr_err("%s colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
 				       modes[m].name,
@@ -1623,9 +1604,7 @@ static int igt_color_evict(void *ignored)
 		int err;
 
 		err = drm_mm_insert_node_generic(&mm, &nodes[n].node,
-						 1, 0, color++,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+						 1, 0, color++, 0);
 		if (err) {
 			pr_err("insert failed, step %d\n", n);
 			ret = err;
@@ -1670,8 +1649,7 @@ static int igt_color_evict(void *ignored)
 
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, c,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+						 DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%d, err=%d\n",
 			       nsize, n, c, err);
@@ -1689,7 +1667,7 @@ static int igt_color_evict(void *ignored)
 			goto out;
 		}
 
-		if (!tmp.hole_follows &&
+		if (!tmp.hole_size &&
 		    list_next_entry(&tmp, node_list)->allocated) {
 			pr_err("colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
 			       tmp.color, tmp.start, tmp.size,
@@ -1748,8 +1726,7 @@ static int igt_color_evict(void *ignored)
 
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_generic(&mm, &tmp, nsize, n, c,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+						 DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime), color=%d, err=%d\n",
 			       nsize, n, c, err);
@@ -1767,7 +1744,7 @@ static int igt_color_evict(void *ignored)
 			goto out;
 		}
 
-		if (!tmp.hole_follows &&
+		if (!tmp.hole_size &&
 		    list_next_entry(&tmp, node_list)->allocated) {
 			pr_err("colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
 			       tmp.color, tmp.start, tmp.size,
@@ -1836,9 +1813,7 @@ static int igt_color_evict_range(void *ignored)
 		int err;
 
 		err = drm_mm_insert_node_generic(&mm, &nodes[n].node,
-						 1, 0, color++,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+						 1, 0, color++, 0);
 		if (err) {
 			pr_err("insert failed, step %d\n", n);
 			ret = err;
@@ -1886,8 +1861,7 @@ static int igt_color_evict_range(void *ignored)
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_in_range_generic(&mm, &tmp, nsize, n, c,
 							  range_start, range_end,
-							  DRM_MM_SEARCH_DEFAULT,
-							  DRM_MM_CREATE_DEFAULT);
+							  DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d, color=%d, err=%d\n",
 			       nsize, n, c, err);
@@ -1905,7 +1879,7 @@ static int igt_color_evict_range(void *ignored)
 			goto out;
 		}
 
-		if (!tmp.hole_follows &&
+		if (!tmp.hole_size &&
 		    list_next_entry(&tmp, node_list)->allocated) {
 			pr_err("colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
 			       tmp.color, tmp.start, tmp.size,
@@ -1967,8 +1941,7 @@ static int igt_color_evict_range(void *ignored)
 		memset(&tmp, 0, sizeof(tmp));
 		err = drm_mm_insert_node_in_range_generic(&mm, &tmp, nsize, n, c,
 							  range_start, range_end,
-							  DRM_MM_SEARCH_DEFAULT,
-							  DRM_MM_CREATE_DEFAULT);
+							  DRM_MM_INSERT_EVICT);
 		if (err) {
 			pr_err("Failed to insert into eviction hole: size=%d, align=%d (prime), color=%d, err=%d\n",
 			       nsize, n, c, err);
@@ -1986,7 +1959,7 @@ static int igt_color_evict_range(void *ignored)
 			goto out;
 		}
 
-		if (!tmp.hole_follows &&
+		if (!tmp.hole_size &&
 		    list_next_entry(&tmp, node_list)->allocated) {
 			pr_err("colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n",
 			       tmp.color, tmp.start, tmp.size,
diff --git a/drivers/gpu/drm/sis/sis_mm.c b/drivers/gpu/drm/sis/sis_mm.c
index 03defda77766..1622db24cd39 100644
--- a/drivers/gpu/drm/sis/sis_mm.c
+++ b/drivers/gpu/drm/sis/sis_mm.c
@@ -109,8 +109,7 @@ static int sis_drm_alloc(struct drm_device *dev, struct drm_file *file,
 	if (pool == AGP_TYPE) {
 		retval = drm_mm_insert_node(&dev_priv->agp_mm,
 					    &item->mm_node,
-					    mem->size, 0,
-					    DRM_MM_SEARCH_DEFAULT);
+					    mem->size);
 		offset = item->mm_node.start;
 	} else {
 #if defined(CONFIG_FB_SIS) || defined(CONFIG_FB_SIS_MODULE)
@@ -122,8 +121,7 @@ static int sis_drm_alloc(struct drm_device *dev, struct drm_file *file,
 #else
 		retval = drm_mm_insert_node(&dev_priv->vram_mm,
 					    &item->mm_node,
-					    mem->size, 0,
-					    DRM_MM_SEARCH_DEFAULT);
+					    mem->size);
 		offset = item->mm_node.start;
 #endif
 	}
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 95e622e31931..96e76244c571 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -98,8 +98,8 @@ static int tegra_bo_iommu_map(struct tegra_drm *tegra, struct tegra_bo *bo)
 	if (!bo->mm)
 		return -ENOMEM;
 
-	err = drm_mm_insert_node_generic(&tegra->mm, bo->mm, bo->gem.size,
-					 PAGE_SIZE, 0, 0, 0);
+	err = drm_mm_insert_node_generic(&tegra->mm,
+					 bo->mm, bo->gem.size, PAGE_SIZE, 0, 0);
 	if (err < 0) {
 		dev_err(tegra->drm->dev, "out of I/O virtual memory: %zd\n",
 			err);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_manager.c b/drivers/gpu/drm/ttm/ttm_bo_manager.c
index aa0bd054d3e9..a3ddc95825f7 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_manager.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_manager.c
@@ -54,9 +54,8 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 {
 	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
 	struct drm_mm *mm = &rman->mm;
-	struct drm_mm_node *node = NULL;
-	enum drm_mm_search_flags sflags = DRM_MM_SEARCH_BEST;
-	enum drm_mm_allocator_flags aflags = DRM_MM_CREATE_DEFAULT;
+	struct drm_mm_node *node;
+	enum drm_mm_flags flags;
 	unsigned long lpfn;
 	int ret;
 
@@ -68,16 +67,14 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 	if (!node)
 		return -ENOMEM;
 
-	if (place->flags & TTM_PL_FLAG_TOPDOWN) {
-		sflags = DRM_MM_SEARCH_BELOW;
-		aflags = DRM_MM_CREATE_TOP;
-	}
+	flags = 0;
+	if (place->flags & TTM_PL_FLAG_TOPDOWN)
+		flags = DRM_MM_INSERT_HIGH;
 
 	spin_lock(&rman->lock);
 	ret = drm_mm_insert_node_in_range_generic(mm, node, mem->num_pages,
 					  mem->page_alignment, 0,
-					  place->fpfn, lpfn,
-					  sflags, aflags);
+					  place->fpfn, lpfn, flags);
 	spin_unlock(&rman->lock);
 
 	if (unlikely(ret)) {
diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
index 7f08d681a74b..6af654c013a4 100644
--- a/drivers/gpu/drm/vc4/vc4_crtc.c
+++ b/drivers/gpu/drm/vc4/vc4_crtc.c
@@ -590,7 +590,7 @@ static int vc4_crtc_atomic_check(struct drm_crtc *crtc,
 
 	spin_lock_irqsave(&vc4->hvs->mm_lock, flags);
 	ret = drm_mm_insert_node(&vc4->hvs->dlist_mm, &vc4_state->mm,
-				 dlist_count, 1, 0);
+				 dlist_count);
 	spin_unlock_irqrestore(&vc4->hvs->mm_lock, flags);
 	if (ret)
 		return ret;
diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
index 6fbab1c82cb1..4aba0fa56289 100644
--- a/drivers/gpu/drm/vc4/vc4_hvs.c
+++ b/drivers/gpu/drm/vc4/vc4_hvs.c
@@ -141,8 +141,7 @@ static int vc4_hvs_upload_linear_kernel(struct vc4_hvs *hvs,
 	int ret, i;
 	u32 __iomem *dst_kernel;
 
-	ret = drm_mm_insert_node(&hvs->dlist_mm, space, VC4_KERNEL_DWORDS, 1,
-				 0);
+	ret = drm_mm_insert_node(&hvs->dlist_mm, space, VC4_KERNEL_DWORDS);
 	if (ret) {
 		DRM_ERROR("Failed to allocate space for filter kernel: %d\n",
 			  ret);
diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
index 881bf489478b..03068ab9bdc1 100644
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -514,9 +514,9 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
 	if (lbm_size) {
 		if (!vc4_state->lbm.allocated) {
 			spin_lock_irqsave(&vc4->hvs->mm_lock, irqflags);
-			ret = drm_mm_insert_node(&vc4->hvs->lbm_mm,
-						 &vc4_state->lbm,
-						 lbm_size, 32, 0);
+			ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
+							 &vc4_state->lbm,
+							 lbm_size, 32, 0, 0);
 			spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
 		} else {
 			WARN_ON_ONCE(lbm_size != vc4_state->lbm.size);
diff --git a/drivers/gpu/drm/via/via_mm.c b/drivers/gpu/drm/via/via_mm.c
index a04ef1c992d9..4217d66a5cc6 100644
--- a/drivers/gpu/drm/via/via_mm.c
+++ b/drivers/gpu/drm/via/via_mm.c
@@ -140,11 +140,11 @@ int via_mem_alloc(struct drm_device *dev, void *data,
 	if (mem->type == VIA_MEM_AGP)
 		retval = drm_mm_insert_node(&dev_priv->agp_mm,
 					    &item->mm_node,
-					    tmpSize, 0, DRM_MM_SEARCH_DEFAULT);
+					    tmpSize);
 	else
 		retval = drm_mm_insert_node(&dev_priv->vram_mm,
 					    &item->mm_node,
-					    tmpSize, 0, DRM_MM_SEARCH_DEFAULT);
+					    tmpSize);
 	if (retval)
 		goto fail_alloc;
 
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
index aa04fb0159a7..77cb7c627e09 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
@@ -673,16 +673,10 @@ static bool vmw_cmdbuf_try_alloc(struct vmw_cmdbuf_man *man,
  
 	memset(info->node, 0, sizeof(*info->node));
 	spin_lock_bh(&man->lock);
-	ret = drm_mm_insert_node_generic(&man->mm, info->node, info->page_size,
-					 0, 0,
-					 DRM_MM_SEARCH_DEFAULT,
-					 DRM_MM_CREATE_DEFAULT);
+	ret = drm_mm_insert_node(&man->mm, info->node, info->page_size);
 	if (ret) {
 		vmw_cmdbuf_man_process(man);
-		ret = drm_mm_insert_node_generic(&man->mm, info->node,
-						 info->page_size, 0, 0,
-						 DRM_MM_SEARCH_DEFAULT,
-						 DRM_MM_CREATE_DEFAULT);
+		ret = drm_mm_insert_node(&man->mm, info->node, info->page_size);
 	}
 
 	spin_unlock_bh(&man->lock);
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 884166b91e90..511a790e05ed 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -54,32 +54,27 @@
 #define DRM_MM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
 #endif
 
-enum drm_mm_search_flags {
-	DRM_MM_SEARCH_DEFAULT =		0,
-	DRM_MM_SEARCH_BEST =		1 << 0,
-	DRM_MM_SEARCH_BELOW =		1 << 1,
+enum drm_mm_flags {
+	DRM_MM_INSERT_BEST = 0,
+	DRM_MM_INSERT_LOW,
+	DRM_MM_INSERT_HIGH,
+	DRM_MM_INSERT_EVICT,
 };
 
-enum drm_mm_allocator_flags {
-	DRM_MM_CREATE_DEFAULT =		0,
-	DRM_MM_CREATE_TOP =		1 << 0,
-};
-
-#define DRM_MM_BOTTOMUP DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT
-#define DRM_MM_TOPDOWN DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP
-
 struct drm_mm_node {
+	struct drm_mm *mm;
 	struct list_head node_list;
 	struct list_head hole_stack;
 	struct rb_node rb;
-	unsigned hole_follows : 1;
-	unsigned allocated : 1;
-	bool scanned_block : 1;
-	unsigned long color;
+	struct rb_node rb_hole_size;
+	struct rb_node rb_hole_addr;
 	u64 start;
 	u64 size;
 	u64 __subtree_last;
-	struct drm_mm *mm;
+	u64 hole_size;
+	unsigned long color;
+	bool allocated : 1;
+	bool scanned_block : 1;
 #ifdef CONFIG_DRM_DEBUG_MM
 	depot_stack_handle_t stack;
 #endif
@@ -93,6 +88,8 @@ struct drm_mm {
 	struct drm_mm_node head_node;
 	/* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */
 	struct rb_root interval_tree;
+	struct rb_root holes_size;
+	struct rb_root holes_addr;
 
 	void (*color_adjust)(const struct drm_mm_node *node,
 			     unsigned long color,
@@ -166,7 +163,7 @@ static inline u64 __drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
  */
 static inline u64 drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
 {
-	DRM_MM_BUG_ON(!hole_node->hole_follows);
+	DRM_MM_BUG_ON(!hole_node->hole_size);
 	return __drm_mm_hole_node_start(hole_node);
 }
 
@@ -216,14 +213,6 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
 #define drm_mm_for_each_node_safe(entry, n, mm) \
 	list_for_each_entry_safe(entry, n, __drm_mm_nodes(mm), node_list)
 
-#define __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, backwards) \
-	for (entry = list_entry((backwards) ? (mm)->hole_stack.prev : (mm)->hole_stack.next, struct drm_mm_node, hole_stack); \
-	     &entry->hole_stack != &(mm)->hole_stack ? \
-	     hole_start = drm_mm_hole_node_start(entry), \
-	     hole_end = drm_mm_hole_node_end(entry), \
-	     1 : 0; \
-	     entry = list_entry((backwards) ? entry->hole_stack.prev : entry->hole_stack.next, struct drm_mm_node, hole_stack))
-
 /**
  * drm_mm_for_each_hole - iterator to walk over all holes
  * @entry: drm_mm_node used internally to track progress
@@ -239,25 +228,52 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
  * Implementation Note:
  * We need to inline list_for_each_entry in order to be able to set hole_start
  * and hole_end on each iteration while keeping the macro sane.
- *
- * The __drm_mm_for_each_hole version is similar, but with added support for
- * going backwards.
  */
-#define drm_mm_for_each_hole(entry, mm, hole_start, hole_end) \
-	__drm_mm_for_each_hole(entry, mm, hole_start, hole_end, 0)
+#define drm_mm_for_each_hole(pos, mm, hole_start, hole_end) \
+	for (pos = list_first_entry(&(mm)->hole_stack, \
+				    typeof(*pos), hole_stack); \
+	     &pos->hole_stack != &(mm)->hole_stack ? \
+	     hole_start = drm_mm_hole_node_start(pos), \
+	     hole_end = hole_start + pos->hole_size : 0; \
+	     pos = list_next_entry(pos, hole_stack))
 
 /*
  * Basic range manager support (drm_mm.c)
  */
 int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node);
+int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
+					struct drm_mm_node *node,
+					u64 size,
+					u64 alignment,
+					unsigned long color,
+					u64 start,
+					u64 end,
+					unsigned int flags);
+
+/**
+ * drm_mm_insert_node_generic - search for space and insert @node
+ * @mm: drm_mm to allocate from
+ * @node: preallocate node to insert
+ * @size: size of the allocation
+ * @alignment: alignment of the allocation
+ * @color: opaque tag value to use for this node
+ * @flags: flags to fine-tune the allocation search and creation
+ *
+ * The preallocated node must be cleared to 0.
+ *
+ * Returns:
+ * 0 on success, -ENOSPC if there's no suitable hole.
+ */
+static inline int
+drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
+			   u64 size, u64 alignment,
+			   unsigned long color,
+			   unsigned int flags)
+{
+	return drm_mm_insert_node_in_range_generic(mm, node, size, alignment,
+						   color, 0, U64_MAX, flags);
+}
 
-int drm_mm_insert_node_generic(struct drm_mm *mm,
-			       struct drm_mm_node *node,
-			       u64 size,
-			       u64 alignment,
-			       unsigned long color,
-			       enum drm_mm_search_flags sflags,
-			       enum drm_mm_allocator_flags aflags);
 /**
  * drm_mm_insert_node - search for space and insert @node
  * @mm: drm_mm to allocate from
@@ -276,23 +292,11 @@ int drm_mm_insert_node_generic(struct drm_mm *mm,
  */
 static inline int drm_mm_insert_node(struct drm_mm *mm,
 				     struct drm_mm_node *node,
-				     u64 size,
-				     u64 alignment,
-				     enum drm_mm_search_flags flags)
+				     u64 size)
 {
-	return drm_mm_insert_node_generic(mm, node, size, alignment, 0, flags,
-					  DRM_MM_CREATE_DEFAULT);
+	return drm_mm_insert_node_generic(mm, node, size, 0, 0, 0);
 }
 
-int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
-					struct drm_mm_node *node,
-					u64 size,
-					u64 alignment,
-					unsigned long color,
-					u64 start,
-					u64 end,
-					enum drm_mm_search_flags sflags,
-					enum drm_mm_allocator_flags aflags);
 /**
  * drm_mm_insert_node_in_range - ranged search for space and insert @node
  * @mm: drm_mm to allocate from
@@ -317,11 +321,10 @@ static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
 					      u64 alignment,
 					      u64 start,
 					      u64 end,
-					      enum drm_mm_search_flags flags)
+					      unsigned int flags)
 {
 	return drm_mm_insert_node_in_range_generic(mm, node, size, alignment,
-						   0, start, end, flags,
-						   DRM_MM_CREATE_DEFAULT);
+						   0, start, end, flags);
 }
 
 void drm_mm_remove_node(struct drm_mm_node *node);
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread
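
A quick sketch of how a caller sees the reworked interface above: the paired
search/allocator flags collapse into one insertion mode. The snippet below is
hypothetical driver code (mm, size, alignment, color, start and end are
placeholders), echoing the i915_vma_insert() pattern from the hunks:

	struct drm_mm_node node = {}; /* must be zeroed before insertion */
	int err;

	/* prefer the top of the range, as PIN_HIGH does */
	err = drm_mm_insert_node_in_range_generic(&mm, &node, size, alignment,
						  color, start, end,
						  DRM_MM_INSERT_HIGH);
	if (err == -ENOSPC) {
		/* after evicting, retry preferring the freshly freed hole */
		err = drm_mm_insert_node_in_range_generic(&mm, &node, size,
							  alignment, color,
							  start, end,
							  DRM_MM_INSERT_EVICT);
	}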

* [PATCH 34/34] drm: kselftest for drm_mm and bottom-up allocation
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (32 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 33/34] drm: Fix drm_mm search and insertion Chris Wilson
@ 2016-12-12 11:53 ` Chris Wilson
  2016-12-14  9:10   ` Joonas Lahtinen
  2016-12-12 12:15 ` ✓ Fi.CI.BAT: success for series starting with [01/34] drm/i915: Use the MRU stack search after evicting Patchwork
  34 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-12 11:53 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, joonas.lahtinen

Check that if we request bottom-up allocation from drm_mm_insert_node()
we receive the next available hole from the bottom.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |  1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 89 ++++++++++++++++++++++++++++
 2 files changed, 90 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index fb9d704e7943..bf0cd64264c8 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -9,6 +9,7 @@ selftest(color_evict_range, igt_color_evict_range)
 selftest(color_evict, igt_color_evict)
 selftest(color, igt_color)
 selftest(topdown_align, igt_topdown_align)
+selftest(bottomup, igt_bottomup)
 selftest(topdown, igt_topdown)
 selftest(evict_range, igt_evict_range)
 selftest(evict, igt_evict)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index f2f802c60e4c..30d04dd9daef 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -1304,6 +1304,95 @@ static int igt_topdown(void *ignored)
 	return ret;
 }
 
+static int igt_bottomup(void *ignored)
+{
+	u32 lcg_state = random_seed;
+	const int size = 8192;
+	unsigned long *bitmap;
+	struct drm_mm mm;
+	struct drm_mm_node *nodes, *node, *next;
+	int *order, n, m, o = 0;
+	int ret;
+
+	ret = -ENOMEM;
+	nodes = vzalloc(size * sizeof(*nodes));
+	if (!nodes)
+		goto err;
+
+	bitmap = kzalloc(size / BITS_PER_LONG * sizeof(unsigned long),
+			 GFP_TEMPORARY);
+	if (!bitmap)
+		goto err_nodes;
+
+	order = drm_random_order(size, &lcg_state);
+	if (!order)
+		goto err_bitmap;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, size);
+	for (n = 0; n < size; n++) {
+		int err;
+
+		err = drm_mm_insert_node_generic(&mm, &nodes[n], 1, 0, 0,
+						 DRM_MM_INSERT_LOW);
+		if (err) {
+			pr_err("bottomup insert failed, step %d\n", n);
+			ret = err;
+			goto out;
+		}
+
+		if (nodes[n].hole_size) {
+			pr_err("hole after bottomup insert %d, start=%llx\n",
+			       n, nodes[n].start);
+			goto out;
+		}
+	}
+
+	drm_for_each_prime(n, size) {
+		for (m = 0; m < n; m++) {
+			node = &nodes[order[(o + m) % size]];
+			drm_mm_remove_node(node);
+			__set_bit(node->start, bitmap);
+		}
+
+		for (m = 0; m < n; m++) {
+			int err, last;
+
+			node = &nodes[order[(o + m) % size]];
+			err = drm_mm_insert_node_generic(&mm, node, 1, 0, 0,
+							 DRM_MM_INSERT_LOW);
+			if (err) {
+				pr_err("insert failed, step %d/%d\n", m, n);
+				ret = err;
+				goto out;
+			}
+
+			last = find_first_bit(bitmap, size);
+			if (node->start != last) {
+				pr_err("node %d/%d not inserted into bottom hole, expected %d, found %lld\n",
+				       m, n, last, node->start);
+				goto out;
+			}
+			__clear_bit(last, bitmap);
+		}
+
+		o += n;
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	kfree(order);
+err_bitmap:
+	kfree(bitmap);
+err_nodes:
+	vfree(nodes);
+err:
+	return ret;
+}
+
 static int igt_topdown_align(void *ignored)
 {
 	struct drm_mm mm;
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* ✓ Fi.CI.BAT: success for series starting with [01/34] drm/i915: Use the MRU stack search after evicting
  2016-12-12 11:53 struct drm_mm fixes Chris Wilson
                   ` (33 preceding siblings ...)
  2016-12-12 11:53 ` [PATCH 34/34] drm: kselftest for drm_mm and bottom-up allocation Chris Wilson
@ 2016-12-12 12:15 ` Patchwork
  34 siblings, 0 replies; 81+ messages in thread
From: Patchwork @ 2016-12-12 12:15 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/34] drm/i915: Use the MRU stack search after evicting
URL   : https://patchwork.freedesktop.org/series/16688/
State : success

== Summary ==

Series 16688v1 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/16688/revisions/1/mbox/


fi-bdw-5557u     total:247  pass:233  dwarn:0   dfail:0   fail:0   skip:14 
fi-bsw-n3050     total:247  pass:208  dwarn:0   dfail:0   fail:0   skip:39 
fi-bxt-t5700     total:247  pass:220  dwarn:0   dfail:0   fail:0   skip:27 
fi-byt-j1900     total:247  pass:220  dwarn:0   dfail:0   fail:0   skip:27 
fi-byt-n2820     total:247  pass:216  dwarn:0   dfail:0   fail:0   skip:31 
fi-hsw-4770      total:247  pass:228  dwarn:0   dfail:0   fail:0   skip:19 
fi-hsw-4770r     total:247  pass:228  dwarn:0   dfail:0   fail:0   skip:19 
fi-ilk-650       total:247  pass:195  dwarn:0   dfail:0   fail:0   skip:52 
fi-ivb-3520m     total:247  pass:226  dwarn:0   dfail:0   fail:0   skip:21 
fi-ivb-3770      total:247  pass:226  dwarn:0   dfail:0   fail:0   skip:21 
fi-kbl-7500u     total:247  pass:226  dwarn:0   dfail:0   fail:0   skip:21 
fi-skl-6260u     total:247  pass:234  dwarn:0   dfail:0   fail:0   skip:13 
fi-skl-6700hq    total:247  pass:227  dwarn:0   dfail:0   fail:0   skip:20 
fi-skl-6700k     total:247  pass:224  dwarn:3   dfail:0   fail:0   skip:20 
fi-skl-6770hq    total:247  pass:234  dwarn:0   dfail:0   fail:0   skip:13 
fi-snb-2520m     total:247  pass:216  dwarn:0   dfail:0   fail:0   skip:31 
fi-snb-2600      total:247  pass:215  dwarn:0   dfail:0   fail:0   skip:32 

f6a248e2507f98d7eb1f4fec8cfcbf685a33d289 drm-tip: 2016y-12m-10d-21h-47m-23s UTC integration manifest
9bcbd0a drm: kselftest for drm_mm and bottom-up allocation
55a60fe drm: Fix drm_mm search and insertion
d8fd7a4 drm: Apply tight eviction scanning to color_adjust
8f4381e drm: Simplify drm_mm scan-list manipulation
51a654e drm: Optimise power-of-two alignments in drm_mm_scan_add_block()
2b3eddf drm: Compute tight evictions for drm_mm_scan
7d70464 drm: Fix application of color vs range restriction when scanning drm_mm
0a9ed47 drm: Unconditionally do the range check in drm_mm_scan_add_block()
e65ea2c drm: Rename prev_node to hole in drm_mm_scan_add_block()
ca0c94d drm: Extract struct drm_mm_scan from struct drm_mm
e9e37bf drm: Compile time enabling for asserts in drm_mm
51b2f9b drm: Simplify drm_mm_clean()
65df54f drm: Constify the drm_mm API
abdedf3 drm: Promote drm_mm alignment to u64
5de5a7d drm/i915: Build DRM range manager selftests for CI
1398fe9 drm: kselftest for drm_mm and restricted color eviction
52dc8d8 drm: kselftest for drm_mm and color eviction
80d106f drm: kselftest for drm_mm and color adjustment
b7a1007 drm: kselftest for drm_mm and top-down alignment
bacf398 drm: kselftest for drm_mm and top-down allocation
ebd2416 drm: kselftest for drm_mm and range restricted eviction
b771099 drm: kselftest for drm_mm and eviction
90aac01 drm: kselftest for drm_mm and alignment
13c0d0e drm: kselftest for drm_mm_insert_node_in_range()
e6bcf17 drm: kselftest for drm_mm_replace_node()
74a4569 drm: kselftest for drm_mm_insert_node()
978e34a drm: kselftest for drm_mm_reserve_node()
f93703b drm: Add a simple prime number generator
6aa707d drm: Add a simple linear congruent generator PRNG
371730d drm: kselftest for drm_mm_init()
3193e5f drm: Add some kselftests for the DRM range manager (struct drm_mm)
fa5a232 drm: Add drm_mm_for_each_node_safe()
28b6aa0 drm/i915: Simplify i915_gtt_color_adjust()
a332b49 drm/i915: Use the MRU stack search after evicting

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_3267/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 21/34] drm: Promote drm_mm alignment to u64
  2016-12-12 11:53 ` [PATCH 21/34] drm: Promote drm_mm alignment to u64 Chris Wilson
@ 2016-12-12 12:39   ` Christian König
  0 siblings, 0 replies; 81+ messages in thread
From: Christian König @ 2016-12-12 12:39 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx, joonas.lahtinen

On 12.12.2016 at 12:53, Chris Wilson wrote:
> In places (e.g. i915.ko), the alignment is exported to userspace as u64
> and there now exists hardware for which we can indeed utilize a u64
> alignment. As such, we need to keep 64bit integers throughout when
> handling alignment.
>
> Testcase: igt/drm_mm/align64
> Testcase: igt/gem_exec_alignment
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Nice, we have a few things which could use an alignment above 4GB as well.

Patch is Reviewed-by: Christian König <christian.koenig@amd.com>.

> ---
>   drivers/gpu/drm/drm_mm.c                | 37 +++++++++++++++------------------
>   drivers/gpu/drm/selftests/test-drm_mm.c |  2 +-
>   include/drm/drm_mm.h                    | 16 +++++++-------
>   3 files changed, 26 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
> index 6e0735539545..621a5792a0dd 100644
> --- a/drivers/gpu/drm/drm_mm.c
> +++ b/drivers/gpu/drm/drm_mm.c
> @@ -93,12 +93,12 @@
>   
>   static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
>   						u64 size,
> -						unsigned alignment,
> +						u64 alignment,
>   						unsigned long color,
>   						enum drm_mm_search_flags flags);
>   static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
>   						u64 size,
> -						unsigned alignment,
> +						u64 alignment,
>   						unsigned long color,
>   						u64 start,
>   						u64 end,
> @@ -227,7 +227,7 @@ static void drm_mm_interval_tree_add_node(struct drm_mm_node *hole_node,
>   
>   static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
>   				 struct drm_mm_node *node,
> -				 u64 size, unsigned alignment,
> +				 u64 size, u64 alignment,
>   				 unsigned long color,
>   				 enum drm_mm_allocator_flags flags)
>   {
> @@ -246,10 +246,9 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
>   		adj_start = adj_end - size;
>   
>   	if (alignment) {
> -		u64 tmp = adj_start;
> -		unsigned rem;
> +		u64 rem;
>   
> -		rem = do_div(tmp, alignment);
> +		div64_u64_rem(adj_start, alignment, &rem);
>   		if (rem) {
>   			if (flags & DRM_MM_CREATE_TOP)
>   				adj_start -= rem;
> @@ -376,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node);
>    * 0 on success, -ENOSPC if there's no suitable hole.
>    */
>   int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
> -			       u64 size, unsigned alignment,
> +			       u64 size, u64 alignment,
>   			       unsigned long color,
>   			       enum drm_mm_search_flags sflags,
>   			       enum drm_mm_allocator_flags aflags)
> @@ -398,7 +397,7 @@ EXPORT_SYMBOL(drm_mm_insert_node_generic);
>   
>   static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
>   				       struct drm_mm_node *node,
> -				       u64 size, unsigned alignment,
> +				       u64 size, u64 alignment,
>   				       unsigned long color,
>   				       u64 start, u64 end,
>   				       enum drm_mm_allocator_flags flags)
> @@ -423,10 +422,9 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
>   		adj_start = adj_end - size;
>   
>   	if (alignment) {
> -		u64 tmp = adj_start;
> -		unsigned rem;
> +		u64 rem;
>   
> -		rem = do_div(tmp, alignment);
> +		div64_u64_rem(adj_start, alignment, &rem);
>   		if (rem) {
>   			if (flags & DRM_MM_CREATE_TOP)
>   				adj_start -= rem;
> @@ -482,7 +480,7 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
>    * 0 on success, -ENOSPC if there's no suitable hole.
>    */
>   int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
> -					u64 size, unsigned alignment,
> +					u64 size, u64 alignment,
>   					unsigned long color,
>   					u64 start, u64 end,
>   					enum drm_mm_search_flags sflags,
> @@ -549,16 +547,15 @@ void drm_mm_remove_node(struct drm_mm_node *node)
>   }
>   EXPORT_SYMBOL(drm_mm_remove_node);
>   
> -static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment)
> +static int check_free_hole(u64 start, u64 end, u64 size, u64 alignment)
>   {
>   	if (end - start < size)
>   		return 0;
>   
>   	if (alignment) {
> -		u64 tmp = start;
> -		unsigned rem;
> +		u64 rem;
>   
> -		rem = do_div(tmp, alignment);
> +		div64_u64_rem(start, alignment, &rem);
>   		if (rem)
>   			start += alignment - rem;
>   	}
> @@ -568,7 +565,7 @@ static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment)
>   
>   static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
>   						      u64 size,
> -						      unsigned alignment,
> +						      u64 alignment,
>   						      unsigned long color,
>   						      enum drm_mm_search_flags flags)
>   {
> @@ -610,7 +607,7 @@ static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
>   
>   static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
>   							u64 size,
> -							unsigned alignment,
> +							u64 alignment,
>   							unsigned long color,
>   							u64 start,
>   							u64 end,
> @@ -728,7 +725,7 @@ EXPORT_SYMBOL(drm_mm_replace_node);
>    */
>   void drm_mm_init_scan(struct drm_mm *mm,
>   		      u64 size,
> -		      unsigned alignment,
> +		      u64 alignment,
>   		      unsigned long color)
>   {
>   	mm->scan_color = color;
> @@ -761,7 +758,7 @@ EXPORT_SYMBOL(drm_mm_init_scan);
>    */
>   void drm_mm_init_scan_with_range(struct drm_mm *mm,
>   				 u64 size,
> -				 unsigned alignment,
> +				 u64 alignment,
>   				 unsigned long color,
>   				 u64 start,
>   				 u64 end)
> diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
> index 02f10d9d1396..cad42919f15f 100644
> --- a/drivers/gpu/drm/selftests/test-drm_mm.c
> +++ b/drivers/gpu/drm/selftests/test-drm_mm.c
> @@ -775,7 +775,7 @@ static int igt_align64(void *ignored)
>   
>   static void show_scan(const struct drm_mm *scan)
>   {
> -	pr_info("scan: hit [%llx, %llx], size=%lld, align=%d, color=%ld\n",
> +	pr_info("scan: hit [%llx, %llx], size=%lld, align=%lld, color=%ld\n",
>   		scan->scan_hit_start, scan->scan_hit_end,
>   		scan->scan_size, scan->scan_alignment, scan->scan_color);
>   }
> diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
> index 8faa28ad97b3..e3322f92633e 100644
> --- a/include/drm/drm_mm.h
> +++ b/include/drm/drm_mm.h
> @@ -92,12 +92,12 @@ struct drm_mm {
>   	struct rb_root interval_tree;
>   
>   	unsigned int scan_check_range : 1;
> -	unsigned scan_alignment;
> +	unsigned int scanned_blocks;
>   	unsigned long scan_color;
> +	u64 scan_alignment;
>   	u64 scan_size;
>   	u64 scan_hit_start;
>   	u64 scan_hit_end;
> -	unsigned scanned_blocks;
>   	u64 scan_start;
>   	u64 scan_end;
>   	struct drm_mm_node *prev_scanned_node;
> @@ -242,7 +242,7 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node);
>   int drm_mm_insert_node_generic(struct drm_mm *mm,
>   			       struct drm_mm_node *node,
>   			       u64 size,
> -			       unsigned alignment,
> +			       u64 alignment,
>   			       unsigned long color,
>   			       enum drm_mm_search_flags sflags,
>   			       enum drm_mm_allocator_flags aflags);
> @@ -265,7 +265,7 @@ int drm_mm_insert_node_generic(struct drm_mm *mm,
>   static inline int drm_mm_insert_node(struct drm_mm *mm,
>   				     struct drm_mm_node *node,
>   				     u64 size,
> -				     unsigned alignment,
> +				     u64 alignment,
>   				     enum drm_mm_search_flags flags)
>   {
>   	return drm_mm_insert_node_generic(mm, node, size, alignment, 0, flags,
> @@ -275,7 +275,7 @@ static inline int drm_mm_insert_node(struct drm_mm *mm,
>   int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
>   					struct drm_mm_node *node,
>   					u64 size,
> -					unsigned alignment,
> +					u64 alignment,
>   					unsigned long color,
>   					u64 start,
>   					u64 end,
> @@ -302,7 +302,7 @@ int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
>   static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
>   					      struct drm_mm_node *node,
>   					      u64 size,
> -					      unsigned alignment,
> +					      u64 alignment,
>   					      u64 start,
>   					      u64 end,
>   					      enum drm_mm_search_flags flags)
> @@ -344,11 +344,11 @@ __drm_mm_interval_first(struct drm_mm *mm, u64 start, u64 last);
>   
>   void drm_mm_init_scan(struct drm_mm *mm,
>   		      u64 size,
> -		      unsigned alignment,
> +		      u64 alignment,
>   		      unsigned long color);
>   void drm_mm_init_scan_with_range(struct drm_mm *mm,
>   				 u64 size,
> -				 unsigned alignment,
> +				 u64 alignment,
>   				 unsigned long color,
>   				 u64 start,
>   				 u64 end);


_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 28/34] drm: Fix application of color vs range restriction when scanning drm_mm
  2016-12-12 11:53 ` [PATCH 28/34] drm: Fix application of color vs range restriction when scanning drm_mm Chris Wilson
@ 2016-12-12 14:57   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-12 14:57 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> The range restriction should be applied after the color adjustment, or
> else we may inadvertently apply the color adjustment to the restricted
> hole (and not against its neighbours).
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread
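
The ordering fix described in the quoted commit message can be pictured as a
small helper; the names below are illustrative, not the actual
drm_mm_scan_add_block() hunks, and min()/max() are the usual kernel helpers:

	static void scan_adjust_hole(const struct drm_mm *mm,
				     const struct drm_mm_node *node,
				     unsigned long color,
				     u64 hole_start, u64 hole_end,
				     u64 range_start, u64 range_end,
				     u64 *adj_start, u64 *adj_end)
	{
		u64 col_start = hole_start, col_end = hole_end;

		/* colour adjustment first, against the hole's real neighbours */
		if (mm->color_adjust)
			mm->color_adjust(node, color, &col_start, &col_end);

		/* only then restrict the candidate hole to the scanned range */
		*adj_start = max(col_start, range_start);
		*adj_end = min(col_end, range_end);
	}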

* Re: [PATCH 01/34] drm/i915: Use the MRU stack search after evicting
  2016-12-12 11:53 ` [PATCH 01/34] drm/i915: Use the MRU stack search after evicting Chris Wilson
@ 2016-12-13  9:29   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13  9:29 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> When we evict from the GTT to make room for an object, the hole we
> create is put onto the MRU stack inside the drm_mm range manager. On the
> next search pass, we can speed up a PIN_HIGH allocation by referencing
> that stack for the new hole.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 02/34] drm/i915: Simplify i915_gtt_color_adjust()
  2016-12-12 11:53 ` [PATCH 02/34] drm/i915: Simplify i915_gtt_color_adjust() Chris Wilson
@ 2016-12-13  9:32   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13  9:32 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> If we remember that node_list is a circular list containing the fake
> head_node, we can use a simple list_next_entry() and skip the NULL check
> for the allocated check against the head_node.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

I'm not sure whether the code should be sprinkled with comments about the
oddly behaving head_node; I can see somebody forgetting about it in the
future, too.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread
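
The simplification the commit message describes would leave roughly this
shape (a sketch from the description only; the i915_gem_gtt.c hunks are not
quoted in this reply, so details such as the 4096 stride are assumptions):

	static void i915_gtt_color_adjust(const struct drm_mm_node *node,
					  unsigned long color,
					  u64 *start, u64 *end)
	{
		if (node->allocated && node->color != color)
			*start += 4096;

		/* node_list is circular and contains the fake head_node, which
		 * is never marked allocated, so no NULL check is needed here */
		node = list_next_entry(node, node_list);
		if (node->allocated && node->color != color)
			*end -= 4096;
	}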

* Re: [PATCH 03/34] drm: Add drm_mm_for_each_node_safe()
  2016-12-12 11:53 ` [PATCH 03/34] drm: Add drm_mm_for_each_node_safe() Chris Wilson
@ 2016-12-13  9:35   ` Joonas Lahtinen
  2016-12-14 11:39     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13  9:35 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> A complement to drm_mm_for_each_node(), wraps list_for_each_entry_safe()
> for walking the list of nodes safe against removal.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +#define __drm_mm_nodes(mm) (&(mm)->head_node.node_list)

Not static inline for type checking?

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm)
  2016-12-12 11:53 ` [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm) Chris Wilson
@ 2016-12-13  9:58   ` Joonas Lahtinen
  2016-12-13 10:21     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13  9:58 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> First we introduce a smattering of infrastructure for writing selftests.
> The idea is that we have a test module that exercises a particular
> portion of the exported API, and that module provides a set of tests
> that can either be run as an ensemble via kselftest or individually via
> an igt harness (in this case igt/drm_mm). To accommodate selecting
> individual tests, we export a boolean parameter to control selection of
> each test - that is hidden inside a bunch of reusable boilerplate macros
> to keep writing the tests simple.
> 
> Testcase: igt/drm_mm
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Acked-by: Christian König <christian.koenig@amd.com>

<SNIP>

> @@ -48,6 +48,19 @@ config DRM_DEBUG_MM
>  
>  	  If in doubt, say "N".
>  
> +config DRM_DEBUG_MM_SELFTEST
> +	tristate "kselftests for DRM range manager (struct drm_mm)"
> +	depends on DRM
> +	depends on DEBUG_KERNEL
> +	default n
> +	help
> +	  This option provides a kernel module that can be used to test
> +	  the DRM range manager (drm_mm) and its API. This option is not
> +	  useful for distributions or general kernels, but only for kernel
> +	  developers working on DRM and associated drivers.
> +
> +	  Say N if you are unsure

"If in doubt" is rather de-facto, at least add a period at the end.

> +++ b/drivers/gpu/drm/selftests/test-drm_mm.c
> @@ -0,0 +1,47 @@
> +/*
> + * Test cases for the drm_mm range manager
> + */
> +
> +#define pr_fmt(fmt) "drm_mm: " fmt
> +
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/random.h>
> +#include <linux/vmalloc.h>
> +
> +#include <drm/drm_mm.h>
> +
> +#define TESTS "drm_mm_selftests.h"
> +#include "drm_selftest.h"
> +
> +static unsigned int random_seed = 0x12345678;

This could be build time, depending on the intended use.

Other than the reversed listing of tests (just for a module_param,
where the kerneldoc doesn't make any promises), can't nag of anything.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread
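
The boilerplate the cover text refers to is an X-macro pattern over the
drm_mm_selftests.h list; a sketch of the idea (struct and parameter names
are illustrative, not necessarily what drm_selftest.h ends up using):

	#define selftest(name, func) static int func(void *ignored);
	#include TESTS
	#undef selftest

	/* one boolean module parameter per test, so igt can run them singly */
	#define selftest(name, func) \
		static bool igt__##name = true; \
		module_param_named(igt__##name, igt__##name, bool, 0400);
	#include TESTS
	#undef selftest

	#define selftest(name, func) { #name, func, &igt__##name },
	static const struct subtest {
		const char *name;
		int (*run)(void *ignored);
		bool *enabled;
	} subtests[] = {
	#include TESTS
	};
	#undef selftest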

* Re: [PATCH 05/34] drm: kselftest for drm_mm_init()
  2016-12-12 11:53 ` [PATCH 05/34] drm: kselftest for drm_mm_init() Chris Wilson
@ 2016-12-13 10:16   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 10:16 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Simple first test to just exercise initialisation of struct drm_mm.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +	drm_mm_for_each_hole(hole, &mm, start, end) {
> +		if (start != 0 || end != 4096) {
> +			pr_err("empty mm has incorrect hole, found (%llx, %llx), expect (%llx, %llx)\n",
> +			       start, end,
> +			       0ull, 4096ull);
> +			goto out;
> +		}
> +	}

While in paranoia mode, make sure there is just one hole?

> +
> +	memset(&tmp, 0, sizeof(tmp));
> +	tmp.start = 0;
> +	tmp.size = 4096;
> +	ret = drm_mm_reserve_node(&mm, &tmp);
> +	if (ret) {
> +		pr_err("failed to reserve whole drm_mm\n");
> +		goto out;
> +	}

Should be no more holes.

drm_mm_for_each_hole() {
	pr_err();
	goto out;
}

> +	drm_mm_remove_node(&tmp);
> +

And it should again have a hole.

<SNIP>

> +static int igt_debug(void *ignored)
> +{
> +	struct drm_mm mm;
> +	struct drm_mm_node nodes[2];
> +	int ret;
> +
> +	drm_mm_init(&mm, 0, 4096);
> +
> +	memset(nodes, 0, sizeof(nodes));
> +	nodes[0].start = 512;
> +	nodes[0].size = 1024;
> +	ret = drm_mm_reserve_node(&mm, &nodes[0]);
> +	if (ret) {
> +		pr_err("failed to reserve node[0] {start=%lld, size=%lld)\n",
> +		       nodes[0].start, nodes[0].size);
> +		return ret;
> +	}
> +
> +	nodes[1].start = 512 - 1024 - 512;

Would be clearer if you used nodes[0].start. This also goes to a
negative address, which proves my point.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm)
  2016-12-13  9:58   ` Joonas Lahtinen
@ 2016-12-13 10:21     ` Chris Wilson
  2016-12-13 15:03       ` Joonas Lahtinen
  0 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-13 10:21 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, dri-devel

On Tue, Dec 13, 2016 at 11:58:54AM +0200, Joonas Lahtinen wrote:
> On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > +++ b/drivers/gpu/drm/selftests/test-drm_mm.c
> > @@ -0,0 +1,47 @@
> > +/*
> > + * Test cases for the drm_mm range manager
> > + */
> > +
> > +#define pr_fmt(fmt) "drm_mm: " fmt
> > +
> > +#include <linux/module.h>
> > +#include <linux/slab.h>
> > +#include <linux/random.h>
> > +#include <linux/vmalloc.h>
> > +
> > +#include <drm/drm_mm.h>
> > +
> > +#define TESTS "drm_mm_selftests.h"
> > +#include "drm_selftest.h"
> > +
> > +static unsigned int random_seed = 0x12345678;
> 
> This could be build time, depending on the intended use.

I was thinking module param for the seed, just in case we want different
patterns.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-12 11:53 ` [PATCH 06/34] drm: Add a simple linear congruent generator PRNG Chris Wilson
@ 2016-12-13 10:44   ` Joonas Lahtinen
  2016-12-13 15:16   ` David Herrmann
  1 sibling, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 10:44 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> For testing, we want a reproducible PRNG, a plain linear congruent
> generator is suitable for our very limited selftests.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +++ b/drivers/gpu/drm/lib/drm_rand.c
> @@ -0,0 +1,51 @@
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include "drm_rand.h"
> +
> +u32 drm_lcg_random(u32 *state)
> +{
> +	u32 s = *state;
> +
> +#define rol(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
> +	s = (s ^ rol(s, 5) ^ rol(s, 24)) + 0x37798849;
> +#undef rol
> +
> +	*state = s;
> +	return s;
> +}

Do state your source for checking purposes. Code is bound to be copied,
and there's no reason not to have it good.

> +EXPORT_SYMBOL(drm_lcg_random);
> +
> +int *drm_random_reorder(int *order, int count, u32 *state)
> +{
> +	int n;
> +
> +	for (n = count-1; n > 1; n--) {
> +		int r = drm_lcg_random(state) % (n + 1);
> +		if (r != n) {
> +			int tmp = order[n];
> +			order[n] = order[r];
> +			order[r] = tmp;
> +		}
> +	}
> +
> +	return order;
> +}

If you have two items, nothing ever gets swapped... So definitely add a big
disclaimer that this is not properly random, or use some more proven algorithm :)

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 23/34] drm: Simplify drm_mm_clean()
  2016-12-12 11:53 ` [PATCH 23/34] drm: Simplify drm_mm_clean() Chris Wilson
@ 2016-12-13 12:30   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 12:30 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> To test whether there are any nodes allocated within the range manager,
> we merely have to ask whether the node_list is empty.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

For documentation purposes add "Since commit ..." to the commit
message?

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm)
  2016-12-13 10:21     ` Chris Wilson
@ 2016-12-13 15:03       ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:03 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, dri-devel

On ti, 2016-12-13 at 10:21 +0000, Chris Wilson wrote:
> On Tue, Dec 13, 2016 at 11:58:54AM +0200, Joonas Lahtinen wrote:
> >
> > This could be build time, depending on the intended use.
> 
> I was thinking module param for the seed, just in case we want different
> patterns.

As discussed in IRC, good idea to have the module parameter (and print
the seed to dmesg, to allow reproducing exactly the same scenario), but I
still think each kernel build should have a different seed when not
supplied on the command line, to make the testing coverage wider.
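
Roughly like this, I mean (only a sketch; the init function name just
follows the usual module pattern and isn't from the patch):

	static unsigned int random_seed;
	module_param(random_seed, uint, 0400);

	static int __init test_drm_mm_init(void)
	{
		while (!random_seed)
			random_seed = get_random_int();

		pr_info("drm_mm selftests using random_seed=0x%x\n",
			random_seed);

		/* ... then run the individual subtests as before ... */
		return 0;
	}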

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 24/34] drm: Compile time enabling for asserts in drm_mm
  2016-12-12 11:53 ` [PATCH 24/34] drm: Compile time enabling for asserts in drm_mm Chris Wilson
@ 2016-12-13 15:04   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:04 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Use CONFIG_DRM_DEBUG_MM to conditionally enable the internal and
> validation checking using BUG_ON. Ideally these paths should all be
> exercised by CI selftests (with the asserts enabled).
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-12 11:53 ` [PATCH 06/34] drm: Add a simple linear congruent generator PRNG Chris Wilson
  2016-12-13 10:44   ` Joonas Lahtinen
@ 2016-12-13 15:16   ` David Herrmann
  2016-12-13 15:18     ` David Herrmann
                       ` (2 more replies)
  1 sibling, 3 replies; 81+ messages in thread
From: David Herrmann @ 2016-12-13 15:16 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development, dri-devel

Hey Chris

On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> For testing, we want a reproducible PRNG, a plain linear congruent
> generator is suitable for our very limited selftests.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/Kconfig        |  5 +++++
>  drivers/gpu/drm/Makefile       |  1 +
>  drivers/gpu/drm/lib/drm_rand.c | 51 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/lib/drm_rand.h |  9 ++++++++
>  4 files changed, 66 insertions(+)
>  create mode 100644 drivers/gpu/drm/lib/drm_rand.c
>  create mode 100644 drivers/gpu/drm/lib/drm_rand.h
>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index fd341ab69c46..04d1d0a32c5c 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -48,10 +48,15 @@ config DRM_DEBUG_MM
>
>           If in doubt, say "N".
>
> +config DRM_LIB_RAND
> +       bool
> +       default n
> +
>  config DRM_DEBUG_MM_SELFTEST
>         tristate "kselftests for DRM range manager (struct drm_mm)"
>         depends on DRM
>         depends on DEBUG_KERNEL
> +       select DRM_LIB_RAND
>         default n
>         help
>           This option provides a kernel module that can be used to test
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index c8aed3688b20..363eb1a23151 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -18,6 +18,7 @@ drm-y       :=        drm_auth.o drm_bufs.o drm_cache.o \
>                 drm_plane.o drm_color_mgmt.o drm_print.o \
>                 drm_dumb_buffers.o drm_mode_config.o
>
> +drm-$(CONFIG_DRM_LIB_RAND) += lib/drm_rand.o
>  obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/test-drm_mm.o
>
>  drm-$(CONFIG_COMPAT) += drm_ioc32.o
> diff --git a/drivers/gpu/drm/lib/drm_rand.c b/drivers/gpu/drm/lib/drm_rand.c
> new file mode 100644
> index 000000000000..f203c47bb525
> --- /dev/null
> +++ b/drivers/gpu/drm/lib/drm_rand.c
> @@ -0,0 +1,51 @@
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include "drm_rand.h"
> +
> +u32 drm_lcg_random(u32 *state)
> +{
> +       u32 s = *state;
> +
> +#define rol(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
> +       s = (s ^ rol(s, 5) ^ rol(s, 24)) + 0x37798849;
> +#undef rol

#include <linux/bitops.h>

static inline __u32 rol32(__u32 word, unsigned int shift);

Allows you to get rid of "rol()" and the temporary "u32 s;".
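
i.e. roughly (just a sketch):

	u32 drm_lcg_random(u32 *state)
	{
		*state = (*state ^ rol32(*state, 5) ^ rol32(*state, 24)) +
			 0x37798849;
		return *state;
	}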

> +
> +       *state = s;
> +       return s;
> +}
> +EXPORT_SYMBOL(drm_lcg_random);
> +
> +int *drm_random_reorder(int *order, int count, u32 *state)
> +{
> +       int n;
> +
> +       for (n = count-1; n > 1; n--) {
> +               int r = drm_lcg_random(state) % (n + 1);
> +               if (r != n) {

Why not drop that condition? No harm in doing the self-swap, is there?
Makes the code shorter.

> +                       int tmp = order[n];
> +                       order[n] = order[r];
> +                       order[r] = tmp;

swap() in <linux/kernel.h>

> +               }
> +       }

Is there a specific reason to do it that way? How about:

for (i = 0; i < count; ++i)
        swap(order[i], order[drm_lcg_random(state) % count]);

Much simpler and I cannot see why it wouldn't work.

> +
> +       return order;

You sure that return value serves any purpose? Is that convenience so
you can use the function in a combined statement?

> +}
> +EXPORT_SYMBOL(drm_random_reorder);
> +
> +int *drm_random_order(int count, u32 *state)
> +{
> +       int *order;
> +       int n;
> +
> +       order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
> +       if (!order)
> +               return order;
> +
> +       for (n = 0; n < count; n++)
> +               order[n] = n;
> +
> +       return drm_random_reorder(order, count, state);

Why "signed int"? We very often use "size_t" to count things. By
making the order signed you just keep running into "comparing signed
vs unsigned" warnings, don't you?

> +}
> +EXPORT_SYMBOL(drm_random_order);
> diff --git a/drivers/gpu/drm/lib/drm_rand.h b/drivers/gpu/drm/lib/drm_rand.h
> new file mode 100644
> index 000000000000..a3f22d115aac
> --- /dev/null
> +++ b/drivers/gpu/drm/lib/drm_rand.h
> @@ -0,0 +1,9 @@
> +#ifndef __DRM_RAND_H__
> +#define __DRM_RAND_H
> +
> +u32 drm_lcg_random(u32 *state);
> +
> +int *drm_random_reorder(int *order, int count, u32 *state);
> +int *drm_random_order(int count, u32 *state);
> +
> +#endif /* __DRM_RAND_H__ */

"random.h". Why the abbreviation?

Looks good to me. Only nitpicks, so feel free to ignore.

Thanks
David

> --
> 2.11.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 25/34] drm: Extract struct drm_mm_scan from struct drm_mm
  2016-12-12 11:53 ` [PATCH 25/34] drm: Extract struct drm_mm_scan from struct drm_mm Chris Wilson
@ 2016-12-13 15:17   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:17 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> The scan state occupies a large proportion of the struct drm_mm and is
> rarely used and only contains temporary state. That makes it suitable for
> moving to its own struct and onto the stack of the callers.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
> +				 struct drm_mm *mm,
>  				 u64 size,
>  				 u64 alignment,
>  				 unsigned long color,
>  				 u64 start,
>  				 u64 end)

Just call drm_mm_scan_init() here and override check_range and a few
others.

Or add __drm_mm_scan_init(), but get rid of the duplicated code.

With that changed;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-13 15:16   ` David Herrmann
@ 2016-12-13 15:18     ` David Herrmann
  2016-12-13 15:40       ` Chris Wilson
  2016-12-13 15:26     ` Chris Wilson
  2016-12-13 19:39     ` Laurent Pinchart
  2 siblings, 1 reply; 81+ messages in thread
From: David Herrmann @ 2016-12-13 15:18 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development, dri-devel

On Tue, Dec 13, 2016 at 4:16 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
> On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> for (i = 0; i < count; ++i)
>         swap(order[i], order[drm_lcg_random(state) % count]);
>
> Much simpler and I cannot see why it wouldn't work.

Hint: swap() does multiple evaluations. So this needs to be:
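
Presumably something along these lines (my reading of the hint), hoisting
the random index into a local so swap() only evaluates it once:

	for (i = 0; i < count; ++i) {
		u32 r = drm_lcg_random(state) % count;

		swap(order[i], order[r]);
	}
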
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block()
  2016-12-12 11:53 ` [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block() Chris Wilson
@ 2016-12-13 15:23   ` Joonas Lahtinen
  2016-12-13 22:28     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:23 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Acknowledging that we were building up the hole was more useful to me
> when reading the code, than knowing the relationship between this node
> and the previous node.
> 

I don't really agree. I see that we have nodes and there are holes that
follow them, so prev_node makes more sense in that mindset.

node->scanned_preceeds_hole = hole->hole_follows; and
hole->hole_follows = 1; look especially quirky to me when read aloud.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-13 15:16   ` David Herrmann
  2016-12-13 15:18     ` David Herrmann
@ 2016-12-13 15:26     ` Chris Wilson
  2016-12-13 19:39     ` Laurent Pinchart
  2 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-13 15:26 UTC (permalink / raw)
  To: David Herrmann; +Cc: Intel Graphics Development, joonas.lahtinen, dri-devel

On Tue, Dec 13, 2016 at 04:16:41PM +0100, David Herrmann wrote:
> Hey Chris
> 
> On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > For testing, we want a reproducible PRNG, a plain linear congruent
> > generator is suitable for our very limited selftests.
> >
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/Kconfig        |  5 +++++
> >  drivers/gpu/drm/Makefile       |  1 +
> >  drivers/gpu/drm/lib/drm_rand.c | 51 ++++++++++++++++++++++++++++++++++++++++++
> >  drivers/gpu/drm/lib/drm_rand.h |  9 ++++++++
> >  4 files changed, 66 insertions(+)
> >  create mode 100644 drivers/gpu/drm/lib/drm_rand.c
> >  create mode 100644 drivers/gpu/drm/lib/drm_rand.h
> >
> > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> > index fd341ab69c46..04d1d0a32c5c 100644
> > --- a/drivers/gpu/drm/Kconfig
> > +++ b/drivers/gpu/drm/Kconfig
> > @@ -48,10 +48,15 @@ config DRM_DEBUG_MM
> >
> >           If in doubt, say "N".
> >
> > +config DRM_LIB_RAND
> > +       bool
> > +       default n
> > +
> >  config DRM_DEBUG_MM_SELFTEST
> >         tristate "kselftests for DRM range manager (struct drm_mm)"
> >         depends on DRM
> >         depends on DEBUG_KERNEL
> > +       select DRM_LIB_RAND
> >         default n
> >         help
> >           This option provides a kernel module that can be used to test
> > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> > index c8aed3688b20..363eb1a23151 100644
> > --- a/drivers/gpu/drm/Makefile
> > +++ b/drivers/gpu/drm/Makefile
> > @@ -18,6 +18,7 @@ drm-y       :=        drm_auth.o drm_bufs.o drm_cache.o \
> >                 drm_plane.o drm_color_mgmt.o drm_print.o \
> >                 drm_dumb_buffers.o drm_mode_config.o
> >
> > +drm-$(CONFIG_DRM_LIB_RAND) += lib/drm_rand.o
> >  obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/test-drm_mm.o
> >
> >  drm-$(CONFIG_COMPAT) += drm_ioc32.o
> > diff --git a/drivers/gpu/drm/lib/drm_rand.c b/drivers/gpu/drm/lib/drm_rand.c
> > new file mode 100644
> > index 000000000000..f203c47bb525
> > --- /dev/null
> > +++ b/drivers/gpu/drm/lib/drm_rand.c
> > @@ -0,0 +1,51 @@
> > +#include <linux/kernel.h>
> > +#include <linux/slab.h>
> > +#include <linux/types.h>
> > +
> > +#include "drm_rand.h"
> > +
> > +u32 drm_lcg_random(u32 *state)
> > +{
> > +       u32 s = *state;
> > +
> > +#define rol(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
> > +       s = (s ^ rol(s, 5) ^ rol(s, 24)) + 0x37798849;
> > +#undef rol
> 
> #include <linux/bitops.h>
> 
> static inline __u32 rol32(__u32 word, unsigned int shift);
> 
> Allows you to get rid of "rol()" and the temporary "u32 s;".

Ta.

> 
> > +
> > +       *state = s;
> > +       return s;
> > +}
> > +EXPORT_SYMBOL(drm_lcg_random);
> > +
> > +int *drm_random_reorder(int *order, int count, u32 *state)
> > +{
> > +       int n;
> > +
> > +       for (n = count-1; n > 1; n--) {
> > +               int r = drm_lcg_random(state) % (n + 1);
> > +               if (r != n) {
> 
> Why not drop that condition? No harm in doing the self-swap, is there?
> Makes the code shorter.

It made more sense when it was calling a function to handle the swap.

> 
> > +                       int tmp = order[n];
> > +                       order[n] = order[r];
> > +                       order[r] = tmp;
> 
> swap() in <linux/kernel.h>
> 
> > +               }
> > +       }
> 
> Is there a specific reason to do it that way? How about:
> 
> for (i = 0; i < count; ++i)
>         swap(order[i], order[drm_lcg_random(state) % count]);
> 
> Much simpler and I cannot see why it wouldn't work.

Ta.
> 
> > +
> > +       return order;
> 
> You sure that return value serves any purpose? Is that convenience so
> you can use the function in a combined statement?
> 

Just convenience from splitting the function up.

> > +}
> > +EXPORT_SYMBOL(drm_random_reorder);
> > +
> > +int *drm_random_order(int count, u32 *state)
> > +{
> > +       int *order;
> > +       int n;
> > +
> > +       order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
> > +       if (!order)
> > +               return order;
> > +
> > +       for (n = 0; n < count; n++)
> > +               order[n] = n;
> > +
> > +       return drm_random_reorder(order, count, state);
> 
> Why "signed int"? We very often use "size_t" to count things. By
> making the order signed you just keep running into "comparing signed
> vs unsigned" warnings, don't you?

Because I only needed ints. :-p

> 
> > +}
> > +EXPORT_SYMBOL(drm_random_order);
> > diff --git a/drivers/gpu/drm/lib/drm_rand.h b/drivers/gpu/drm/lib/drm_rand.h
> > new file mode 100644
> > index 000000000000..a3f22d115aac
> > --- /dev/null
> > +++ b/drivers/gpu/drm/lib/drm_rand.h
> > @@ -0,0 +1,9 @@
> > +#ifndef __DRM_RAND_H__
> > +#define __DRM_RAND_H
> > +
> > +u32 drm_lcg_random(u32 *state);
> > +
> > +int *drm_random_reorder(int *order, int count, u32 *state);
> > +int *drm_random_order(int count, u32 *state);
> > +
> > +#endif /* __DRM_RAND_H__ */
> 
> "random.h". Why the abbreviation?

Before the drm_ prefix I had to avoid a clash, and I have a history of
calling this rand.h for no good reason.

> Looks good to me. Only nitpicks, so feel free to ignore.

(Apart from size_t because I still have the burn marks from DRM using
size_t where I need u64 on 32bit kernels. However, that looks correct
for this application. ;)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 27/34] drm: Unconditionally do the range check in drm_mm_scan_add_block()
  2016-12-12 11:53 ` [PATCH 27/34] drm: Unconditionally do the range check " Chris Wilson
@ 2016-12-13 15:28   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:28 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Doing the check is trivial (low cost in comparison to overall eviction)
> and helps simplify the code.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

I now see that the other function gets eliminated, so no biggie.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-13 15:18     ` David Herrmann
@ 2016-12-13 15:40       ` Chris Wilson
  2016-12-13 16:21         ` David Herrmann
  0 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-13 15:40 UTC (permalink / raw)
  To: David Herrmann; +Cc: Intel Graphics Development, joonas.lahtinen, dri-devel

On Tue, Dec 13, 2016 at 04:18:35PM +0100, David Herrmann wrote:
> On Tue, Dec 13, 2016 at 4:16 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
> > On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > for (i = 0; i < count; ++i)
> >         swap(order[i], order[drm_lcg_random(state) % count]);
> >
> > Much simpler and I cannot see why it wouldn't work.
> 
> Hint: swap() does multiple evaluations. So this needs to be:

Hmm, and on switching to size_t count may be larger than u32...
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 29/34] drm: Compute tight evictions for drm_mm_scan
  2016-12-12 11:53 ` [PATCH 29/34] drm: Compute tight evictions for drm_mm_scan Chris Wilson
@ 2016-12-13 15:48   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:48 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Compute the minimal required hole during scan and only evict those nodes
> that overlap. This enables us to reduce the number of nodes we need to
> evict to the bare minimum.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Definitely for the better.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 30/34] drm: Optimise power-of-two alignments in drm_mm_scan_add_block()
  2016-12-12 11:53 ` [PATCH 30/34] drm: Optimise power-of-two alignments in drm_mm_scan_add_block() Chris Wilson
@ 2016-12-13 15:55   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 15:55 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> For power-of-two alignments, we can avoid the 64bit divide and do a
> simple bitwise add instead.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/drm_mm.c | 9 ++++++++-
>  include/drm/drm_mm.h     | 1 +
>  2 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
> index eec0a46f5b38..7245483f1111 100644
> --- a/drivers/gpu/drm/drm_mm.c
> +++ b/drivers/gpu/drm/drm_mm.c
> @@ -741,8 +741,12 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
>  
>  	scan->mm = mm;
>  
> +	if (alignment <= 1)
> +		alignment = 0;
> +
>  	scan->color = color;
>  	scan->alignment = alignment;
> +	scan->alignment_mask = is_power_of_2(alignment) ? alignment - 1 : 0;

One would assume alignment_mask to be the bit-opposite, really.
remainder_mask should describe it better.

And sprinkle a likely() for the POT case, because I think that's
something to expect.
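
At the point of use that would then read something like (just a sketch
with the suggested name; adj_start stands for whatever start value is
being aligned, and div64_u64_rem() is the non-power-of-two fallback):

	u64 rem;

	if (likely(scan->remainder_mask))
		rem = adj_start & scan->remainder_mask;
	else
		div64_u64_rem(adj_start, scan->alignment, &rem);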

With above comments;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 32/34] drm: Apply tight eviction scanning to color_adjust
  2016-12-12 11:53 ` [PATCH 32/34] drm: Apply tight eviction scanning to color_adjust Chris Wilson
@ 2016-12-13 16:03   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-13 16:03 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Using mm->color_adjust makes the eviction scanner much trickier since we
> don't know the actual neighbours of the target hole until after it is
> created (after scanning is complete). To work out whether we need to
> evict the neighbours because they impact upon the hole, we have to then
> check the hole afterwards - requiring an extra step in the user of the
> eviction scanner when they apply color_adjust.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> @@ -887,6 +872,33 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
>  }
>  EXPORT_SYMBOL(drm_mm_scan_remove_block);
>  
> +struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan)
> +

Kerneldoc so I can verify this function makes sense.

> diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
> index 09ead31a094d..73353f87f46a 100644
> --- a/drivers/gpu/drm/selftests/test-drm_mm.c
> +++ b/drivers/gpu/drm/selftests/test-drm_mm.c

Use helper func in this file?

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-13 15:40       ` Chris Wilson
@ 2016-12-13 16:21         ` David Herrmann
  0 siblings, 0 replies; 81+ messages in thread
From: David Herrmann @ 2016-12-13 16:21 UTC (permalink / raw)
  To: Chris Wilson, David Herrmann, dri-devel,
	Intel Graphics Development, joonas.lahtinen

Hey Chris

On Tue, Dec 13, 2016 at 4:40 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Tue, Dec 13, 2016 at 04:18:35PM +0100, David Herrmann wrote:
>> On Tue, Dec 13, 2016 at 4:16 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
>> > On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>> > for (i = 0; i < count; ++i)
>> >         swap(order[i], order[drm_lcg_random(state) % count]);
>> >
>> > Much simpler and I cannot see why it wouldn't work.
>>
>> Hint: swap() does multiple evaluations. So this needs to be:
>
> Hmm, and on switching to size_t count may be larger than u32...

I didn't mean to suggest 'size_t'. All I cared about was 'unsigned'.
So feel free to use 'u32', 'unsigned int', 'size_t', etc.

Thanks
David
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-13 15:16   ` David Herrmann
  2016-12-13 15:18     ` David Herrmann
  2016-12-13 15:26     ` Chris Wilson
@ 2016-12-13 19:39     ` Laurent Pinchart
  2016-12-13 20:50       ` Chris Wilson
  2 siblings, 1 reply; 81+ messages in thread
From: Laurent Pinchart @ 2016-12-13 19:39 UTC (permalink / raw)
  To: dri-devel; +Cc: joonas.lahtinen, Intel Graphics Development

Hello,

On Tuesday 13 Dec 2016 16:16:41 David Herrmann wrote:
> On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson wrote:
> > For testing, we want a reproducible PRNG, a plain linear congruent
> > generator is suitable for our very limited selftests.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

This doesn't seem very DRM-specific; is there a reason why we can't use the
ansi_cprng already present in the kernel?

-- 
Regards,

Laurent Pinchart

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 06/34] drm: Add a simple linear congruent generator PRNG
  2016-12-13 19:39     ` Laurent Pinchart
@ 2016-12-13 20:50       ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-13 20:50 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Intel Graphics Development, dri-devel, David Herrmann

On Tue, Dec 13, 2016 at 09:39:06PM +0200, Laurent Pinchart wrote:
> Hello,
> 
> On Tuesday 13 Dec 2016 16:16:41 David Herrmann wrote:
> > On Mon, Dec 12, 2016 at 12:53 PM, Chris Wilson wrote:
> > > For testing, we want a reproducible PRNG, a plain linear congruent
> > > generator is suitable for our very limited selftests.
> > > 
> > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> This doesn't seem very DRM-specific, is there a reason why we can't use the 
> ansi_cprng already present in the kernel ?

That would be a nightmare - dragging in a crypto dependency for what
should be a 5 cycle function. However, I will replace the Hars-Petruska
LCG with prandom (from lib/random32.c). That meets the requirement of
being a seeded PRNG.
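
Something along the lines of (a sketch only, using the seeded-state API
from <linux/random.h>):

	struct rnd_state prng;
	u32 r;

	/* reproducible: the same seed always gives the same sequence */
	prandom_seed_state(&prng, random_seed);
	r = prandom_u32_state(&prng) % count;
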
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block()
  2016-12-13 15:23   ` Joonas Lahtinen
@ 2016-12-13 22:28     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-13 22:28 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, dri-devel

On Tue, Dec 13, 2016 at 05:23:54PM +0200, Joonas Lahtinen wrote:
> On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > Acknowledging that we were building up the hole was more useful to me
> > when reading the code, than knowing the relationship between this node
> > and the previous node.
> > 
> 
> I don't really agree. I see that we have nodes and there are holes that
> follow them, so prev_node makes more sense in that mindset.
> 
> node->scanned_preceeds_hole = hole->hole_follows; and
> hole->hole_follows = 1; look especially quirky to me when read aloud.

Remnants of dead code that is summarily executed later on. The scanner is
about tracking the hole generated by eviction; it is just that the
prev_node holds the hole - so prev_node is just the pointer, and the hole
is the important detail (and fits in with the other code better).
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 31/34] drm: Simplify drm_mm scan-list manipulation
  2016-12-12 11:53 ` [PATCH 31/34] drm: Simplify drm_mm scan-list manipulation Chris Wilson
@ 2016-12-14  8:27   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14  8:27 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Since we mandate a strict reverse-order of drm_mm_scan_remove_block()

kerneldoc speaks of forward-order, so better update that.

> after drm_mm_scan_add_block() we can further simplify the list
> manipulations when generating the temporary scan-hole.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> @@ -787,13 +783,8 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
>  	mm->scan_active++;
>  
>  	hole = list_prev_entry(node, node_list);
> -
> -	node->scanned_preceeds_hole = hole->hole_follows;
> -	hole->hole_follows = 1;
> -	list_del(&node->node_list);
> -	node->node_list.prev = &hole->node_list;
> -	node->node_list.next = &scan->prev_scanned_node->node_list;
> -	scan->prev_scanned_node = node;
> +	DRM_MM_BUG_ON(list_next_entry(hole, node_list) != node);
> +	__list_del_entry(&node->node_list);

At least be explicit by adding a comment that we avoid poisoning the
pointers to be able to unwind.
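
Something like this perhaps (wording only a suggestion):

	/* Remove this block from the node_list with __list_del_entry() so
	 * that its list pointers are not poisoned; the node is put back
	 * verbatim by drm_mm_scan_remove_block() when the scan is unwound.
	 */
	__list_del_entry(&node->node_list);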

> @@ -887,8 +878,8 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
>  	node->mm->scan_active--;
>  
>  	prev_node = list_prev_entry(node, node_list);
> -
> -	prev_node->hole_follows = node->scanned_preceeds_hole;

Comment might be in place here too; "node->node_list has been removed
from the list but by carefully avoiding..."

> +	DRM_MM_BUG_ON(list_next_entry(prev_node, node_list) !=
> +		      list_next_entry(node, node_list));
>  	list_add(&node->node_list, &prev_node->node_list);
>  
>  	return (node->start + node->size > scan->hit_start &&

I'm feeling a bit uneasy about avoiding the poisoning.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 34/34] drm: kselftest for drm_mm and bottom-up allocation
  2016-12-12 11:53 ` [PATCH 34/34] drm: kselftest for drm_mm and bottom-up allocation Chris Wilson
@ 2016-12-14  9:10   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14  9:10 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that if we request bottom-up allocation from drm_mm_insert_node()
> we receive the next available hole from the bottom.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> @@ -1304,6 +1304,95 @@ static int igt_topdown(void *ignored)
>  	return ret;
>  }
>  
> +static int igt_bottomup(void *ignored)
> +{

<SNIP>

> +	drm_mm_init(&mm, 0, size);
> +	for (n = 0; n < size; n++) {
> +		int err;
> +
> +		err = drm_mm_insert_node_generic(&mm, &nodes[n], 1, 0, 0,
> +						 DRM_MM_INSERT_LOW);
> +		if (err) {
> +			pr_err("bottomup insert failed, step %d\n", n);
> +			ret = err;
> +			goto out;
> +		}
> +
> +		if (nodes[n].hole_size) {

Should really be if (nodes[n].hole_size != size - n) and
pr_err("incorrect hole...");

With that fixed;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 08/34] drm: kselftest for drm_mm_reserve_node()
  2016-12-12 11:53 ` [PATCH 08/34] drm: kselftest for drm_mm_reserve_node() Chris Wilson
@ 2016-12-14  9:55   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14  9:55 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Exercise drm_mm_reserve_node(), check that we can't reserve an already
> occupied range and that the lists are correct after reserving/removing.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>


> @@ -105,6 +108,135 @@ static int igt_debug(void *ignored)
>  	return 0;
>  }
>  
> +static int __igt_reserve(int count, u64 size)
> +{

<SNIP>

> +	if (!drm_mm_clean(&mm)) {
> +		pr_err("mm not empty on creation\n");
> +		goto out;
> +	}

Usual nag about repeating the drm_mm_clean test.

<SNIP>

> +	/* Repeated use should then fail */
> +	drm_random_reorder(order, count, &lcg_state);
> +	for (n = 0; n < count; n++) {
> +		struct drm_mm_node tmp = {
> +			.start = order[n] * size,
> +			.size = 1
> +		};
> +
> +		if (!drm_mm_reserve_node(&mm, &tmp)) {

Good chance to check that the returned error code is -ENOSPC.
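
e.g. (a sketch reusing the quoted locals):

	err = drm_mm_reserve_node(&mm, &tmp);
	if (err != -ENOSPC) {
		if (!err)
			drm_mm_remove_node(&tmp);
		pr_err("expected -ENOSPC on repeated reserve, got %d, step %d, start %llu\n",
		       err, n, tmp.start);
		goto out;
	}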

And could make max_count/max_iterations a variable (ditto for
max_prime) to be shared among tests.

Other than that,

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 15/34] drm: kselftest for drm_mm and top-down allocation
  2016-12-12 11:53 ` [PATCH 15/34] drm: kselftest for drm_mm and top-down allocation Chris Wilson
@ 2016-12-14 11:33   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14 11:33 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that if we request top-down allocation from drm_mm_insert_node()
> we receive the next available hole from the top.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 03/34] drm: Add drm_mm_for_each_node_safe()
  2016-12-13  9:35   ` Joonas Lahtinen
@ 2016-12-14 11:39     ` Chris Wilson
  2016-12-14 12:03       ` Joonas Lahtinen
  0 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2016-12-14 11:39 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, dri-devel

On Tue, Dec 13, 2016 at 11:35:42AM +0200, Joonas Lahtinen wrote:
> On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > A complement to drm_mm_for_each_node(), wraps list_for_each_entry_safe()
> > for walking the list of nodes safe against removal.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> <SNIP>
> 
> > +#define __drm_mm_nodes(mm) (&(mm)->head_node.node_list)
> 
> Not static inline for type checking?

I was going to make it static inline, then realised the #define makes it
work cleanly with a const drm_mm.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 16/34] drm: kselftest for drm_mm and top-down alignment
  2016-12-12 11:53 ` [PATCH 16/34] drm: kselftest for drm_mm and top-down alignment Chris Wilson
@ 2016-12-14 11:51   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14 11:51 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that if we request top-down allocation with a particular alignment
> from drm_mm_insert_node() that the start of the node matches our
> request.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Assuming the magic numbers will be gone in the next iteration.

> +static int igt_topdown_align(void *ignored)
> +{
> +	struct drm_mm mm;
> +	struct drm_mm_node tmp, resv;
> +	int ret = -EINVAL;
> +	int n, m, err;
> +
> +	drm_mm_init(&mm, 0, ~0ull);
> +	memset(&tmp, 0, sizeof(tmp));
> +	memset(&resv, 0, sizeof(resv));
> +
> +	for (m = 0; m < 32; m++) {
> +		u64 end = ~0ull;
> +
> +		if (m) {
> +			resv.size = BIT_ULL(m);
> +			end -= resv.size;
> +			resv.start = end;
> +
> +			err = drm_mm_reserve_node(&mm, &resv);
> +			if (err) {
> +				pr_err("reservation of sentinel node failed\n");
> +				ret = err;
> +				goto out;
> +			}
> +		}
> +
> +		for (n = 0; n < 63 - m; n++) {
> +			u64 align = BIT_ULL(n);

DRM_MM_BUG_ON(end % align); to protect somebody from modifying? (or end
& (align - 1))

> +
> +			err = drm_mm_insert_node_generic(&mm, &tmp, 1, align, 0,
> +							 DRM_MM_SEARCH_BELOW,
> +							 DRM_MM_CREATE_TOP);
> +			drm_mm_remove_node(&tmp);

Could move this after if (err) again and in the rest of the code, too.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 10/34] drm: kselftest for drm_mm_replace_node()
  2016-12-12 11:53 ` [PATCH 10/34] drm: kselftest for drm_mm_replace_node() Chris Wilson
@ 2016-12-14 12:01   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14 12:01 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Reuse drm_mm_insert_node() with a temporary node to exercise
> drm_mm_replace_node(). We use the previous test in order to exercise the
> various lists following replacement.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int __igt_insert(int count, u64 size, bool replace)
>  {
>  	u32 lcg_state = random_seed;
>  	struct drm_mm mm;
> @@ -264,9 +264,10 @@ static int __igt_insert(int count, u64 size)
>  	}
>  
>  	for (n = 0; n < count; n++) {
> +		struct drm_mm_node tmp, *node;
>  		int err;
>  
> -		node = &nodes[n];
> +		node = memset(replace ? &tmp : &nodes[n], 0, sizeof(*node));

Just memset in a separate line for readability.

> @@ -281,6 +282,20 @@ static int __igt_insert(int count, u64 size)
>                                n, node->start);
>                         goto out;
>                 }
> +
> +               if (replace) {
> +                       drm_mm_replace_node(&tmp, &nodes[n]);
> +                       if (!drm_mm_node_allocated(&nodes[n])) {
> +                               pr_err("replaced new-node not allocated! step %d\n",
> +                                      n);
> +                               goto out;
> +                       }

Maybe go the extra mile and make sure the start and size still match
after replacement.
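
e.g. (sketch):

	if (nodes[n].start != tmp.start || nodes[n].size != tmp.size) {
		pr_err("replaced node did not inherit parameters, found (%llx + %llx), expected (%llx + %llx)\n",
		       nodes[n].start, nodes[n].size, tmp.start, tmp.size);
		goto out;
	}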

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 03/34] drm: Add drm_mm_for_each_node_safe()
  2016-12-14 11:39     ` Chris Wilson
@ 2016-12-14 12:03       ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14 12:03 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, dri-devel

On ke, 2016-12-14 at 11:39 +0000, Chris Wilson wrote:
> On Tue, Dec 13, 2016 at 11:35:42AM +0200, Joonas Lahtinen wrote:
> > 
> > On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > > 
> > > +#define __drm_mm_nodes(mm) (&(mm)->head_node.node_list)
> > 
> > Not static inline for type checking?
> 
> I was going to make it static inline, then realised the #define makes it
> work cleanly with a const drm_mm.

Fair enough.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 09/34] drm: kselftest for drm_mm_insert_node()
  2016-12-12 11:53 ` [PATCH 09/34] drm: kselftest for drm_mm_insert_node() Chris Wilson
@ 2016-12-14 12:26   ` Joonas Lahtinen
  2016-12-14 12:51     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-14 12:26 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Exercise drm_mm_insert_node(), check that we can't overfill a range and
> that the lists are correct after reserving/removing.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int __igt_insert(int count, u64 size)
> +{

<SNIP>

> +	for (n = 0; n < count; n++) {
> +		int err;
> +
> +		node = &nodes[n];
> +		err = drm_mm_insert_node(&mm, node, size, 0,
> +					 DRM_MM_SEARCH_DEFAULT);
> +		if (err) {
> +			pr_err("insert failed, step %d, start %llu\n",
> +			       n, nodes[n].start);
> +			ret = err;
> +			goto out;
> +		}
> +
> +		if (!drm_mm_node_allocated(node)) {
> +			pr_err("inserted node not allocated! step %d, start %llu\n",
> +			       n, node->start);
> +			goto out;
> +		}
> +	}
> +
> +	/* Repeated use should then fail */
> +	if (1) {

Why if (1)? What are you not telling me?

<SNIP>

> +		if (1) {

Ditto.

> +			struct drm_mm_node tmp;
> +
> +			memset(&tmp, 0, sizeof(tmp));
> +			if (!drm_mm_insert_node(&mm, &tmp, size, 0,
> +						DRM_MM_SEARCH_DEFAULT)) {
> +				drm_mm_remove_node(&tmp);
> +				pr_err("impossible insert succeeded, start %llu\n",
> +				       tmp.start);
> +				goto out;
> +			}
> +		}
> +

Second instance of the code below; could it be a helper?

> +		m = 0;
> +		drm_mm_for_each_node(node, &mm) {
> +			if (node->start != m * size) {
> +				pr_err("node %d out of order, expected start %llx, found %llx\n",
> +				       m, m * size, node->start);
> +				goto out;
> +			}
> +
> +			if (node->size != size) {
> +				pr_err("node %d has wrong size, expected size %llx, found %llx\n",
> +				       m, size, node->size);
> +				goto out;
> +			}
> +
> +			if (node->hole_follows) {
> +				pr_err("node %d is followed by a hole!\n", m);
> +				goto out;
> +			}
> +
> +			m++;
> +		}
> +	}
> +

I still do not have a solid opinion on what a decent number of
iterations at each place would be.

With the helper isolated;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 09/34] drm: kselftest for drm_mm_insert_node()
  2016-12-14 12:26   ` Joonas Lahtinen
@ 2016-12-14 12:51     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-14 12:51 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, dri-devel

On Wed, Dec 14, 2016 at 02:26:36PM +0200, Joonas Lahtinen wrote:
> On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > Exercise drm_mm_insert_node(), check that we can't overfill a range and
> > that the lists are correct after reserving/removing.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> <SNIP>
> 
> > +static int __igt_insert(int count, u64 size)
> > +{
> 
> <SNIP>
> 
> > +	for (n = 0; n < count; n++) {
> > +		int err;
> > +
> > +		node = &nodes[n];
> > +		err = drm_mm_insert_node(&mm, node, size, 0,
> > +					 DRM_MM_SEARCH_DEFAULT);
> > +		if (err) {
> > +			pr_err("insert failed, step %d, start %llu\n",
> > +			       n, nodes[n].start);
> > +			ret = err;
> > +			goto out;
> > +		}
> > +
> > +		if (!drm_mm_node_allocated(node)) {
> > +			pr_err("inserted node not allocated! step %d, start %llu\n",
> > +			       n, node->start);
> > +			goto out;
> > +		}
> > +	}
> > +
> > +	/* Repeated use should then fail */
> > +	if (1) {
> 
> Why if (1)? What are you not telling me.

That I was too lazy to write a function and dislike a bare '{' just to
instantiate a new local, so I use if (1) as an excuse to start a new block.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 11/34] drm: kselftest for drm_mm_insert_node_in_range()
  2016-12-12 11:53 ` [PATCH 11/34] drm: kselftest for drm_mm_insert_node_in_range() Chris Wilson
@ 2016-12-15  8:43   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15  8:43 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Exercise drm_mm_insert_node_in_range(), check that we only allocate from
> the specified range.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

With the stylistic changes I proposed to whole series;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 12/34] drm: kselftest for drm_mm and alignment
  2016-12-12 11:53 ` [PATCH 12/34] drm: kselftest for drm_mm and alignment Chris Wilson
@ 2016-12-15  8:59   ` Joonas Lahtinen
  2016-12-15  9:21     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15  8:59 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that we can request alignment to any power-of-two or prime using a
> plain drm_mm_node_insert(), and also handle a reasonable selection of
> primes.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int igt_align(void *ignored)
> +{
> +	struct drm_mm mm;
> +	struct drm_mm_node *node, *next;
> +	int ret = -EINVAL;
> +	int prime;
> +
> +	drm_mm_init(&mm, 1, U64_MAX - 1);
> +
> +	drm_for_each_prime(prime, 8192) {
> +		u64 size;
> +		int err;
> +
> +		node = kzalloc(sizeof(*node), GFP_KERNEL);
> +		if (!node) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}

If the number of primes were predictable (pun intended), we could
malloc the nodes in one go.

Other than that, looks like I reviewed this already;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 12/34] drm: kselftest for drm_mm and alignment
  2016-12-15  8:59   ` Joonas Lahtinen
@ 2016-12-15  9:21     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-15  9:21 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, dri-devel

On Thu, Dec 15, 2016 at 10:59:10AM +0200, Joonas Lahtinen wrote:
> On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > Check that we can request alignment to any power-of-two or prime using a
> > plain drm_mm_node_insert(), and also handle a reasonable selection of
> > primes.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> <SNIP>
> 
> > +static int igt_align(void *ignored)
> > +{
> > +	struct drm_mm mm;
> > +	struct drm_mm_node *node, *next;
> > +	int ret = -EINVAL;
> > +	int prime;
> > +
> > +	drm_mm_init(&mm, 1, U64_MAX - 1);
> > +
> > +	drm_for_each_prime(prime, 8192) {
> > +		u64 size;
> > +		int err;
> > +
> > +		node = kzalloc(sizeof(*node), GFP_KERNEL);
> > +		if (!node) {
> > +			ret = -ENOMEM;
> > +			goto out;
> > +		}
> 
> If the amount of primes would be predictable (pun intended), we could
> malloc the nodes at one go.

I think I'll keep a couple of these smaller kzalloc() sites, just to
have some variety in testing. In particular, this coupled with kmemleak
(or slab debug) could catch a leak other tests might miss.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 13/34] drm: kselftest for drm_mm and eviction
  2016-12-12 11:53 ` [PATCH 13/34] drm: kselftest for drm_mm and eviction Chris Wilson
@ 2016-12-15  9:29   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15  9:29 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that we add arbitrary blocks to the eviction scanner in order to
> find the first minimal hole that matches our request.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +		if ((int)tmp.start % n || tmp.size != nsize || tmp.hole_follows) {
> +			pr_err("Inserted did not fill the eviction hole: size=%lld [%d], align=%d [rem=%d] (prime), start=%llx, hole-follows?=%d\n",
> +			       tmp.size, nsize, n, (int)tmp.start % n, tmp.start, tmp.hole_follows);
> +
> +			drm_mm_remove_node(&tmp);
> +			goto out;
> +		}
> +
> +		drm_mm_remove_node(&tmp);
> +		list_for_each_entry(e, &evict_list, link) {
> +			err = drm_mm_reserve_node(&mm, &e->node);

Using helpers, could repeat the ordering tests for reserve vs insert.
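
E.g. a hypothetical helper (the names here are made up, purely to
illustrate), so the same checks can be run once when putting the
evicted nodes back with drm_mm_reserve_node() and once via the insert
path:

	typedef int (*fill_fn)(struct drm_mm *mm, struct drm_mm_node *node);

	static int evict_fill(struct drm_mm *mm,
			      struct list_head *evict_list,
			      fill_fn fn)
	{
		struct evict_node *e;
		int err;

		list_for_each_entry(e, evict_list, link) {
			err = fn(mm, &e->node);
			if (err)
				return err;
		}

		return 0;
	}

drm_mm_reserve_node() already has that signature; the insert variant
would only need a trivial wrapper.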

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 14/34] drm: kselftest for drm_mm and range restricted eviction
  2016-12-12 11:53 ` [PATCH 14/34] drm: kselftest for drm_mm and range restricted eviction Chris Wilson
@ 2016-12-15  9:50   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15  9:50 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that we add arbitrary blocks to a restricted eviction scanner in
> order to find the first minimal hole that matches our request.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int igt_evict_range(void *ignored)
> +{

<SNIP>

> +	drm_for_each_prime(n, range_size) {
> +		LIST_HEAD(evict_list);
> +		struct evict_node *e, *en;
> +		struct drm_mm_node tmp;
> +		int nsize = (range_size - n + 1) / 2;
> +		int err;

DRM_MM_BUG_ON(!nsize);

The same comment about reserve vs insert applies (the repeated checks
after each step can be factored into helpers).

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 17/34] drm: kselftest for drm_mm and color adjustment
  2016-12-12 11:53 ` [PATCH 17/34] drm: kselftest for drm_mm and color adjustment Chris Wilson
@ 2016-12-15 11:04   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15 11:04 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Check that after applying the driver's color adjustment, fitting of the
> node and its alignment are still correct.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static void no_color_touching(struct drm_mm_node *node,
> +			      unsigned long color,
> +			      u64 *start,
> +			      u64 *end)

The function name made me expect one that returns a bool.

Rather call it "{separate,space}_adjacent_colors" or so.

> +{
> +	if (node->allocated && node->color != color)
> +		++*start;
> +
> +	node = list_next_entry(node, node_list);
> +	if (node->allocated && node->color != color)
> +		--*end;
> +}
> +
> +static int igt_color(void *ignored)
> +{

<SNIP>

> +				node->start += n + 1;
> +			rem = node->start;
> +			rem %= n + count;

rem = div64...?
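
I.e. something like (same variables as in the hunk above;
div64_u64_rem() from <linux/math64.h> avoids the open-coded 64-bit
modulo):

	u64 rem;

	div64_u64_rem(node->start, n + count, &rem);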

If we can keep to the loop variables, this should be good;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 33/34] drm: Fix drm_mm search and insertion
  2016-12-12 11:53 ` [PATCH 33/34] drm: Fix drm_mm search and insertion Chris Wilson
@ 2016-12-15 12:28   ` Joonas Lahtinen
  2016-12-15 12:57     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15 12:28 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> The drm_mm range manager claimed to support top-down insertion, but it
> was neither searching for the top-most hole that could fit the
> allocation request nor fitting the request to the hole correctly.
> 
> In order to search the range efficiently, we create a secondary index
> for the holes using either their size or their address. This index
> allows us to find the smallest hole or the hole at the bottom or top of
> the range efficiently, whilst keeping the hole stack to rapidly service
> evictions.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static void rm_hole(struct drm_mm_node *node)
> +{
> +	if (!node->hole_size)
> +		return;

I've actively tried to remove conditions that cause asymmetry between
add_/rm_, create_/destroy_ etc. So I think this should be
DRM_MM_BUG_ON() too.
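
I.e. (sketch only):

	static void rm_hole(struct drm_mm_node *node)
	{
		DRM_MM_BUG_ON(!node->hole_size);

		/* ... unlink from the hole stack and indices as before ... */
	}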

> +static struct drm_mm_node *best_hole(struct drm_mm *mm, u64 size)
>  {
> -	struct drm_mm *mm = hole_node->mm;
> -	u64 hole_start = drm_mm_hole_node_start(hole_node);
> -	u64 hole_end = drm_mm_hole_node_end(hole_node);
> -	u64 adj_start = hole_start;
> -	u64 adj_end = hole_end;
> +	struct rb_node *best = NULL;
> +	struct rb_node **link = &mm->holes_size.rb_node;
> +	while (*link) {
> +		struct rb_node *rb = *link;
> +		if (size <= rb_hole_size(rb))
> +			link = &rb->rb_left, best = rb;

Single assignment per line, by coding style. And
link = &(best = rb)->rb_left is not better :P
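
For illustration, the same descent with one statement per line (the
rb_right branch is assumed here, as it is snipped from the hunk above):

	while (*link) {
		struct rb_node *rb = *link;

		if (size <= rb_hole_size(rb)) {
			best = rb;
			link = &rb->rb_left;
		} else {
			link = &rb->rb_right;
		}
	}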

> -int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
> +int drm_mm_insert_node_in_range_generic(struct drm_mm * const mm,
> +					struct drm_mm_node * const node,

I really have no stance on the consts; I'll defer to higher powers on
this.

> +void drm_mm_remove_node(struct drm_mm_node *node)
>  {

<SNIP>

> -	return best;
> +	rm_hole(prev_node);
> +	add_hole(prev_node);

update_hole?
 
> @@ -799,7 +706,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
>  	if (adj_end <= adj_start || adj_end - adj_start < scan->size)
>  		return false;
>  
> -	if (scan->flags == DRM_MM_CREATE_TOP)
> +	if (scan->flags == DRM_MM_INSERT_HIGH)

Flags are usually checked with &, in case somebody wants to add more
flags later. Otherwise you could call it "mode".

Somebody else could give this a glance too.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 20/34] drm/i915: Build DRM range manager selftests for CI
  2016-12-12 11:53 ` [PATCH 20/34] drm/i915: Build DRM range manager selftests for CI Chris Wilson
@ 2016-12-15 12:31   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2016-12-15 12:31 UTC (permalink / raw)
  To: Chris Wilson, dri-devel; +Cc: intel-gfx

On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> Build the struct drm_mm selftests so that we can trivially run them
> within our CI.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

"Enable debug, become developer."

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 33/34] drm: Fix drm_mm search and insertion
  2016-12-15 12:28   ` Joonas Lahtinen
@ 2016-12-15 12:57     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2016-12-15 12:57 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, dri-devel

On Thu, Dec 15, 2016 at 02:28:32PM +0200, Joonas Lahtinen wrote:
> On ma, 2016-12-12 at 11:53 +0000, Chris Wilson wrote:
> > @@ -799,7 +706,7 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
> >  	if (adj_end <= adj_start || adj_end - adj_start < scan->size)
> >  		return false;
> >  
> > -	if (scan->flags == DRM_MM_CREATE_TOP)
> > +	if (scan->flags == DRM_MM_INSERT_HIGH)
> 
> Flags are usually checked with & if somebody wants to add them later.
> Otherwise you could call it "mode".

Once upon a time, they were intended to be flags. They have since
devolved back into a mode. The only suitable argument for my laziness
was: what if I wanted to add a flag later!
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

end of thread, other threads:[~2016-12-15 12:57 UTC | newest]

Thread overview: 81+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-12 11:53 struct drm_mm fixes Chris Wilson
2016-12-12 11:53 ` [PATCH 01/34] drm/i915: Use the MRU stack search after evicting Chris Wilson
2016-12-13  9:29   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 02/34] drm/i915: Simplify i915_gtt_color_adjust() Chris Wilson
2016-12-13  9:32   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 03/34] drm: Add drm_mm_for_each_node_safe() Chris Wilson
2016-12-13  9:35   ` Joonas Lahtinen
2016-12-14 11:39     ` Chris Wilson
2016-12-14 12:03       ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 04/34] drm: Add some kselftests for the DRM range manager (struct drm_mm) Chris Wilson
2016-12-13  9:58   ` Joonas Lahtinen
2016-12-13 10:21     ` Chris Wilson
2016-12-13 15:03       ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 05/34] drm: kselftest for drm_mm_init() Chris Wilson
2016-12-13 10:16   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 06/34] drm: Add a simple linear congruent generator PRNG Chris Wilson
2016-12-13 10:44   ` Joonas Lahtinen
2016-12-13 15:16   ` David Herrmann
2016-12-13 15:18     ` David Herrmann
2016-12-13 15:40       ` Chris Wilson
2016-12-13 16:21         ` David Herrmann
2016-12-13 15:26     ` Chris Wilson
2016-12-13 19:39     ` Laurent Pinchart
2016-12-13 20:50       ` Chris Wilson
2016-12-12 11:53 ` [PATCH 07/34] drm: Add a simple prime number generator Chris Wilson
2016-12-12 11:53 ` [PATCH 08/34] drm: kselftest for drm_mm_reserve_node() Chris Wilson
2016-12-14  9:55   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 09/34] drm: kselftest for drm_mm_insert_node() Chris Wilson
2016-12-14 12:26   ` Joonas Lahtinen
2016-12-14 12:51     ` Chris Wilson
2016-12-12 11:53 ` [PATCH 10/34] drm: kselftest for drm_mm_replace_node() Chris Wilson
2016-12-14 12:01   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 11/34] drm: kselftest for drm_mm_insert_node_in_range() Chris Wilson
2016-12-15  8:43   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 12/34] drm: kselftest for drm_mm and alignment Chris Wilson
2016-12-15  8:59   ` Joonas Lahtinen
2016-12-15  9:21     ` Chris Wilson
2016-12-12 11:53 ` [PATCH 13/34] drm: kselftest for drm_mm and eviction Chris Wilson
2016-12-15  9:29   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 14/34] drm: kselftest for drm_mm and range restricted eviction Chris Wilson
2016-12-15  9:50   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 15/34] drm: kselftest for drm_mm and top-down allocation Chris Wilson
2016-12-14 11:33   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 16/34] drm: kselftest for drm_mm and top-down alignment Chris Wilson
2016-12-14 11:51   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 17/34] drm: kselftest for drm_mm and color adjustment Chris Wilson
2016-12-15 11:04   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 18/34] drm: kselftest for drm_mm and color eviction Chris Wilson
2016-12-12 11:53 ` [PATCH 19/34] drm: kselftest for drm_mm and restricted " Chris Wilson
2016-12-12 11:53 ` [PATCH 20/34] drm/i915: Build DRM range manager selftests for CI Chris Wilson
2016-12-15 12:31   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 21/34] drm: Promote drm_mm alignment to u64 Chris Wilson
2016-12-12 12:39   ` Christian König
2016-12-12 11:53 ` [PATCH 22/34] drm: Constify the drm_mm API Chris Wilson
2016-12-12 11:53 ` [PATCH 23/34] drm: Simplify drm_mm_clean() Chris Wilson
2016-12-13 12:30   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 24/34] drm: Compile time enabling for asserts in drm_mm Chris Wilson
2016-12-13 15:04   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 25/34] drm: Extract struct drm_mm_scan from struct drm_mm Chris Wilson
2016-12-13 15:17   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 26/34] drm: Rename prev_node to hole in drm_mm_scan_add_block() Chris Wilson
2016-12-13 15:23   ` Joonas Lahtinen
2016-12-13 22:28     ` Chris Wilson
2016-12-12 11:53 ` [PATCH 27/34] drm: Unconditionally do the range check " Chris Wilson
2016-12-13 15:28   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 28/34] drm: Fix application of color vs range restriction when scanning drm_mm Chris Wilson
2016-12-12 14:57   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 29/34] drm: Compute tight evictions for drm_mm_scan Chris Wilson
2016-12-13 15:48   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 30/34] drm: Optimise power-of-two alignments in drm_mm_scan_add_block() Chris Wilson
2016-12-13 15:55   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 31/34] drm: Simplify drm_mm scan-list manipulation Chris Wilson
2016-12-14  8:27   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 32/34] drm: Apply tight eviction scanning to color_adjust Chris Wilson
2016-12-13 16:03   ` Joonas Lahtinen
2016-12-12 11:53 ` [PATCH 33/34] drm: Fix drm_mm search and insertion Chris Wilson
2016-12-15 12:28   ` Joonas Lahtinen
2016-12-15 12:57     ` Chris Wilson
2016-12-12 11:53 ` [PATCH 34/34] drm: kselftest for drm_mm and bottom-up allocation Chris Wilson
2016-12-14  9:10   ` Joonas Lahtinen
2016-12-12 12:15 ` ✓ Fi.CI.BAT: success for series starting with [01/34] drm/i915: Use the MRU stack search after evicting Patchwork
