All of lore.kernel.org
 help / color / mirror / Atom feed
* More selftests
@ 2017-01-19 11:41 Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release() Chris Wilson
                   ` (38 more replies)
  0 siblings, 39 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

A lot of little tweaks, and a couple of new tests.
-Chris

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release()
  2017-01-19 11:41 More selftests Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-25 11:12   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 02/38] drm/i915: Provide a hook for selftests Chris Wilson
                   ` (37 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Some state is coupled into the device lifetime outside of the
load/unload timeframe and requires teardown during final unreference
from drm_dev_release(). For example, dmabufs hold both a device and
module reference and may live longer than expected (i.e. the current
pattern of the driver tearing down its state and then releasing a
reference to the drm device) and yet touch driver private state when
destroyed.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_drv.c | 3 +++
 include/drm/drm_drv.h     | 9 +++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index 1b11ab628da7..a150f0c6a299 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -598,6 +598,9 @@ static void drm_dev_release(struct kref *ref)
 {
 	struct drm_device *dev = container_of(ref, struct drm_device, ref);
 
+	if (dev->driver->release)
+		dev->driver->release(dev);
+
 	if (drm_core_check_feature(dev, DRIVER_GEM))
 		drm_gem_destroy(dev);
 
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 34ece393c639..dfddd8c15b62 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -103,6 +103,15 @@ struct drm_driver {
 	 *
 	 */
 	void (*unload) (struct drm_device *);
+
+	/**
+	 * @release:
+	 *
+	 * Optional callback for destroying device state after the final
+	 * reference is released, i.e. the device is being destroyed.
+	 */
+	void (*release) (struct drm_device *);
+
 	int (*dma_ioctl) (struct drm_device *dev, void *data, struct drm_file *file_priv);
 	int (*dma_quiescent) (struct drm_device *);
 	int (*context_dtor) (struct drm_device *dev, int context);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 02/38] drm/i915: Provide a hook for selftests
  2017-01-19 11:41 More selftests Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release() Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-25 11:50   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
                   ` (36 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Some pieces of code are independent of hardware but are very tricky to
exercise through the normal userspace ABI or via debugfs hooks. Being
able to create mock unit tests and execute them through CI is vital.
Start by adding a central point where we can execute unit tests and
a parameter to enable them. This is disabled by default as the
expectation is that these tests will occasionally explode.

To facilitate integration with igt, any parameter beginning with
i915.igt__ is interpreted as a subtest executable independently via
igt/drv_selftest.

Two classes of selftests are recognised: mock unit tests and integration
tests. Mock unit tests are run as soon as the module is loaded, before
the device is probed. At that point there is no driver instantiated and
all hw interactions must be "mocked". This is very useful for writing
universal tests to exercise code not typically run on a broad range of
architectures. Alternatively, you can hook into the live selftests and
run when the device has been instantiated - hw interactions are real.

v2: Add a macro for compiling conditional code for mock objects inside
real objects.
v3: Differentiate between mock unit tests and late integration test.
v4: List the tests in natural order, use igt to sort after modparam.
v5: s/late/live/
v6: s/unsigned long/unsigned int/
v7: Use igt_ prefixes for long helpers.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1
---
 drivers/gpu/drm/i915/Kconfig.debug                 |  16 ++
 drivers/gpu/drm/i915/Makefile                      |   4 +
 drivers/gpu/drm/i915/i915_pci.c                    |  19 +-
 drivers/gpu/drm/i915/i915_selftest.h               | 102 +++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  11 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  11 +
 drivers/gpu/drm/i915/selftests/i915_random.c       |  63 ++++++
 drivers/gpu/drm/i915/selftests/i915_random.h       |  50 +++++
 drivers/gpu/drm/i915/selftests/i915_selftest.c     | 247 +++++++++++++++++++++
 tools/testing/selftests/drivers/gpu/i915.sh        |   1 +
 10 files changed, 523 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/i915_selftest.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_live_selftests.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_random.c
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_random.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_selftest.c

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 598551dbf62c..a4d8cfd77c3c 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -26,6 +26,7 @@ config DRM_I915_DEBUG
         select DRM_DEBUG_MM if DRM=y
 	select DRM_DEBUG_MM_SELFTEST
 	select DRM_I915_SW_FENCE_DEBUG_OBJECTS
+	select DRM_I915_SELFTEST
         default n
         help
           Choose this option to turn on extra driver debugging that may affect
@@ -59,3 +60,18 @@ config DRM_I915_SW_FENCE_DEBUG_OBJECTS
           Recommended for driver developers only.
 
           If in doubt, say "N".
+
+config DRM_I915_SELFTEST
+	bool "Enable selftests upon driver load"
+	depends on DRM_I915
+	default n
+	select PRIME_NUMBERS
+	help
+	  Choose this option to allow the driver to perform selftests upon
+	  loading; also requires the i915.selftest=1 module parameter. To
+	  exit the module after running the selftests (i.e. to prevent normal
+	  module initialisation afterwards) use i915.selftest=-1.
+
+	  Recommended for driver developers only.
+
+	  If in doubt, say "N".
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 74ca2e8b2494..f50cea336bd2 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -3,6 +3,7 @@
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
 
 subdir-ccflags-$(CONFIG_DRM_I915_WERROR) := -Werror
+subdir-ccflags-$(CONFIG_DRM_I915_SELFTEST) += -I$(src) -I$(src)/selftests
 subdir-ccflags-y += \
 	$(call as-instr,movntdqa (%eax)$(comma)%xmm0,-DCONFIG_AS_MOVNTDQA)
 
@@ -116,6 +117,9 @@ i915-y += dvo_ch7017.o \
 
 # Post-mortem debug and GPU hang state capture
 i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
+i915-$(CONFIG_DRM_I915_SELFTEST) += \
+	selftests/i915_random.o \
+	selftests/i915_selftest.o
 
 # virtual gpu code
 i915-y += i915_vgpu.o
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index ecb487b5356f..b3bf9474f081 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -27,6 +27,7 @@
 #include <linux/vga_switcheroo.h>
 
 #include "i915_drv.h"
+#include "i915_selftest.h"
 
 #define GEN_DEFAULT_PIPEOFFSETS \
 	.pipe_offsets = { PIPE_A_OFFSET, PIPE_B_OFFSET, \
@@ -476,6 +477,7 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct intel_device_info *intel_info =
 		(struct intel_device_info *) ent->driver_data;
+	int err;
 
 	if (IS_ALPHA_SUPPORT(intel_info) && !i915.alpha_support) {
 		DRM_INFO("The driver support for your hardware in this kernel version is alpha quality\n"
@@ -499,7 +501,17 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (vga_switcheroo_client_probe_defer(pdev))
 		return -EPROBE_DEFER;
 
-	return i915_driver_load(pdev, ent);
+	err = i915_driver_load(pdev, ent);
+	if (err)
+		return err;
+
+	err = i915_live_selftests(pdev);
+	if (err) {
+		i915_driver_unload(pci_get_drvdata(pdev));
+		return err > 0 ? -ENOTTY : err;
+	}
+
+	return 0;
 }
 
 static void i915_pci_remove(struct pci_dev *pdev)
@@ -521,6 +533,11 @@ static struct pci_driver i915_pci_driver = {
 static int __init i915_init(void)
 {
 	bool use_kms = true;
+	int err;
+
+	err = i915_mock_selftests();
+	if (err)
+		return err > 0 ? 0 : err;
 
 	/*
 	 * Enable KMS by default, unless explicitly overriden by
diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
new file mode 100644
index 000000000000..692d96423079
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_selftest.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef __I915_SELFTEST_H__
+#define __I915_SELFTEST_H__
+
+struct pci_dev;
+struct drm_i915_private;
+
+struct i915_selftest {
+	unsigned long timeout_jiffies;
+	unsigned int timeout_ms;
+	unsigned int random_seed;
+	int mock;
+	int live;
+};
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+extern struct i915_selftest i915_selftest;
+
+int i915_mock_selftests(void);
+int i915_live_selftests(struct pci_dev *pdev);
+
+/* We extract the function declarations from i915_mock_selftests.h and
+ * i915_live_selftests.h Add your unit test declarations there!
+ *
+ * Mock unit tests are run very early upon module load, before the driver
+ * is probed. All hardware interactions, as well as other subsystems, must
+ * be "mocked".
+ *
+ * Live unit tests are run after the driver is loaded - all hardware
+ * interactions are real.
+ */
+#define selftest(name, func) int func(void);
+#include "i915_mock_selftests.h"
+#undef selftest
+#define selftest(name, func) int func(struct drm_i915_private *i915);
+#include "i915_live_selftests.h"
+#undef selftest
+
+struct i915_subtest {
+	int (*func)(void *data);
+	const char *name;
+};
+
+int i915_subtests(const char *caller,
+		  const struct i915_subtest *st,
+		  unsigned int count,
+		  void *data);
+#define i915_subtests(T, data) \
+	(i915_subtests)(__func__, T, ARRAY_SIZE(T), data)
+
+#define SUBTEST(x) { x, #x }
+
+#define I915_SELFTEST_DECLARE(x) x
+#define I915_SELFTEST_ONLY(x) unlikely(x)
+
+#else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
+
+static inline int i915_mock_selftests(void) { return 0; }
+static inline int i915_live_selftests(struct pci_dev *pdev) { return 0; }
+
+#define I915_SELFTEST_DECLARE(x)
+#define I915_SELFTEST_ONLY(x) 0
+
+#endif
+
+/* Using the i915_selftest_ prefix becomes a little unwieldy with the helpers.
+ * Instead we use the igt_ shorthand, in reference to the intel-gpu-tools
+ * suite of uabi test cases (which includes a test runner for our selftests).
+ */
+
+#define IGT_TIMEOUT(name__) \
+	unsigned long name__ = jiffies + i915_selftest.timeout_jiffies
+
+__printf(2, 3)
+bool igt_timeout(unsigned long timeout, const char *fmt, ...);
+
+#define igt_timeout(t, fmt, ...) \
+	(igt_timeout)((t), KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
+
+#endif /* !__I915_SELFTEST_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
new file mode 100644
index 000000000000..f3e17cb10e05
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -0,0 +1,11 @@
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as subtest__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * The function should be of type int function(void). It may be conditionally
+ * compiled using #if IS_ENABLED(DRM_I915_SELFTEST).
+ *
+ * Tests are executed in order by igt/drv_selftest
+ */
+selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
new file mode 100644
index 000000000000..69e97a2ba4a6
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -0,0 +1,11 @@
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as subtest__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * The function should be of type int function(void). It may be conditionally
+ * compiled using #if IS_ENABLED(DRM_I915_SELFTEST).
+ *
+ * Tests are executed in order by igt/drv_selftest
+ */
+selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
diff --git a/drivers/gpu/drm/i915/selftests/i915_random.c b/drivers/gpu/drm/i915/selftests/i915_random.c
new file mode 100644
index 000000000000..606a237fed17
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_random.c
@@ -0,0 +1,63 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "i915_random.h"
+
+static inline u32 i915_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+{
+	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
+}
+
+void i915_random_reorder(unsigned int *order, unsigned int count,
+			 struct rnd_state *state)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < count; ++i) {
+		BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
+		j = i915_prandom_u32_max_state(count, state);
+		swap(order[i], order[j]);
+	}
+}
+
+unsigned int *i915_random_order(unsigned int count, struct rnd_state *state)
+{
+	unsigned int *order, i;
+
+	order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
+	if (!order)
+		return order;
+
+	for (i = 0; i < count; i++)
+		order[i] = i;
+
+	i915_random_reorder(order, count, state);
+	return order;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_random.h b/drivers/gpu/drm/i915/selftests/i915_random.h
new file mode 100644
index 000000000000..b18d5cdd9874
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_random.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_SELFTESTS_RANDOM_H__
+#define __I915_SELFTESTS_RANDOM_H__
+
+#include <linux/random.h>
+
+#include "i915_selftest.h"
+
+#define I915_RND_STATE_INITIALIZER(x) ({				\
+	struct rnd_state state__;					\
+	prandom_seed_state(&state__, (x));				\
+	state__;							\
+})
+
+#define I915_RND_STATE(name__) \
+	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(i915_selftest.random_seed)
+
+#define I915_RND_SUBSTATE(name__, parent__) \
+	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(prandom_u32_state(&(parent__)))
+
+unsigned int *i915_random_order(unsigned int count,
+				struct rnd_state *state);
+void i915_random_reorder(unsigned int *order,
+			 unsigned int count,
+			 struct rnd_state *state);
+
+#endif /* !__I915_SELFTESTS_RANDOM_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/i915_selftest.c b/drivers/gpu/drm/i915/selftests/i915_selftest.c
new file mode 100644
index 000000000000..686218655678
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c
@@ -0,0 +1,247 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/random.h>
+
+#include "i915_drv.h"
+#include "i915_selftest.h"
+
+struct i915_selftest i915_selftest __read_mostly = {
+	.timeout_ms = 1000,
+};
+
+int i915_mock_sanitycheck(void)
+{
+	pr_info("i915: %s() - ok!\n", __func__);
+	return 0;
+}
+
+int i915_live_sanitycheck(struct drm_i915_private *i915)
+{
+	pr_info("%s: %s() - ok!\n", i915->drm.driver->name, __func__);
+	return 0;
+}
+
+enum {
+#define selftest(name, func) mock_##name,
+#include "i915_mock_selftests.h"
+#undef selftest
+};
+
+enum {
+#define selftest(name, func) live_##name,
+#include "i915_live_selftests.h"
+#undef selftest
+};
+
+struct selftest {
+	bool enabled;
+	const char *name;
+	union {
+		int (*mock)(void);
+		int (*live)(struct drm_i915_private *);
+	};
+};
+
+#define selftest(n, f) [mock_##n] = { .name = #n, .mock = f },
+static struct selftest mock_selftests[] = {
+#include "i915_mock_selftests.h"
+};
+#undef selftest
+
+#define selftest(n, f) [live_##n] = { .name = #n, .live = f },
+static struct selftest live_selftests[] = {
+#include "i915_live_selftests.h"
+};
+#undef selftest
+
+/* Embed the line number into the parameter name so that we can order tests */
+#define selftest(n, func) selftest_0(n, func, param(n))
+#define param(n) __PASTE(igt__, __PASTE(__PASTE(__LINE__, __), mock_##n))
+#define selftest_0(n, func, id) \
+module_param_named(id, mock_selftests[mock_##n].enabled, bool, 0400);
+#include "i915_mock_selftests.h"
+#undef selftest_0
+#undef param
+
+#define param(n) __PASTE(igt__, __PASTE(__PASTE(__LINE__, __), live_##n))
+#define selftest_0(n, func, id) \
+module_param_named(id, live_selftests[live_##n].enabled, bool, 0400);
+#include "i915_live_selftests.h"
+#undef selftest_0
+#undef param
+#undef selftest
+
+static void set_default_test_all(struct selftest *st, unsigned int count)
+{
+	unsigned int i;
+
+	for (i = 0; i < count; i++)
+		if (st[i].enabled)
+			return;
+
+	for (i = 0; i < count; i++)
+		st[i].enabled = true;
+}
+
+static int run_selftests(const char *name,
+			 struct selftest *st,
+			 unsigned int count,
+			 void *data)
+{
+	int err = 0;
+
+	while (!i915_selftest.random_seed)
+		i915_selftest.random_seed = get_random_int();
+
+	i915_selftest.timeout_jiffies =
+		i915_selftest.timeout_ms ?
+		msecs_to_jiffies_timeout(i915_selftest.timeout_ms) :
+		MAX_SCHEDULE_TIMEOUT;
+
+	set_default_test_all(st, count);
+
+	pr_info("i915: Performing %s selftests with st_random_seed=0x%x st_timeout=%u\n",
+		name, i915_selftest.random_seed, i915_selftest.timeout_ms);
+
+	/* Tests are listed in order in i915_*_selftests.h */
+	for (; count--; st++) {
+		if (!st->enabled)
+			continue;
+
+		cond_resched();
+		if (signal_pending(current))
+			return -EINTR;
+
+		pr_debug("i915: Running %s\n", st->name);
+		if (data)
+			err = st->live(data);
+		else
+			err = st->mock();
+		if (err == -EINTR && !signal_pending(current))
+			err = 0;
+		if (err)
+			break;
+	}
+
+	if (WARN(err > 0 || err == -ENOTTY,
+		 "%s returned %d, conflicting with selftest's magic values!\n",
+		 st->name, err))
+		err = -1;
+
+	rcu_barrier();
+	return err;
+}
+
+#define run_selftests(x, data) \
+	(run_selftests)(#x, x##_selftests, ARRAY_SIZE(x##_selftests), data)
+
+int i915_mock_selftests(void)
+{
+	int err;
+
+	if (!i915_selftest.mock)
+		return 0;
+
+	err = run_selftests(mock, NULL);
+	if (err)
+		return err;
+
+	if (i915_selftest.mock < 0)
+		return 1;
+
+	return 0;
+}
+
+int i915_live_selftests(struct pci_dev *pdev)
+{
+	int err;
+
+	if (!i915_selftest.live)
+		return 0;
+
+	err = run_selftests(live, to_i915(pci_get_drvdata(pdev)));
+	if (err) {
+		i915_selftest.live = err;
+		return err;
+	}
+
+	if (i915_selftest.live < 0) {
+		i915_selftest.live = -ENOTTY;
+		return 1;
+	}
+
+	return 0;
+}
+
+int (i915_subtests)(const char *caller,
+		    const struct i915_subtest *st,
+		    unsigned int count,
+		    void *data)
+{
+	int err;
+
+	for (; count--; st++) {
+		cond_resched();
+		if (signal_pending(current))
+			return -EINTR;
+
+		pr_debug("i915: Running %s/%s\n", caller, st->name);
+		err = st->func(data);
+		if (err && err != -EINTR) {
+			pr_err("i915/%s: %s failed with error %d\n",
+			       caller, st->name, err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+bool (igt_timeout)(unsigned long timeout, const char *fmt, ...)
+{
+	va_list va;
+
+	if (!signal_pending(current)) {
+		cond_resched();
+		if (time_before(jiffies, timeout))
+			return false;
+	}
+
+	if (fmt) {
+		va_start(va, fmt);
+		vprintk(fmt, va);
+		va_end(va);
+	}
+
+	return true;
+}
+
+module_param_named(st_random_seed, i915_selftest.random_seed, uint, 0400);
+module_param_named(st_timeout, i915_selftest.timeout_ms, uint, 0400);
+
+module_param_named_unsafe(mock_selftests, i915_selftest.mock, int, 0400);
+MODULE_PARM_DESC(mock_selftests, "Run selftests before loading, using mock hardware (0:disabled [default], 1:run tests then load driver, -1:run tests then exit module)");
+
+module_param_named_unsafe(live_selftests, i915_selftest.live, int, 0400);
+MODULE_PARM_DESC(live_selftests, "Run selftests after driver initialisation on the live system (0:disabled [default], 1:run tests then continue, -1:run tests then exit module)");
diff --git a/tools/testing/selftests/drivers/gpu/i915.sh b/tools/testing/selftests/drivers/gpu/i915.sh
index d407f0fa1e3a..c06d6e8a8dcc 100755
--- a/tools/testing/selftests/drivers/gpu/i915.sh
+++ b/tools/testing/selftests/drivers/gpu/i915.sh
@@ -7,6 +7,7 @@ if ! /sbin/modprobe -q -r i915; then
 fi
 
 if /sbin/modprobe -q i915 mock_selftests=-1; then
+	/sbin/modprobe -q -r i915
 	echo "drivers/gpu/i915: ok"
 else
 	echo "drivers/gpu/i915: [FAIL]"
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation
  2017-01-19 11:41 More selftests Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release() Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 02/38] drm/i915: Provide a hook for selftests Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-02-01 11:17   ` Tvrtko Ursulin
  2017-01-19 11:41 ` [PATCH v2 04/38] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove Chris Wilson
                   ` (35 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Start exercising the scattergather lists, especially looking at
iteration after coalescing.

v2: Comment on the peculiarity of table construction (i.e. why this
sg_table might be interesting).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                    |  11 +-
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/scatterlist.c       | 331 +++++++++++++++++++++
 3 files changed, 340 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/scatterlist.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7c3895230a8a..04edbcaffa25 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2215,17 +2215,17 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
 	mutex_unlock(&obj->mm.lock);
 }
 
-static void i915_sg_trim(struct sg_table *orig_st)
+static bool i915_sg_trim(struct sg_table *orig_st)
 {
 	struct sg_table new_st;
 	struct scatterlist *sg, *new_sg;
 	unsigned int i;
 
 	if (orig_st->nents == orig_st->orig_nents)
-		return;
+		return false;
 
 	if (sg_alloc_table(&new_st, orig_st->nents, GFP_KERNEL | __GFP_NOWARN))
-		return;
+		return false;
 
 	new_sg = new_st.sgl;
 	for_each_sg(orig_st->sgl, sg, orig_st->nents, i) {
@@ -2238,6 +2238,7 @@ static void i915_sg_trim(struct sg_table *orig_st)
 	sg_free_table(orig_st);
 
 	*orig_st = new_st;
+	return true;
 }
 
 static struct sg_table *
@@ -4937,3 +4938,7 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 	sg = i915_gem_object_get_sg(obj, n, &offset);
 	return sg_dma_address(sg) + (offset << PAGE_SHIFT);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/scatterlist.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 69e97a2ba4a6..5f0bdda42ed8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -9,3 +9,4 @@
  * Tests are executed in order by igt/drv_selftest
  */
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
+selftest(scatterlist, scatterlist_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
new file mode 100644
index 000000000000..4000fdd1b7db
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
@@ -0,0 +1,331 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/prime_numbers.h>
+#include <linux/random.h>
+
+#include "i915_selftest.h"
+
+#define PFN_BIAS (1 << 10)
+
+struct pfn_table {
+	struct sg_table st;
+	unsigned long start, end;
+};
+
+typedef unsigned int (*npages_fn_t)(unsigned long n,
+				    unsigned long count,
+				    struct rnd_state *rnd);
+
+static noinline int expect_pfn_sg(struct pfn_table *pt,
+				  npages_fn_t npages_fn,
+				  struct rnd_state *rnd,
+				  const char *who,
+				  unsigned long timeout)
+{
+	struct scatterlist *sg;
+	unsigned long pfn, n;
+
+	pfn = pt->start;
+	for_each_sg(pt->st.sgl, sg, pt->st.nents, n) {
+		struct page *page = sg_page(sg);
+		unsigned int npages = npages_fn(n, pt->st.nents, rnd);
+
+		if (page_to_pfn(page) != pfn) {
+			pr_err("%s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg)\n",
+			       who, pfn, page_to_pfn(page));
+			return -EINVAL;
+		}
+
+		if (sg->length != npages * PAGE_SIZE) {
+			pr_err("%s: %s copied wrong sg length, expected size %lu, found %u (using for_each_sg)\n",
+			       __func__, who, npages * PAGE_SIZE, sg->length);
+			return -EINVAL;
+		}
+
+		if (igt_timeout(timeout, "%s timed out\n", who))
+			return -EINTR;
+
+		pfn += npages;
+	}
+	if (pfn != pt->end) {
+		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
+		       __func__, who, pt->end, pfn);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static noinline int expect_pfn_sg_page_iter(struct pfn_table *pt,
+					    const char *who,
+					    unsigned long timeout)
+{
+	struct sg_page_iter sgiter;
+	unsigned long pfn;
+
+	pfn = pt->start;
+	for_each_sg_page(pt->st.sgl, &sgiter, pt->st.nents, 0) {
+		struct page *page = sg_page_iter_page(&sgiter);
+
+		if (page != pfn_to_page(pfn)) {
+			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg_page)\n",
+			       __func__, who, pfn, page_to_pfn(page));
+			return -EINVAL;
+		}
+
+		if (igt_timeout(timeout, "%s timed out\n", who))
+			return -EINTR;
+
+		pfn++;
+	}
+	if (pfn != pt->end) {
+		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
+		       __func__, who, pt->end, pfn);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static noinline int expect_pfn_sgtiter(struct pfn_table *pt,
+				       const char *who,
+				       unsigned long timeout)
+{
+	struct sgt_iter sgt;
+	struct page *page;
+	unsigned long pfn;
+
+	pfn = pt->start;
+	for_each_sgt_page(page, sgt, &pt->st) {
+		if (page != pfn_to_page(pfn)) {
+			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sgt_page)\n",
+			       __func__, who, pfn, page_to_pfn(page));
+			return -EINVAL;
+		}
+
+		if (igt_timeout(timeout, "%s timed out\n", who))
+			return -EINTR;
+
+		pfn++;
+	}
+	if (pfn != pt->end) {
+		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
+		       __func__, who, pt->end, pfn);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int expect_pfn_sgtable(struct pfn_table *pt,
+			      npages_fn_t npages_fn,
+			      struct rnd_state *rnd,
+			      const char *who,
+			      unsigned long timeout)
+{
+	int err;
+
+	err = expect_pfn_sg(pt, npages_fn, rnd, who, timeout);
+	if (err)
+		return err;
+
+	err = expect_pfn_sg_page_iter(pt, who, timeout);
+	if (err)
+		return err;
+
+	err = expect_pfn_sgtiter(pt, who, timeout);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static unsigned int one(unsigned long n,
+			unsigned long count,
+			struct rnd_state *rnd)
+{
+	return 1;
+}
+
+static unsigned int grow(unsigned long n,
+			 unsigned long count,
+			 struct rnd_state *rnd)
+{
+	return n + 1;
+}
+
+static unsigned int shrink(unsigned long n,
+			   unsigned long count,
+			   struct rnd_state *rnd)
+{
+	return count - n;
+}
+
+static unsigned int random(unsigned long n,
+			   unsigned long count,
+			   struct rnd_state *rnd)
+{
+	return 1 + (prandom_u32_state(rnd) % 1024);
+}
+
+static bool alloc_table(struct pfn_table *pt,
+			unsigned long count, unsigned long max,
+			npages_fn_t npages_fn,
+			struct rnd_state *rnd)
+{
+	struct scatterlist *sg;
+	unsigned long n, pfn;
+
+	if (sg_alloc_table(&pt->st, max,
+			   GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN))
+		return false;
+
+	/* count should be less than 20 to prevent overflowing sg->length */
+	GEM_BUG_ON(overflows_type(count * PAGE_SIZE, sg->length));
+
+	/* Construct a table where each scatterlist contains different number
+	 * of entries. The idea is to check that we can iterate the individual
+	 * pages from inside the coalesced lists.
+	 */
+	pt->start = PFN_BIAS;
+	pfn = pt->start;
+	sg = pt->st.sgl;
+	for (n = 0; n < count; n++) {
+		unsigned long npages = npages_fn(n, count, rnd);
+
+		if (n)
+			sg = sg_next(sg);
+		sg_set_page(sg, pfn_to_page(pfn), npages * PAGE_SIZE, 0);
+
+		GEM_BUG_ON(page_to_pfn(sg_page(sg)) != pfn);
+		GEM_BUG_ON(sg->length != npages * PAGE_SIZE);
+		GEM_BUG_ON(sg->offset != 0);
+
+		pfn += npages;
+	}
+	sg_mark_end(sg);
+	pt->st.nents = n;
+	pt->end = pfn;
+
+	return true;
+}
+
+static const npages_fn_t npages_funcs[] = {
+	one,
+	grow,
+	shrink,
+	random,
+	NULL,
+};
+
+static int igt_sg_alloc(void *ignored)
+{
+	IGT_TIMEOUT(end_time);
+	const unsigned long max_order = 20; /* approximating a 4GiB object */
+	struct rnd_state prng;
+	unsigned long prime;
+
+	for_each_prime_number(prime, max_order) {
+		unsigned long size = BIT(prime);
+		int offset;
+
+		for (offset = -1; offset <= 1; offset++) {
+			unsigned long sz = size + offset;
+			const npages_fn_t *npages;
+			struct pfn_table pt;
+			int err;
+
+			for (npages = npages_funcs; *npages; npages++) {
+				prandom_seed_state(&prng,
+						   i915_selftest.random_seed);
+				if (!alloc_table(&pt, sz, sz, *npages, &prng))
+					return 0; /* out of memory, give up */
+
+				prandom_seed_state(&prng,
+						   i915_selftest.random_seed);
+				err = expect_pfn_sgtable(&pt, *npages, &prng,
+							 "sg_alloc_table",
+							 end_time);
+				sg_free_table(&pt.st);
+				if (err)
+					return err;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int igt_sg_trim(void *ignored)
+{
+	IGT_TIMEOUT(end_time);
+	const unsigned long max = PAGE_SIZE; /* not prime! */
+	struct pfn_table pt;
+	unsigned long prime;
+
+	for_each_prime_number(prime, max) {
+		const npages_fn_t *npages;
+		int err;
+
+		for (npages = npages_funcs; *npages; npages++) {
+			struct rnd_state prng;
+
+			prandom_seed_state(&prng, i915_selftest.random_seed);
+			if (!alloc_table(&pt, prime, max, *npages, &prng))
+				return 0; /* out of memory, give up */
+
+			err = 0;
+			if (i915_sg_trim(&pt.st)) {
+				if (pt.st.orig_nents != prime ||
+				    pt.st.nents != prime) {
+					pr_err("i915_sg_trim failed (nents %u, orig_nents %u), expected %lu\n",
+					       pt.st.nents, pt.st.orig_nents, prime);
+					err = -EINVAL;
+				} else {
+					prandom_seed_state(&prng,
+							   i915_selftest.random_seed);
+					err = expect_pfn_sgtable(&pt,
+								 *npages, &prng,
+								 "i915_sg_trim",
+								 end_time);
+				}
+			}
+			sg_free_table(&pt.st);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
+int scatterlist_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_sg_alloc),
+		SUBTEST(igt_sg_trim),
+	};
+
+	return i915_subtests(tests, NULL);
+}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 04/38] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (2 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 05/38] drm/i915: Add unit tests for the breadcrumb rbtree, completion Chris Wilson
                   ` (34 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

First retroactive test, make sure that the waiters are in global seqno
order after random inserts and removals.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/intel_breadcrumbs.c           |  21 +++
 drivers/gpu/drm/i915/intel_engine_cs.c             |   4 +
 drivers/gpu/drm/i915/intel_ringbuffer.h            |   2 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 176 +++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_engine.c       |  55 +++++++
 drivers/gpu/drm/i915/selftests/mock_engine.h       |  32 ++++
 7 files changed, 291 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_engine.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_engine.h

diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index fcfa423d08bd..5682c8aa8064 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -109,6 +109,18 @@ static void __intel_breadcrumbs_enable_irq(struct intel_breadcrumbs *b)
 	if (b->rpm_wakelock)
 		return;
 
+	if (I915_SELFTEST_ONLY(b->mock)) {
+		/* For our mock objects we want to avoid interaction
+		 * with the real hardware (which is not set up). So
+		 * we simply pretend we have enabled the powerwell
+		 * and the irq, and leave it up to the mock
+		 * implementation to call intel_engine_wakeup()
+		 * itself when it wants to simulate a user interrupt,
+		 */
+		b->rpm_wakelock = true;
+		return;
+	}
+
 	/* Since we are waiting on a request, the GPU should be busy
 	 * and should have its own rpm reference. For completeness,
 	 * record an rpm reference for ourselves to cover the
@@ -143,6 +155,11 @@ static void __intel_breadcrumbs_disable_irq(struct intel_breadcrumbs *b)
 	if (!b->rpm_wakelock)
 		return;
 
+	if (I915_SELFTEST_ONLY(b->mock)) {
+		b->rpm_wakelock = false;
+		return;
+	}
+
 	if (b->irq_enabled) {
 		irq_disable(engine);
 		b->irq_enabled = false;
@@ -661,3 +678,7 @@ unsigned int intel_breadcrumbs_busy(struct drm_i915_private *i915)
 
 	return mask;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_breadcrumbs.c"
+#endif
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 371acf109e34..479db257d687 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -482,3 +482,7 @@ void intel_engine_get_instdone(struct intel_engine_cs *engine,
 		break;
 	}
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_engine.c"
+#endif
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 79c2b8d72322..eba238095497 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -5,6 +5,7 @@
 #include "i915_gem_batch_pool.h"
 #include "i915_gem_request.h"
 #include "i915_gem_timeline.h"
+#include "i915_selftest.h"
 
 #define I915_CMD_HASH_ORDER 9
 
@@ -244,6 +245,7 @@ struct intel_engine_cs {
 
 		bool irq_enabled : 1;
 		bool rpm_wakelock : 1;
+		I915_SELFTEST_DECLARE(bool mock : 1);
 	} breadcrumbs;
 
 	/*
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 5f0bdda42ed8..80458e2a2b04 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -10,3 +10,4 @@
  */
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
+selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
new file mode 100644
index 000000000000..a89b25988cc2
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -0,0 +1,176 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_random.h"
+#include "i915_selftest.h"
+
+#include "mock_engine.h"
+
+static int check_rbtree(struct intel_engine_cs *engine,
+			const unsigned long *bitmap,
+			const struct intel_wait *waiters,
+			const int count)
+{
+	struct intel_breadcrumbs *b = &engine->breadcrumbs;
+	struct rb_node *rb;
+	int n;
+
+	if (&b->first_wait->node != rb_first(&b->waiters)) {
+		pr_err("First waiter does not match first element of wait-tree\n");
+		return -EINVAL;
+	}
+
+	n = find_first_bit(bitmap, count);
+	for (rb = rb_first(&b->waiters); rb; rb = rb_next(rb)) {
+		struct intel_wait *w = container_of(rb, typeof(*w), node);
+		int idx = w - waiters;
+
+		if (!test_bit(idx, bitmap)) {
+			pr_err("waiter[%d, seqno=%d] removed but still in wait-tree\n",
+			       idx, w->seqno);
+			return -EINVAL;
+		}
+
+		if (n != idx) {
+			pr_err("waiter[%d, seqno=%d] does not match expected next element in tree [%d]\n",
+			       idx, w->seqno, n);
+			return -EINVAL;
+		}
+
+		n = find_next_bit(bitmap, count, n + 1);
+	}
+
+	return 0;
+}
+
+static int check_rbtree_empty(struct intel_engine_cs *engine)
+{
+	struct intel_breadcrumbs *b = &engine->breadcrumbs;
+
+	if (b->first_wait) {
+		pr_err("Empty breadcrumbs still has a waiter\n");
+		return -EINVAL;
+	}
+
+	if (!RB_EMPTY_ROOT(&b->waiters)) {
+		pr_err("Empty breadcrumbs, but wait-tree not empty\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int igt_random_insert_remove(void *arg)
+{
+	const u32 seqno_bias = 0x1000;
+	I915_RND_STATE(prng);
+	struct intel_engine_cs *engine = arg;
+	struct intel_wait *waiters;
+	const int count = 4096;
+	int *in_order, *out_order;
+	unsigned long *bitmap;
+	int err = -ENOMEM;
+	int n;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	bitmap = kcalloc(DIV_ROUND_UP(count, BITS_PER_LONG), sizeof(*bitmap),
+			 GFP_TEMPORARY);
+	if (!bitmap)
+		goto out_waiters;
+
+	in_order = i915_random_order(count, &prng);
+	if (!in_order)
+		goto out_bitmap;
+
+	out_order = i915_random_order(count, &prng);
+	if (!out_order)
+		goto out_order;
+
+	for (n = 0; n < count; n++)
+		intel_wait_init(&waiters[n], seqno_bias + n);
+
+	err = check_rbtree(engine, bitmap, waiters, count);
+	if (err)
+		goto err;
+
+	/* Add and remove waiters into the rbtree in random order. At each
+	 * step, we verify that the rbtree is correctly ordered.
+	 */
+	for (n = 0; n < count; n++) {
+		int i = in_order[n];
+
+		intel_engine_add_wait(engine, &waiters[i]);
+		__set_bit(i, bitmap);
+
+		err = check_rbtree(engine, bitmap, waiters, count);
+		if (err)
+			goto err;
+	}
+	for (n = 0; n < count; n++) {
+		int i = out_order[n];
+
+		intel_engine_remove_wait(engine, &waiters[i]);
+		__clear_bit(i, bitmap);
+
+		err = check_rbtree(engine, bitmap, waiters, count);
+		if (err)
+			goto err;
+	}
+
+	err = check_rbtree_empty(engine);
+err:
+	kfree(out_order);
+out_order:
+	kfree(in_order);
+out_bitmap:
+	kfree(bitmap);
+out_waiters:
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
+int intel_breadcrumbs_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_random_insert_remove),
+	};
+	struct intel_engine_cs *engine;
+	int err;
+
+	engine = mock_engine("mock");
+	if (!engine)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, engine);
+	kfree(engine);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.c b/drivers/gpu/drm/i915/selftests/mock_engine.c
new file mode 100644
index 000000000000..4a090bbe807b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.c
@@ -0,0 +1,55 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_engine.h"
+
+struct intel_engine_cs *mock_engine(const char *name)
+{
+	struct intel_engine_cs *engine;
+	static int id;
+
+	engine = kzalloc(sizeof(*engine) + PAGE_SIZE, GFP_KERNEL);
+	if (!engine)
+		return NULL;
+
+	/* minimal engine setup for seqno */
+	engine->name = name;
+	engine->id = id++;
+	engine->status_page.page_addr = (void *)(engine + 1);
+
+	/* minimal breadcrumbs init */
+	spin_lock_init(&engine->breadcrumbs.lock);
+	engine->breadcrumbs.mock = true;
+
+	return engine;
+}
+
+void mock_engine_flush(struct intel_engine_cs *engine)
+{
+}
+
+void mock_engine_reset(struct intel_engine_cs *engine)
+{
+	intel_write_status_page(engine, I915_GEM_HWS_INDEX, 0);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.h b/drivers/gpu/drm/i915/selftests/mock_engine.h
new file mode 100644
index 000000000000..0ae9a94aaa1e
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_ENGINE_H__
+#define __MOCK_ENGINE_H__
+
+struct intel_engine_cs *mock_engine(const char *name);
+void mock_engine_flush(struct intel_engine_cs *engine);
+void mock_engine_reset(struct intel_engine_cs *engine);
+
+#endif /* !__MOCK_ENGINE_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 05/38] drm/i915: Add unit tests for the breadcrumb rbtree, completion
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (3 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 04/38] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
                   ` (33 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Second retroactive test, make sure that the waiters are removed from the
global wait-tree when their seqno completes.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 107 +++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_engine.h       |   6 ++
 2 files changed, 113 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index a89b25988cc2..245e5f1b8373 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -64,6 +64,27 @@ static int check_rbtree(struct intel_engine_cs *engine,
 	return 0;
 }
 
+static int check_completion(struct intel_engine_cs *engine,
+			    const unsigned long *bitmap,
+			    const struct intel_wait *waiters,
+			    const int count)
+{
+	int n;
+
+	for (n = 0; n < count; n++) {
+		if (intel_wait_complete(&waiters[n]) != !!test_bit(n, bitmap))
+			continue;
+
+		pr_err("waiter[%d, seqno=%d] is %s, but expected %s\n",
+		       n, waiters[n].seqno,
+		       intel_wait_complete(&waiters[n]) ? "complete" : "active",
+		       test_bit(n, bitmap) ? "active" : "complete");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int check_rbtree_empty(struct intel_engine_cs *engine)
 {
 	struct intel_breadcrumbs *b = &engine->breadcrumbs;
@@ -157,10 +178,96 @@ static int igt_random_insert_remove(void *arg)
 	return err;
 }
 
+static int igt_insert_complete(void *arg)
+{
+	const u32 seqno_bias = 0x1000;
+	struct intel_engine_cs *engine = arg;
+	struct intel_wait *waiters;
+	const int count = 4096;
+	unsigned long *bitmap;
+	int err = -ENOMEM;
+	int n, m;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	bitmap = kcalloc(DIV_ROUND_UP(count, BITS_PER_LONG), sizeof(*bitmap),
+			 GFP_TEMPORARY);
+	if (!bitmap)
+		goto out_waiters;
+
+	for (n = 0; n < count; n++) {
+		intel_wait_init(&waiters[n], n + seqno_bias);
+		intel_engine_add_wait(engine, &waiters[n]);
+		__set_bit(n, bitmap);
+	}
+	err = check_rbtree(engine, bitmap, waiters, count);
+	if (err)
+		goto err;
+
+	/* On each step, we advance the seqno so that several waiters are then
+	 * complete (we increase the seqno by increasingly larger values to
+	 * retire more and more waiters at once). All retired waiters should
+	 * be woken and removed from the rbtree, and so that we check.
+	 */
+	for (n = 0; n < count; n = m) {
+		int seqno = 2 * n;
+
+		GEM_BUG_ON(find_first_bit(bitmap, count) != n);
+
+		if (intel_wait_complete(&waiters[n])) {
+			pr_err("waiter[%d, seqno=%d] completed too early\n",
+			       n, waiters[n].seqno);
+			err = -EINVAL;
+			goto err;
+		}
+
+		/* complete the following waiters */
+		mock_seqno_advance(engine, seqno + seqno_bias);
+		for (m = n; m <= seqno; m++) {
+			if (m == count)
+				break;
+
+			GEM_BUG_ON(!test_bit(m, bitmap));
+			__clear_bit(m, bitmap);
+		}
+
+		intel_engine_remove_wait(engine, &waiters[n]);
+		RB_CLEAR_NODE(&waiters[n].node);
+
+		err = check_rbtree(engine, bitmap, waiters, count);
+		if (err) {
+			pr_err("rbtree corrupt after seqno advance to %d\n",
+			       seqno + seqno_bias);
+			goto err;
+		}
+
+		err = check_completion(engine, bitmap, waiters, count);
+		if (err) {
+			pr_err("completions after seqno advance to %d failed\n",
+			       seqno + seqno_bias);
+			goto err;
+		}
+	}
+
+	err = check_rbtree_empty(engine);
+err:
+	kfree(bitmap);
+out_waiters:
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
 int intel_breadcrumbs_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_random_insert_remove),
+		SUBTEST(igt_insert_complete),
 	};
 	struct intel_engine_cs *engine;
 	int err;
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.h b/drivers/gpu/drm/i915/selftests/mock_engine.h
index 0ae9a94aaa1e..9cfe9671f860 100644
--- a/drivers/gpu/drm/i915/selftests/mock_engine.h
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.h
@@ -29,4 +29,10 @@ struct intel_engine_cs *mock_engine(const char *name);
 void mock_engine_flush(struct intel_engine_cs *engine);
 void mock_engine_reset(struct intel_engine_cs *engine);
 
+static inline void mock_seqno_advance(struct intel_engine_cs *engine, u32 seqno)
+{
+	intel_write_status_page(engine, I915_GEM_HWS_INDEX, seqno);
+	intel_engine_wakeup(engine);
+}
+
 #endif /* !__MOCK_ENGINE_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (4 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 05/38] drm/i915: Add unit tests for the breadcrumb rbtree, completion Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-02-01 11:27   ` Tvrtko Ursulin
  2017-01-19 11:41 ` [PATCH v2 07/38] drm/i915: Mock the GEM device for self-testing Chris Wilson
                   ` (32 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Third retroactive test, make sure that the seqno waiters are woken.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 171 +++++++++++++++++++++
 1 file changed, 171 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index 245e5f1b8373..fe45c4c7c757 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -263,11 +263,182 @@ static int igt_insert_complete(void *arg)
 	return err;
 }
 
+struct igt_wakeup {
+	struct task_struct *tsk;
+	atomic_t *ready, *set, *done;
+	struct intel_engine_cs *engine;
+	unsigned long flags;
+	wait_queue_head_t *wq;
+	u32 seqno;
+};
+
+static int wait_atomic(atomic_t *p)
+{
+	schedule();
+	return 0;
+}
+
+static int wait_atomic_timeout(atomic_t *p)
+{
+	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
+}
+
+static int igt_wakeup_thread(void *arg)
+{
+	struct igt_wakeup *w = arg;
+	struct intel_wait wait;
+
+	while (!kthread_should_stop()) {
+		DEFINE_WAIT(ready);
+
+		for (;;) {
+			prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
+			if (atomic_read(w->ready) == 0)
+				break;
+
+			schedule();
+		}
+		finish_wait(w->wq, &ready);
+		if (atomic_dec_and_test(w->set))
+			wake_up_atomic_t(w->set);
+
+		if (test_bit(0, &w->flags))
+			break;
+
+		intel_wait_init(&wait, w->seqno);
+		intel_engine_add_wait(w->engine, &wait);
+		for (;;) {
+			set_current_state(TASK_UNINTERRUPTIBLE);
+			if (i915_seqno_passed(intel_engine_get_seqno(w->engine),
+					      w->seqno))
+				break;
+
+			schedule();
+		}
+		intel_engine_remove_wait(w->engine, &wait);
+		__set_current_state(TASK_RUNNING);
+
+		if (atomic_dec_and_test(w->done))
+			wake_up_atomic_t(w->done);
+	}
+
+	if (atomic_dec_and_test(w->done))
+		wake_up_atomic_t(w->done);
+	return 0;
+}
+
+static void igt_wake_all_sync(atomic_t *ready,
+			      atomic_t *set,
+			      atomic_t *done,
+			      wait_queue_head_t *wq,
+			      int count)
+{
+	atomic_set(set, count);
+	atomic_set(done, count);
+
+	atomic_set(ready, 0);
+	wake_up_all(wq);
+
+	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
+	atomic_set(ready, count);
+}
+
+static int igt_wakeup(void *arg)
+{
+	const int state = TASK_UNINTERRUPTIBLE;
+	struct intel_engine_cs *engine = arg;
+	struct igt_wakeup *waiters;
+	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	const int count = 4096;
+	const u32 max_seqno = count / 4;
+	atomic_t ready, set, done;
+	int err = -ENOMEM;
+	int n, step;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	/* Create a large number of threads, each waiting on a random seqno.
+	 * Multiple waiters will be waiting for the same seqno.
+	 */
+	atomic_set(&ready, count);
+	for (n = 0; n < count; n++) {
+		waiters[n].wq = &wq;
+		waiters[n].ready = &ready;
+		waiters[n].set = &set;
+		waiters[n].done = &done;
+		waiters[n].engine = engine;
+		waiters[n].flags = 0;
+
+		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
+					     "i915/igt:%d", n);
+		if (IS_ERR(waiters[n].tsk))
+			goto out_waiters;
+
+		get_task_struct(waiters[n].tsk);
+	}
+
+	for (step = 1; step <= max_seqno; step <<= 1) {
+		u32 seqno;
+
+		for (n = 0; n < count; n++)
+			waiters[n].seqno = 1 + get_random_int() % max_seqno;
+
+		mock_seqno_advance(engine, 0);
+		igt_wake_all_sync(&ready, &set, &done, &wq, count);
+
+		for (seqno = 1; seqno <= max_seqno + step; seqno += step) {
+			usleep_range(50, 500);
+			mock_seqno_advance(engine, seqno);
+		}
+		GEM_BUG_ON(intel_engine_get_seqno(engine) < 1 + max_seqno);
+
+		err = wait_on_atomic_t(&done, wait_atomic_timeout, state);
+		if (err) {
+			pr_err("Timed out waiting for %d remaining waiters\n",
+			       atomic_read(&done));
+			break;
+		}
+
+		err = check_rbtree_empty(engine);
+		if (err)
+			break;
+	}
+
+out_waiters:
+	for (n = 0; n < count; n++) {
+		if (IS_ERR(waiters[n].tsk))
+			break;
+
+		set_bit(0, &waiters[n].flags);
+	}
+
+	igt_wake_all_sync(&ready, &set, &done, &wq, n);
+	wait_on_atomic_t(&done, wait_atomic, state);
+
+	for (n = 0; n < count; n++) {
+		if (IS_ERR(waiters[n].tsk))
+			break;
+
+		kthread_stop(waiters[n].tsk);
+		put_task_struct(waiters[n].tsk);
+	}
+
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
 int intel_breadcrumbs_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_random_insert_remove),
 		SUBTEST(igt_insert_complete),
+		SUBTEST(igt_wakeup),
 	};
 	struct intel_engine_cs *engine;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 07/38] drm/i915: Mock the GEM device for self-testing
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (5 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 08/38] drm/i915: Mock a GGTT " Chris Wilson
                   ` (31 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

A simulacrum of drm_i915_private to let us pretend interactions with the
device.

v2: Tidy init error paths

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c                  |   4 +
 drivers/gpu/drm/i915/i915_gem.c                  |   1 +
 drivers/gpu/drm/i915/selftests/mock_drm.c        |  54 ++++++++++++
 drivers/gpu/drm/i915/selftests/mock_drm.h        |  31 +++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 106 +++++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.h |   8 ++
 drivers/gpu/drm/i915/selftests/mock_gem_object.h |   8 ++
 7 files changed, 212 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_drm.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_drm.h
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gem_device.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gem_device.h
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gem_object.h

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 4ae69ebe166e..0eb56c2ce68e 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2602,3 +2602,7 @@ static struct drm_driver driver = {
 	.minor = DRIVER_MINOR,
 	.patchlevel = DRIVER_PATCHLEVEL,
 };
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_drm.c"
+#endif
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 04edbcaffa25..d93abed05631 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4941,4 +4941,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/scatterlist.c"
+#include "selftests/mock_gem_device.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_drm.c b/drivers/gpu/drm/i915/selftests/mock_drm.c
new file mode 100644
index 000000000000..113dec05c7dc
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_drm.c
@@ -0,0 +1,54 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_drm.h"
+
+static inline struct inode fake_inode(struct drm_i915_private *i915)
+{
+	return (struct inode){ .i_rdev = i915->drm.primary->index };
+}
+
+struct drm_file *mock_file(struct drm_i915_private *i915)
+{
+	struct inode inode = fake_inode(i915);
+	struct file filp = {};
+	struct drm_file *file;
+	int err;
+
+	err = drm_open(&inode, &filp);
+	if (unlikely(err))
+		return ERR_PTR(err);
+
+	file = filp.private_data;
+	file->authenticated = true;
+	return file;
+}
+
+void mock_file_free(struct drm_i915_private *i915, struct drm_file *file)
+{
+	struct inode inode = fake_inode(i915);
+	struct file filp = { .private_data = file };
+
+	drm_release(&inode, &filp);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_drm.h b/drivers/gpu/drm/i915/selftests/mock_drm.h
new file mode 100644
index 000000000000..b39beee9f8f6
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_drm.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_DRM_H
+#define __MOCK_DRM_H
+
+struct drm_file *mock_file(struct drm_i915_private *i915);
+void mock_file_free(struct drm_i915_private *i915, struct drm_file *file);
+
+#endif /* !__MOCK_DRM_H */
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
new file mode 100644
index 000000000000..0d5484467a4b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -0,0 +1,106 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/pm_runtime.h>
+
+#include "mock_gem_device.h"
+#include "mock_gem_object.h"
+
+static void mock_device_release(struct drm_device *dev)
+{
+	struct drm_i915_private *i915 = to_i915(dev);
+
+	i915_gem_drain_freed_objects(i915);
+
+	kmem_cache_destroy(i915->objects);
+	put_device(&i915->drm.pdev->dev);
+}
+
+static struct drm_driver mock_driver = {
+	.name = "mock",
+	.driver_features = DRIVER_GEM,
+	.release = mock_device_release,
+
+	.gem_close_object = i915_gem_close_object,
+	.gem_free_object_unlocked = i915_gem_free_object,
+};
+
+static void release_dev(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	kfree(pdev);
+}
+
+struct drm_i915_private *mock_gem_device(void)
+{
+	struct drm_i915_private *i915;
+	struct pci_dev *pdev;
+	int err;
+
+	i915 = kzalloc(sizeof(*i915), GFP_TEMPORARY);
+	if (!i915)
+		goto err;
+
+	pdev = kzalloc(sizeof(*pdev), GFP_TEMPORARY);
+	if (!pdev)
+		goto free_device;
+
+	device_initialize(&pdev->dev);
+	pdev->dev.release = release_dev;
+	dev_set_name(&pdev->dev, "mock");
+	dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+	pci_set_drvdata(pdev, i915);
+
+	err = drm_dev_init(&i915->drm, &mock_driver, &pdev->dev);
+	if (err) {
+		pr_err("Failed to initialise mock GEM device: err=%d\n", err);
+		goto put_device;
+	}
+	i915->drm.pdev = pdev;
+	i915->drm.dev_private = i915;
+
+	mkwrite_device_info(i915)->gen = -1;
+
+	spin_lock_init(&i915->mm.object_stat_lock);
+
+	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
+	init_llist_head(&i915->mm.free_list);
+
+	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
+	if (!i915->objects)
+		goto put_device;
+
+	return i915;
+
+put_device:
+	put_device(&pdev->dev);
+free_device:
+	kfree(i915);
+err:
+	return NULL;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.h b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
new file mode 100644
index 000000000000..7ff7c848f731
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
@@ -0,0 +1,8 @@
+#ifndef __MOCK_GEM_DEVICE_H__
+#define __MOCK_GEM_DEVICE_H__
+
+#include "i915_drv.h"
+
+struct drm_i915_private *mock_gem_device(void);
+
+#endif /* !__MOCK_GEM_DEVICE_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_object.h b/drivers/gpu/drm/i915/selftests/mock_gem_object.h
new file mode 100644
index 000000000000..9fbf67321662
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_object.h
@@ -0,0 +1,8 @@
+#ifndef __MOCK_GEM_OBJECT_H__
+#define __MOCK_GEM_OBJECT_H__
+
+struct mock_object {
+	struct drm_i915_gem_object base;
+};
+
+#endif /* !__MOCK_GEM_OBJECT_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 08/38] drm/i915: Mock a GGTT for self-testing
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (6 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 07/38] drm/i915: Mock the GEM device for self-testing Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 09/38] drm/i915: Mock infrastructure for request emission Chris Wilson
                   ` (30 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

A very simple mockery, just a random manager and timeline. Useful for
inserting objects and ordering retirement; and not much else.

v2: mock_fini_ggtt() to complement mock_init_ggtt().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c              |   4 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c |  31 +++++
 drivers/gpu/drm/i915/selftests/mock_gtt.c        | 138 +++++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gtt.h        |  35 ++++++
 4 files changed, 208 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gtt.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gtt.h

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 30d8dbd04f0b..0d5d2b6ca723 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3742,3 +3742,7 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
 					   size, alignment, color,
 					   start, end, DRM_MM_INSERT_EVICT);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_gtt.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 0d5484467a4b..73fefa78a2cf 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -26,6 +26,7 @@
 
 #include "mock_gem_device.h"
 #include "mock_gem_object.h"
+#include "mock_gtt.h"
 
 static void mock_device_release(struct drm_device *dev)
 {
@@ -33,6 +34,12 @@ static void mock_device_release(struct drm_device *dev)
 
 	i915_gem_drain_freed_objects(i915);
 
+	mutex_lock(&i915->drm.struct_mutex);
+	mock_fini_ggtt(i915);
+	i915_gem_timeline_fini(&i915->gt.global_timeline);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	kmem_cache_destroy(i915->vmas);
 	kmem_cache_destroy(i915->objects);
 	put_device(&i915->drm.pdev->dev);
 }
@@ -84,19 +91,43 @@ struct drm_i915_private *mock_gem_device(void)
 	i915->drm.pdev = pdev;
 	i915->drm.dev_private = i915;
 
+	/* Using the global GTT may ask questions about KMS users, so prepare */
+	drm_mode_config_init(&i915->drm);
+
 	mkwrite_device_info(i915)->gen = -1;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 
 	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
 	init_llist_head(&i915->mm.free_list);
+	INIT_LIST_HEAD(&i915->mm.unbound_list);
+	INIT_LIST_HEAD(&i915->mm.bound_list);
 
 	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
 	if (!i915->objects)
 		goto put_device;
 
+	i915->vmas = KMEM_CACHE(i915_vma, SLAB_HWCACHE_ALIGN);
+	if (!i915->vmas)
+		goto err_objects;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	INIT_LIST_HEAD(&i915->gt.timelines);
+	err = i915_gem_timeline_init__global(i915);
+	if (err) {
+		mutex_unlock(&i915->drm.struct_mutex);
+		goto err_vmas;
+	}
+
+	mock_init_ggtt(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
 	return i915;
 
+err_vmas:
+	kmem_cache_destroy(i915->vmas);
+err_objects:
+	kmem_cache_destroy(i915->objects);
 put_device:
 	put_device(&pdev->dev);
 free_device:
diff --git a/drivers/gpu/drm/i915/selftests/mock_gtt.c b/drivers/gpu/drm/i915/selftests/mock_gtt.c
new file mode 100644
index 000000000000..a61309c7cb3e
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gtt.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_gtt.h"
+
+static void mock_insert_page(struct i915_address_space *vm,
+			     dma_addr_t addr,
+			     u64 offset,
+			     enum i915_cache_level level,
+			     u32 flags)
+{
+}
+
+static void mock_insert_entries(struct i915_address_space *vm,
+				struct sg_table *st,
+				u64 start,
+				enum i915_cache_level level, u32 flags)
+{
+}
+
+static int mock_bind_ppgtt(struct i915_vma *vma,
+			   enum i915_cache_level cache_level,
+			   u32 flags)
+{
+	GEM_BUG_ON(flags & I915_VMA_GLOBAL_BIND);
+	vma->pages = vma->obj->mm.pages;
+	vma->flags |= I915_VMA_LOCAL_BIND;
+	return 0;
+}
+
+static void mock_unbind_ppgtt(struct i915_vma *vma)
+{
+}
+
+static void mock_cleanup(struct i915_address_space *vm)
+{
+}
+
+struct i915_hw_ppgtt *
+mock_ppgtt(struct drm_i915_private *i915,
+	   const char *name)
+{
+	struct i915_hw_ppgtt *ppgtt;
+
+	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+	if (!ppgtt)
+		return NULL;
+
+	kref_init(&ppgtt->ref);
+	ppgtt->base.i915 = i915;
+	ppgtt->base.total = round_down(U64_MAX, PAGE_SIZE);
+	ppgtt->base.file = ERR_PTR(-ENODEV);
+
+	INIT_LIST_HEAD(&ppgtt->base.active_list);
+	INIT_LIST_HEAD(&ppgtt->base.inactive_list);
+	INIT_LIST_HEAD(&ppgtt->base.unbound_list);
+
+	INIT_LIST_HEAD(&ppgtt->base.global_link);
+	drm_mm_init(&ppgtt->base.mm, 0, ppgtt->base.total);
+	i915_gem_timeline_init(i915, &ppgtt->base.timeline, name);
+
+	ppgtt->base.clear_range = nop_clear_range;
+	ppgtt->base.insert_page = mock_insert_page;
+	ppgtt->base.insert_entries = mock_insert_entries;
+	ppgtt->base.bind_vma = mock_bind_ppgtt;
+	ppgtt->base.unbind_vma = mock_unbind_ppgtt;
+	ppgtt->base.cleanup = mock_cleanup;
+
+	return ppgtt;
+}
+
+static int mock_bind_ggtt(struct i915_vma *vma,
+			  enum i915_cache_level cache_level,
+			  u32 flags)
+{
+	int err;
+
+	err = i915_get_ggtt_vma_pages(vma);
+	if (err)
+		return err;
+
+	vma->flags |= I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
+	return 0;
+}
+
+static void mock_unbind_ggtt(struct i915_vma *vma)
+{
+}
+
+void mock_init_ggtt(struct drm_i915_private *i915)
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	INIT_LIST_HEAD(&i915->vm_list);
+
+	ggtt->base.i915 = i915;
+
+	ggtt->mappable_base = 0;
+	ggtt->mappable_end = 2048 * PAGE_SIZE;
+	ggtt->base.total = 4096 * PAGE_SIZE;
+
+	ggtt->base.clear_range = nop_clear_range;
+	ggtt->base.insert_page = mock_insert_page;
+	ggtt->base.insert_entries = mock_insert_entries;
+	ggtt->base.bind_vma = mock_bind_ggtt;
+	ggtt->base.unbind_vma = mock_unbind_ggtt;
+	ggtt->base.cleanup = mock_cleanup;
+
+	i915_address_space_init(&ggtt->base, i915, "global");
+}
+
+void mock_fini_ggtt(struct drm_i915_private *i915)
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	i915_address_space_fini(&ggtt->base);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_gtt.h b/drivers/gpu/drm/i915/selftests/mock_gtt.h
new file mode 100644
index 000000000000..9a0a833bb545
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gtt.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_GTT_H
+#define __MOCK_GTT_H
+
+void mock_init_ggtt(struct drm_i915_private *i915);
+void mock_fini_ggtt(struct drm_i915_private *i915);
+
+struct i915_hw_ppgtt *
+mock_ppgtt(struct drm_i915_private *i915,
+	   const char *name);
+
+#endif /* !__MOCK_GTT_H */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 09/38] drm/i915: Mock infrastructure for request emission
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (7 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 08/38] drm/i915: Mock a GGTT " Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 10/38] drm/i915: Create a fake object for testing huge allocations Chris Wilson
                   ` (29 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Create a fake engine that runs requests using a timer to simulate hw.

v2: Prevent leaks of ctx->name along error paths

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_context.c            |   4 +
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c |  11 +-
 drivers/gpu/drm/i915/selftests/mock_context.c      |  78 ++++++++++
 drivers/gpu/drm/i915/selftests/mock_context.h      |  34 ++++
 drivers/gpu/drm/i915/selftests/mock_engine.c       | 172 +++++++++++++++++++--
 drivers/gpu/drm/i915/selftests/mock_engine.h       |  18 ++-
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |  95 +++++++++++-
 drivers/gpu/drm/i915/selftests/mock_gem_device.h   |   1 +
 drivers/gpu/drm/i915/selftests/mock_request.c      |  44 ++++++
 drivers/gpu/drm/i915/selftests/mock_request.h      |  44 ++++++
 10 files changed, 483 insertions(+), 18 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_context.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_context.h
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_request.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_request.h

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 17f90c618208..5dc596a86ab1 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -1164,3 +1164,7 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
 
 	return 0;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_context.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index fe45c4c7c757..a023b8472d25 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -25,6 +25,7 @@
 #include "i915_random.h"
 #include "i915_selftest.h"
 
+#include "mock_gem_device.h"
 #include "mock_engine.h"
 
 static int check_rbtree(struct intel_engine_cs *engine,
@@ -440,15 +441,15 @@ int intel_breadcrumbs_mock_selftests(void)
 		SUBTEST(igt_insert_complete),
 		SUBTEST(igt_wakeup),
 	};
-	struct intel_engine_cs *engine;
+	struct drm_i915_private *i915;
 	int err;
 
-	engine = mock_engine("mock");
-	if (!engine)
+	i915 = mock_gem_device();
+	if (!i915)
 		return -ENOMEM;
 
-	err = i915_subtests(tests, engine);
-	kfree(engine);
+	err = i915_subtests(tests, i915->engine[RCS]);
+	drm_dev_unref(&i915->drm);
 
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/selftests/mock_context.c b/drivers/gpu/drm/i915/selftests/mock_context.c
new file mode 100644
index 000000000000..8d3a90c3f8ac
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_context.c
@@ -0,0 +1,78 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_context.h"
+#include "mock_gtt.h"
+
+struct i915_gem_context *
+mock_context(struct drm_i915_private *i915,
+	     const char *name)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return NULL;
+
+	kref_init(&ctx->ref);
+	INIT_LIST_HEAD(&ctx->link);
+	ctx->i915 = i915;
+
+	ret = ida_simple_get(&i915->context_hw_ida,
+			     0, MAX_CONTEXT_HW_ID, GFP_KERNEL);
+	if (ret < 0)
+		goto err_free;
+	ctx->hw_id = ret;
+
+	if (name) {
+		ctx->name = kstrdup(name, GFP_KERNEL);
+		if (!ctx->name)
+			goto err_put;
+
+		ctx->ppgtt = mock_ppgtt(i915, name);
+		if (!ctx->ppgtt)
+			goto err_put;
+	}
+
+	return ctx;
+
+err_free:
+	kfree(ctx);
+	return NULL;
+
+err_put:
+	i915_gem_context_set_closed(ctx);
+	i915_gem_context_put(ctx);
+	return NULL;
+}
+
+void mock_context_close(struct i915_gem_context *ctx)
+{
+	i915_gem_context_set_closed(ctx);
+
+	i915_ppgtt_close(&ctx->ppgtt->base);
+
+	i915_gem_context_put(ctx);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_context.h b/drivers/gpu/drm/i915/selftests/mock_context.h
new file mode 100644
index 000000000000..2427e5c0916a
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_context.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_CONTEXT_H
+#define __MOCK_CONTEXT_H
+
+struct i915_gem_context *
+mock_context(struct drm_i915_private *i915,
+	     const char *name);
+
+void mock_context_close(struct i915_gem_context *ctx);
+
+#endif /* !__MOCK_CONTEXT_H */
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.c b/drivers/gpu/drm/i915/selftests/mock_engine.c
index 4a090bbe807b..8d5ba037064c 100644
--- a/drivers/gpu/drm/i915/selftests/mock_engine.c
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.c
@@ -23,33 +23,185 @@
  */
 
 #include "mock_engine.h"
+#include "mock_request.h"
 
-struct intel_engine_cs *mock_engine(const char *name)
+static struct mock_request *first_request(struct mock_engine *engine)
 {
-	struct intel_engine_cs *engine;
+	return list_first_entry_or_null(&engine->hw_queue,
+					struct mock_request,
+					link);
+}
+
+static void hw_delay_complete(unsigned long data)
+{
+	struct mock_engine *engine = (typeof(engine))data;
+	struct mock_request *request;
+
+	spin_lock(&engine->hw_lock);
+
+	request = first_request(engine);
+	if (request) {
+		list_del_init(&request->link);
+		mock_seqno_advance(&engine->base, request->base.global_seqno);
+	}
+
+	request = first_request(engine);
+	if (request)
+		mod_timer(&engine->hw_delay, jiffies + request->delay);
+
+	spin_unlock(&engine->hw_lock);
+}
+
+static int mock_context_pin(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx)
+{
+	i915_gem_context_get(ctx);
+	return 0;
+}
+
+static void mock_context_unpin(struct intel_engine_cs *engine,
+			       struct i915_gem_context *ctx)
+{
+	i915_gem_context_put(ctx);
+}
+
+static int mock_request_alloc(struct drm_i915_gem_request *request)
+{
+	struct mock_request *mock = container_of(request, typeof(*mock), base);
+
+	INIT_LIST_HEAD(&mock->link);
+	mock->delay = 0;
+
+	request->ring = request->engine->buffer;
+	return 0;
+}
+
+static int mock_emit_flush(struct drm_i915_gem_request *request,
+			   unsigned int flags)
+{
+	return 0;
+}
+
+static void mock_emit_breadcrumb(struct drm_i915_gem_request *request,
+				 u32 *flags)
+{
+}
+
+static void mock_submit_request(struct drm_i915_gem_request *request)
+{
+	struct mock_request *mock = container_of(request, typeof(*mock), base);
+	struct mock_engine *engine =
+		container_of(request->engine, typeof(*engine), base);
+
+	i915_gem_request_submit(request);
+	GEM_BUG_ON(!request->global_seqno);
+
+	spin_lock_irq(&engine->hw_lock);
+	list_add_tail(&mock->link, &engine->hw_queue);
+	if (mock->link.prev == &engine->hw_queue)
+		mod_timer(&engine->hw_delay, jiffies + mock->delay);
+	spin_unlock_irq(&engine->hw_lock);
+}
+
+static struct intel_ring *mock_ring(struct intel_engine_cs *engine)
+{
+	const unsigned long sz = roundup_pow_of_two(sizeof(struct intel_ring));
+	struct intel_ring *ring;
+
+	ring = kzalloc(sizeof(*ring) + sz, GFP_KERNEL);
+	if (!ring)
+		return NULL;
+
+	ring->engine = engine;
+	ring->size = sz;
+	ring->effective_size = sz;
+	ring->vaddr = (void *)(ring + 1);
+
+	INIT_LIST_HEAD(&ring->request_list);
+	ring->last_retired_head = -1;
+	intel_ring_update_space(ring);
+
+	return ring;
+}
+
+struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
+				    const char *name)
+{
+	struct mock_engine *engine;
 	static int id;
 
 	engine = kzalloc(sizeof(*engine) + PAGE_SIZE, GFP_KERNEL);
 	if (!engine)
 		return NULL;
 
-	/* minimal engine setup for seqno */
-	engine->name = name;
-	engine->id = id++;
-	engine->status_page.page_addr = (void *)(engine + 1);
+	engine->base.buffer = mock_ring(&engine->base);
+	if (!engine->base.buffer) {
+		kfree(engine);
+		return NULL;
+	}
 
-	/* minimal breadcrumbs init */
-	spin_lock_init(&engine->breadcrumbs.lock);
-	engine->breadcrumbs.mock = true;
+	/* minimal engine setup for requests */
+	engine->base.i915 = i915;
+	engine->base.name = name;
+	engine->base.id = id++;
+	engine->base.status_page.page_addr = (void *)(engine + 1);
 
-	return engine;
+	engine->base.context_pin = mock_context_pin;
+	engine->base.context_unpin = mock_context_unpin;
+	engine->base.request_alloc = mock_request_alloc;
+	engine->base.emit_flush = mock_emit_flush;
+	engine->base.emit_breadcrumb = mock_emit_breadcrumb;
+	engine->base.submit_request = mock_submit_request;
+
+	engine->base.timeline =
+		&i915->gt.global_timeline.engine[engine->base.id];
+
+	intel_engine_init_breadcrumbs(&engine->base);
+	engine->base.breadcrumbs.mock = true; /* prevent touching HW for irqs */
+
+	/* fake hw queue */
+	spin_lock_init(&engine->hw_lock);
+	setup_timer(&engine->hw_delay,
+		    hw_delay_complete,
+		    (unsigned long)engine);
+	INIT_LIST_HEAD(&engine->hw_queue);
+
+	return &engine->base;
 }
 
 void mock_engine_flush(struct intel_engine_cs *engine)
 {
+	struct mock_engine *mock =
+		container_of(engine, typeof(*mock), base);
+	struct mock_request *request, *rn;
+
+	del_timer_sync(&mock->hw_delay);
+
+	spin_lock_irq(&mock->hw_lock);
+	list_for_each_entry_safe(request, rn, &mock->hw_queue, link) {
+		list_del_init(&request->link);
+		mock_seqno_advance(&mock->base, request->base.global_seqno);
+	}
+	spin_unlock_irq(&mock->hw_lock);
 }
 
 void mock_engine_reset(struct intel_engine_cs *engine)
 {
 	intel_write_status_page(engine, I915_GEM_HWS_INDEX, 0);
 }
+
+void mock_engine_free(struct intel_engine_cs *engine)
+{
+	struct mock_engine *mock =
+		container_of(engine, typeof(*mock), base);
+
+	GEM_BUG_ON(timer_pending(&mock->hw_delay));
+
+	if (engine->last_retired_context)
+		engine->context_unpin(engine, engine->last_retired_context);
+
+	intel_engine_fini_breadcrumbs(engine);
+
+	kfree(engine->buffer);
+	kfree(engine);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.h b/drivers/gpu/drm/i915/selftests/mock_engine.h
index 9cfe9671f860..d080d0a10a4f 100644
--- a/drivers/gpu/drm/i915/selftests/mock_engine.h
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.h
@@ -25,9 +25,25 @@
 #ifndef __MOCK_ENGINE_H__
 #define __MOCK_ENGINE_H__
 
-struct intel_engine_cs *mock_engine(const char *name);
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+
+#include "intel_ringbuffer.h"
+
+struct mock_engine {
+	struct intel_engine_cs base;
+
+	spinlock_t hw_lock;
+	struct list_head hw_queue;
+	struct timer_list hw_delay;
+};
+
+struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
+				    const char *name);
 void mock_engine_flush(struct intel_engine_cs *engine);
 void mock_engine_reset(struct intel_engine_cs *engine);
+void mock_engine_free(struct intel_engine_cs *engine);
 
 static inline void mock_seqno_advance(struct intel_engine_cs *engine, u32 seqno)
 {
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 73fefa78a2cf..ee1caa7a8079 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -24,14 +24,46 @@
 
 #include <linux/pm_runtime.h>
 
+#include "mock_engine.h"
+#include "mock_context.h"
+#include "mock_request.h"
 #include "mock_gem_device.h"
 #include "mock_gem_object.h"
 #include "mock_gtt.h"
 
+void mock_device_flush(struct drm_i915_private *i915)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	lockdep_assert_held(&i915->drm.struct_mutex);
+
+	for_each_engine(engine, i915, id)
+		mock_engine_flush(engine);
+
+	i915_gem_retire_requests(i915);
+}
+
 static void mock_device_release(struct drm_device *dev)
 {
 	struct drm_i915_private *i915 = to_i915(dev);
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	mock_device_flush(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
 
+	cancel_delayed_work_sync(&i915->gt.retire_work);
+	cancel_delayed_work_sync(&i915->gt.idle_work);
+
+	mutex_lock(&i915->drm.struct_mutex);
+	for_each_engine(engine, i915, id)
+		mock_engine_free(engine);
+	i915_gem_context_fini(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drain_workqueue(i915->wq);
 	i915_gem_drain_freed_objects(i915);
 
 	mutex_lock(&i915->drm.struct_mutex);
@@ -39,6 +71,10 @@ static void mock_device_release(struct drm_device *dev)
 	i915_gem_timeline_fini(&i915->gt.global_timeline);
 	mutex_unlock(&i915->drm.struct_mutex);
 
+	destroy_workqueue(i915->wq);
+
+	kmem_cache_destroy(i915->dependencies);
+	kmem_cache_destroy(i915->requests);
 	kmem_cache_destroy(i915->vmas);
 	kmem_cache_destroy(i915->objects);
 	put_device(&i915->drm.pdev->dev);
@@ -60,9 +96,19 @@ static void release_dev(struct device *dev)
 	kfree(pdev);
 }
 
+static void mock_retire_work_handler(struct work_struct *work)
+{
+}
+
+static void mock_idle_work_handler(struct work_struct *work)
+{
+}
+
 struct drm_i915_private *mock_gem_device(void)
 {
 	struct drm_i915_private *i915;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
 	struct pci_dev *pdev;
 	int err;
 
@@ -98,36 +144,81 @@ struct drm_i915_private *mock_gem_device(void)
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 
+	init_waitqueue_head(&i915->gpu_error.wait_queue);
+	init_waitqueue_head(&i915->gpu_error.reset_queue);
+
+	i915->wq = alloc_ordered_workqueue("mock", 0);
+	if (!i915->wq)
+		goto put_device;
+
 	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
 	init_llist_head(&i915->mm.free_list);
 	INIT_LIST_HEAD(&i915->mm.unbound_list);
 	INIT_LIST_HEAD(&i915->mm.bound_list);
 
+	ida_init(&i915->context_hw_ida);
+
+	INIT_DELAYED_WORK(&i915->gt.retire_work, mock_retire_work_handler);
+	INIT_DELAYED_WORK(&i915->gt.idle_work, mock_idle_work_handler);
+
+	i915->gt.awake = true;
+
 	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
 	if (!i915->objects)
-		goto put_device;
+		goto err_wq;
 
 	i915->vmas = KMEM_CACHE(i915_vma, SLAB_HWCACHE_ALIGN);
 	if (!i915->vmas)
 		goto err_objects;
 
+	i915->requests = KMEM_CACHE(mock_request,
+				    SLAB_HWCACHE_ALIGN |
+				    SLAB_RECLAIM_ACCOUNT |
+				    SLAB_DESTROY_BY_RCU);
+	if (!i915->requests)
+		goto err_vmas;
+
+	i915->dependencies = KMEM_CACHE(i915_dependency,
+					SLAB_HWCACHE_ALIGN |
+					SLAB_RECLAIM_ACCOUNT);
+	if (!i915->dependencies)
+		goto err_requests;
+
 	mutex_lock(&i915->drm.struct_mutex);
 	INIT_LIST_HEAD(&i915->gt.timelines);
 	err = i915_gem_timeline_init__global(i915);
 	if (err) {
 		mutex_unlock(&i915->drm.struct_mutex);
-		goto err_vmas;
+		goto err_dependencies;
 	}
 
 	mock_init_ggtt(i915);
 	mutex_unlock(&i915->drm.struct_mutex);
 
+	mkwrite_device_info(i915)->ring_mask = BIT(0);
+	i915->engine[RCS] = mock_engine(i915, "mock");
+	if (!i915->engine[RCS])
+		goto err_dependencies;
+
+	i915->kernel_context = mock_context(i915, NULL);
+	if (!i915->kernel_context)
+		goto err_engine;
+
 	return i915;
 
+err_engine:
+	for_each_engine(engine, i915, id)
+		mock_engine_free(engine);
+err_dependencies:
+	kmem_cache_destroy(i915->dependencies);
+err_requests:
+	kmem_cache_destroy(i915->requests);
 err_vmas:
 	kmem_cache_destroy(i915->vmas);
 err_objects:
 	kmem_cache_destroy(i915->objects);
+err_wq:
+	destroy_workqueue(i915->wq);
 put_device:
 	put_device(&pdev->dev);
 free_device:
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.h b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
index 7ff7c848f731..7eceff766957 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.h
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
@@ -4,5 +4,6 @@
 #include "i915_drv.h"
 
 struct drm_i915_private *mock_gem_device(void);
+void mock_device_flush(struct drm_i915_private *i915);
 
 #endif /* !__MOCK_GEM_DEVICE_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/mock_request.c b/drivers/gpu/drm/i915/selftests/mock_request.c
new file mode 100644
index 000000000000..e23242d1b88a
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_request.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_request.h"
+
+struct drm_i915_gem_request *
+mock_request(struct intel_engine_cs *engine,
+	     struct i915_gem_context *context,
+	     unsigned long delay)
+{
+	struct drm_i915_gem_request *request;
+	struct mock_request *mock;
+
+	/* NB the i915->requests slab cache is enlarged to fit mock_request */
+	request = i915_gem_request_alloc(engine, context);
+	if (!request)
+		return NULL;
+
+	mock = container_of(request, typeof(*mock), base);
+	mock->delay = delay;
+
+	return &mock->base;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_request.h b/drivers/gpu/drm/i915/selftests/mock_request.h
new file mode 100644
index 000000000000..9c739125cab5
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_request.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_REQUEST__
+#define __MOCK_REQUEST__
+
+#include <linux/list.h>
+
+#include "i915_gem_request.h"
+
+struct mock_request {
+	struct drm_i915_gem_request base;
+
+	struct list_head link;
+	unsigned long delay;
+};
+
+struct drm_i915_gem_request *
+mock_request(struct intel_engine_cs *engine,
+	     struct i915_gem_context *context,
+	     unsigned long delay);
+
+#endif /* !__MOCK_REQUEST__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 10/38] drm/i915: Create a fake object for testing huge allocations
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (8 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 09/38] drm/i915: Mock infrastructure for request emission Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 13:09   ` Matthew Auld
  2017-01-19 11:41 ` [PATCH v2 11/38] drm/i915: Add selftests for i915_gem_request Chris Wilson
                   ` (28 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

We would like to be able to exercise huge allocations even on memory
constrained devices. To do this we create an object that allocates only
a few pages and remaps them across its whole range - each page is reused
multiple times. We can therefore pretend we are rendering into a much
larger object.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c                  |   1 +
 drivers/gpu/drm/i915/i915_gem_object.h           |  20 ++--
 drivers/gpu/drm/i915/selftests/huge_gem_object.c | 135 +++++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/huge_gem_object.h |  33 ++++++
 4 files changed, 181 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_gem_object.c
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_gem_object.h

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d93abed05631..a03eeb6d85bf 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4942,4 +4942,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/scatterlist.c"
 #include "selftests/mock_gem_device.c"
+#include "selftests/huge_gem_object.c"
 #endif
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 290eaa7fc9eb..4114cc8a0b9b 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -167,14 +167,18 @@ struct drm_i915_gem_object {
 	/** Record of address bit 17 of each page at last unbind. */
 	unsigned long *bit_17;
 
-	struct i915_gem_userptr {
-		uintptr_t ptr;
-		unsigned read_only :1;
-
-		struct i915_mm_struct *mm;
-		struct i915_mmu_object *mmu_object;
-		struct work_struct *work;
-	} userptr;
+	union {
+		struct i915_gem_userptr {
+			uintptr_t ptr;
+			unsigned read_only :1;
+
+			struct i915_mm_struct *mm;
+			struct i915_mmu_object *mmu_object;
+			struct work_struct *work;
+		} userptr;
+
+		unsigned long scratch;
+	};
 
 	/** for phys allocated objects */
 	struct drm_dma_handle *phys_handle;
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
new file mode 100644
index 000000000000..f39294f4013f
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
@@ -0,0 +1,135 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "huge_gem_object.h"
+
+static void huge_free_pages(struct drm_i915_gem_object *obj,
+			    struct sg_table *pages)
+{
+	unsigned long nreal = obj->scratch / PAGE_SIZE;
+	struct scatterlist *sg;
+
+	for (sg = pages->sgl; sg && nreal--; sg = ____sg_next(sg))
+		__free_page(sg_page(sg));
+
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+static struct sg_table *
+huge_get_pages(struct drm_i915_gem_object *obj)
+{
+#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
+	const unsigned long nreal = obj->scratch / PAGE_SIZE;
+	const unsigned long npages = obj->base.size / PAGE_SIZE;
+	struct scatterlist *sg, *src, *end;
+	struct sg_table *pages;
+	unsigned long n;
+
+	pages = kmalloc(sizeof(*pages), GFP);
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	if (sg_alloc_table(pages, npages, GFP)) {
+		kfree(pages);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	sg = pages->sgl;
+	for (n = 0; n < nreal; n++) {
+		struct page *page;
+
+		page = alloc_page(GFP | __GFP_HIGHMEM);
+		if (!page) {
+			sg_mark_end(sg);
+			goto err;
+		}
+
+		sg_set_page(sg, page, PAGE_SIZE, 0);
+		sg = ____sg_next(sg);
+	}
+	if (nreal < npages) {
+		for (end = sg, src = pages->sgl; sg; sg = __sg_next(sg)) {
+			sg_set_page(sg, sg_page(src), PAGE_SIZE, 0);
+			src = ____sg_next(src);
+			if (src == end)
+				src = pages->sgl;
+		}
+	}
+
+	if (i915_gem_gtt_prepare_pages(obj, pages))
+		goto err;
+
+	return pages;
+
+err:
+	huge_free_pages(obj, pages);
+	return ERR_PTR(-ENOMEM);
+#undef GFP
+}
+
+static void huge_put_pages(struct drm_i915_gem_object *obj,
+			   struct sg_table *pages)
+{
+	i915_gem_gtt_finish_pages(obj, pages);
+	huge_free_pages(obj, pages);
+
+	obj->mm.dirty = false;
+}
+
+static const struct drm_i915_gem_object_ops huge_ops = {
+	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
+		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.get_pages = huge_get_pages,
+	.put_pages = huge_put_pages,
+};
+
+struct drm_i915_gem_object *
+huge_gem_object(struct drm_i915_private *i915,
+		phys_addr_t real_size,
+		phys_addr_t fake_size)
+{
+	struct drm_i915_gem_object *obj;
+
+	GEM_BUG_ON(!real_size || real_size > fake_size);
+	GEM_BUG_ON(!IS_ALIGNED(real_size, PAGE_SIZE));
+	GEM_BUG_ON(!IS_ALIGNED(fake_size, PAGE_SIZE));
+
+	if (overflows_type(fake_size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc(i915);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, fake_size);
+	i915_gem_object_init(obj, &huge_ops);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+	obj->cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
+	obj->scratch = real_size;
+
+	return obj;
+}
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.h b/drivers/gpu/drm/i915/selftests/huge_gem_object.h
new file mode 100644
index 000000000000..3022a7c93844
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HUGE_GEM_OBJECT_H
+#define __HUGE_GEM_OBJECT_H
+
+struct drm_i915_gem_object *
+huge_gem_object(struct drm_i915_private *i915,
+		phys_addr_t real_size,
+		phys_addr_t fake_size);
+
+#endif /* !__HUGE_GEM_OBJECT_H */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 11/38] drm/i915: Add selftests for i915_gem_request
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (9 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 10/38] drm/i915: Create a fake object for testing huge allocations Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 12/38] drm/i915: Add a simple request selftest for waiting Chris Wilson
                   ` (27 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Simple starting point for adding seltests for i915_gem_request, first
mock a device (with engines and contexts) that allows us to construct
and execute a request, along with waiting for the request to complete.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_request.c            |  5 ++
 drivers/gpu/drm/i915/selftests/i915_gem_request.c  | 68 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  1 +
 3 files changed, 74 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_request.c

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 72b7f7d9461d..bd2aeb290cad 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -1193,3 +1193,8 @@ void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
 	for_each_engine(engine, dev_priv, id)
 		engine_retire_requests(engine);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_request.c"
+#include "selftests/i915_gem_request.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
new file mode 100644
index 000000000000..c4ee6a6e7686
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -0,0 +1,68 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+#include "mock_gem_device.h"
+
+static int igt_add_request(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request;
+	int err = -ENOMEM;
+
+	/* Basic preliminary test to create a request and let it loose! */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	request = mock_request(i915->engine[RCS],
+			       i915->kernel_context,
+			       HZ / 10);
+	if (!request)
+		goto out_unlock;
+
+	i915_add_request(request);
+
+	err = 0;
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
+int i915_gem_request_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_add_request),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, i915);
+	drm_dev_unref(&i915->drm);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 80458e2a2b04..bda982404ad3 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -11,3 +11,4 @@
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
+selftest(requests, i915_gem_request_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 12/38] drm/i915: Add a simple request selftest for waiting
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (10 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 11/38] drm/i915: Add selftests for i915_gem_request Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 13/38] drm/i915: Add a simple fence selftest to i915_gem_request Chris Wilson
                   ` (26 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

A trivial kselftest to submit a request and wait upon it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 46 +++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index c4ee6a6e7686..6c2ca8d5a2ba 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -49,10 +49,56 @@ static int igt_add_request(void *arg)
 	return err;
 }
 
+static int igt_wait_request(void *arg)
+{
+	const long T = HZ / 4;
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request;
+	int err = -EINVAL;
+
+	/* Submit a request, then wait upon it */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	request = mock_request(i915->engine[RCS], i915->kernel_context, T);
+	if (!request) {
+		err = -ENOMEM;
+		goto out_unlock;
+	}
+
+	i915_add_request(request);
+
+	if (i915_gem_request_completed(request)) {
+		pr_err("request completed immediately!\n");
+		goto out_unlock;
+	}
+
+	if (i915_wait_request(request, I915_WAIT_LOCKED, T / 2) != -ETIME) {
+		pr_err("request wait succeeded (expected tiemout!)\n");
+		goto out_unlock;
+	}
+
+	if (i915_wait_request(request, I915_WAIT_LOCKED, T) == -ETIME) {
+		pr_err("request wait timed out!\n");
+		goto out_unlock;
+	}
+
+	if (!i915_gem_request_completed(request)) {
+		pr_err("request not complete after waiting!\n");
+		goto out_unlock;
+	}
+
+	err = 0;
+out_unlock:
+	mock_device_flush(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_add_request),
+		SUBTEST(igt_wait_request),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 13/38] drm/i915: Add a simple fence selftest to i915_gem_request
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (11 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 12/38] drm/i915: Add a simple request selftest for waiting Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests Chris Wilson
                   ` (25 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Do a quick selftest on in the interoperability of dma_fence_wait on a
i915_gem_request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 49 +++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index 6c2ca8d5a2ba..20bf10dd85ed 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -94,11 +94,60 @@ static int igt_wait_request(void *arg)
 	return err;
 }
 
+static int igt_fence_wait(void *arg)
+{
+	const long T = HZ / 4;
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request;
+	int err = -EINVAL;
+
+	/* Submit a request, treat it as a fence and wait upon it */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	request = mock_request(i915->engine[RCS], i915->kernel_context, T);
+	if (!request) {
+		err = -ENOMEM;
+		goto out_locked;
+	}
+
+	i915_add_request(request);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	if (dma_fence_is_signaled(&request->fence)) {
+		pr_err("fence signaled immediately!\n");
+		goto out_device;
+	}
+
+	if (dma_fence_wait_timeout(&request->fence, false, T / 2) != -ETIME) {
+		pr_err("fence wait success after submit (expected timeout)!\n");
+		goto out_device;
+	}
+
+	if (dma_fence_wait_timeout(&request->fence, false, T) <= 0) {
+		pr_err("fence wait timed out (expected success)!\n");
+		goto out_device;
+	}
+
+	if (!dma_fence_is_signaled(&request->fence)) {
+		pr_err("fence unsignaled after waiting!\n");
+		goto out_device;
+	}
+
+	err = 0;
+out_device:
+	mutex_lock(&i915->drm.struct_mutex);
+out_locked:
+	mock_device_flush(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_add_request),
 		SUBTEST(igt_wait_request),
+		SUBTEST(igt_fence_wait),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (12 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 13/38] drm/i915: Add a simple fence selftest to i915_gem_request Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-02-01  8:14   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
                   ` (24 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Just create several batches of requests and expect it to not fall over!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c  | 95 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 2 files changed, 96 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index 20bf10dd85ed..19103d87a4c3 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -22,6 +22,8 @@
  *
  */
 
+#include <linux/prime_numbers.h>
+
 #include "i915_selftest.h"
 
 #include "mock_gem_device.h"
@@ -161,3 +163,96 @@ int i915_gem_request_mock_selftests(void)
 
 	return err;
 }
+
+static int live_nop_request(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_engine_cs *engine;
+	unsigned int id;
+	int err;
+
+	/* Submit various sized batches of empty requests, to each engine
+	 * (individually), and wait for the batch to complete. We can check
+	 * the overhead of submitting requests to the hardware.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	for_each_engine(engine, i915, id) {
+		IGT_TIMEOUT(end_time);
+		struct drm_i915_gem_request *request;
+		unsigned int reset_count;
+		unsigned long n, prime;
+		ktime_t times[2] = {};
+
+		err = i915_gem_wait_for_idle(i915, I915_WAIT_LOCKED);
+		if (err) {
+			pr_err("Failed to idle GPU before %s(%s)\n",
+			       __func__, engine->name);
+			goto out_unlock;
+		}
+
+		i915->gpu_error.missed_irq_rings = 0;
+		reset_count = i915_reset_count(&i915->gpu_error);
+
+		for_each_prime_number_from(prime, 1, 8192) {
+			times[1] = ktime_get_raw();
+
+			for (n = 0; n < prime; n++) {
+				request = i915_gem_request_alloc(engine,
+								 i915->kernel_context);
+				if (IS_ERR(request)) {
+					err = PTR_ERR(request);
+					goto out_unlock;
+				}
+
+				i915_add_request(request);
+			}
+			i915_wait_request(request,
+					  I915_WAIT_LOCKED,
+					  MAX_SCHEDULE_TIMEOUT);
+
+			times[1] = ktime_sub(ktime_get_raw(), times[1]);
+			if (prime == 1)
+				times[0] = times[1];
+
+			if (igt_timeout(end_time,
+					"%s(%s) timed out: last batch size %lu\n",
+					__func__, engine->name, prime))
+				break;
+		}
+
+		if (reset_count != i915_reset_count(&i915->gpu_error)) {
+			pr_err("%s(%s): GPU was reset %d times!\n",
+			       __func__, engine->name,
+			       i915_reset_count(&i915->gpu_error) - reset_count);
+			err = -EIO;
+			goto out_unlock;
+		}
+
+		if (i915->gpu_error.missed_irq_rings) {
+			pr_err("%s(%s): Missed interrupts on rings %lx\n",
+			       __func__, engine->name,
+			       i915->gpu_error.missed_irq_rings);
+			err = -EIO;
+			goto out_unlock;
+		}
+
+		pr_info("Request latencies on %s: 1 = %lluns, %lu = %lluns\n",
+			engine->name,
+			ktime_to_ns(times[0]),
+			prime, div64_u64(ktime_to_ns(times[1]), prime));
+	}
+
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
+int i915_gem_request_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(live_nop_request),
+	};
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index f3e17cb10e05..09bf538826df 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -9,3 +9,4 @@
  * Tests are executed in order by igt/drv_selftest
  */
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
+selftest(requests, i915_gem_request_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (13 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-02-01  8:03   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 16/38] drm/i915: Add selftests for object allocation, phys Chris Wilson
                   ` (23 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Use a recursive-batch to busy spin on each to ensure that each is being
run simultaneously.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 178 ++++++++++++++++++++++
 1 file changed, 178 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index 19103d87a4c3..fb6f8acc1429 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -249,10 +249,188 @@ static int live_nop_request(void *arg)
 	return err;
 }
 
+static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
+{
+	struct i915_gem_context *ctx = i915->kernel_context;
+	struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct drm_i915_gem_object *obj;
+	const int gen = INTEL_GEN(i915);
+	struct i915_vma *vma;
+	u32 *cmd;
+	int err;
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err) {
+		i915_gem_object_put(obj);
+		return ERR_PTR(err);
+	}
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma)) {
+		i915_gem_object_put(obj);
+		return vma;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err) {
+		i915_gem_object_put(obj);
+		return ERR_PTR(err);
+	}
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(cmd)) {
+		i915_gem_object_put(obj);
+		return ERR_CAST(cmd);
+	}
+
+	if (gen >= 8) {
+		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
+		*cmd++ = lower_32_bits(vma->node.start);
+		*cmd++ = upper_32_bits(vma->node.start);
+	} else if (gen >= 6) {
+		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8;
+		*cmd++ = lower_32_bits(vma->node.start);
+	} else if (gen >= 4) {
+		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT;
+		*cmd++ = lower_32_bits(vma->node.start);
+	} else {
+		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT | 1;
+		*cmd++ = lower_32_bits(vma->node.start);
+	}
+
+	i915_gem_object_unpin_map(obj);
+
+	return vma;
+}
+
+static int live_all_engines(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_engine_cs *engine;
+	struct drm_i915_gem_request *request[I915_NUM_ENGINES];
+	struct i915_vma *batch;
+	unsigned int reset_count;
+	unsigned int id;
+	u32 *cmd;
+	int err;
+
+	/* Check we can submit requests to all engines simultaneously. We
+	 * send a recursive batch to each engine - checking that we don't
+	 * block doing so, and that they don't complete too soon.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	err = i915_gem_wait_for_idle(i915, I915_WAIT_LOCKED);
+	if (err) {
+		pr_err("Failed to idle GPU before %s\n", __func__);
+		goto out_unlock;
+	}
+
+	i915->gpu_error.missed_irq_rings = 0;
+	reset_count = i915_reset_count(&i915->gpu_error);
+
+	batch = recursive_batch(i915);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		pr_err("%s: Unable to create batch, err=%d\n", __func__, err);
+		goto out_unlock;
+	}
+
+	for_each_engine(engine, i915, id) {
+		request[id] = i915_gem_request_alloc(engine,
+						     i915->kernel_context);
+		if (IS_ERR(request[id])) {
+			err = PTR_ERR(request[id]);
+			pr_err("%s: Request allocation failed with err=%d\n",
+			       __func__, err);
+			goto out_request;
+		}
+
+		engine->emit_bb_start(request[id],
+				      batch->node.start,
+				      batch->node.size,
+				      0);
+		if (!i915_gem_object_has_active_reference(batch->obj)) {
+			i915_gem_object_get(batch->obj);
+			i915_gem_object_set_active_reference(batch->obj);
+		}
+
+		i915_vma_move_to_active(batch, request[id], 0);
+		i915_gem_request_get(request[id]);
+		i915_add_request(request[id]);
+	}
+
+	for_each_engine(engine, i915, id) {
+		if (i915_gem_request_completed(request[id])) {
+			pr_err("%s(%s): request completed too early!\n",
+			       __func__, engine->name);
+			err = -EINVAL;
+			goto out_request;
+		}
+	}
+
+	cmd = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		pr_err("%s: failed to WC map batch, err=%d\n", __func__, err);
+		goto out_request;
+	}
+	*cmd = MI_BATCH_BUFFER_END;
+	wmb();
+	i915_gem_object_unpin_map(batch->obj);
+
+	for_each_engine(engine, i915, id) {
+		long timeout;
+
+		timeout = i915_wait_request(request[id],
+					    I915_WAIT_LOCKED,
+					    MAX_SCHEDULE_TIMEOUT);
+		if (timeout < 0) {
+			err = timeout;
+			pr_err("%s: error waiting for request on %s, err=%d\n",
+			       __func__, engine->name, err);
+			goto out_request;
+		}
+
+		GEM_BUG_ON(!i915_gem_request_completed(request[id]));
+		i915_gem_request_put(request[id]);
+		request[id] = NULL;
+	}
+
+	if (reset_count != i915_reset_count(&i915->gpu_error)) {
+		pr_err("%s: GPU was reset %d times!\n", __func__,
+		       i915_reset_count(&i915->gpu_error) - reset_count);
+		err = -EIO;
+		goto out_request;
+	}
+
+	if (i915->gpu_error.missed_irq_rings) {
+		pr_err("%s: Missed interrupts on rings %lx\n", __func__,
+		       i915->gpu_error.missed_irq_rings);
+		err = -EIO;
+		goto out_request;
+	}
+
+out_request:
+	for_each_engine(engine, i915, id)
+		if (request[id])
+			i915_gem_request_put(request[id]);
+	i915_vma_put(batch);
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(live_nop_request),
+		SUBTEST(live_all_engines),
 	};
 	return i915_subtests(tests, i915);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 16/38] drm/i915: Add selftests for object allocation, phys
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (14 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 17/38] drm/i915: Add a live seftest for GEM objects Chris Wilson
                   ` (22 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

The phys object is a rarely used device (only very old machines require
a chunk of physically contiguous pages for a few hardware interactions).
As such, it is not exercised by CI and to combat that we want to add a
test that exercises the phys object on all platforms.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                    |   1 +
 drivers/gpu/drm/i915/selftests/i915_gem_object.c   | 120 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 3 files changed, 122 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_object.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a03eeb6d85bf..0772a4e0e3ef 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4943,4 +4943,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #include "selftests/scatterlist.c"
 #include "selftests/mock_gem_device.c"
 #include "selftests/huge_gem_object.c"
+#include "selftests/i915_gem_object.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
new file mode 100644
index 000000000000..dbb899ee24e0
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -0,0 +1,120 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+#include "mock_gem_device.h"
+
+static int igt_gem_object(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	int err = -ENOMEM;
+
+	/* Basic test to ensure we can create an object */
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		pr_err("i915_gem_object_create failed, err=%d\n", err);
+		goto out;
+	}
+
+	err = 0;
+	i915_gem_object_put(obj);
+out:
+	return err;
+}
+
+static int igt_phys_object(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	int err = -ENOMEM;
+
+	/* Create an object and bind it to a contiguous set of physical pages,
+	 * i.e. exercise the i915_gem_object_phys API.
+	 */
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		pr_err("i915_gem_object_create failed, err=%d\n", err);
+		goto out;
+	}
+
+	err = -EINVAL;
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
+	mutex_unlock(&i915->drm.struct_mutex);
+	if (err) {
+		pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
+		goto err;
+	}
+
+	if (obj->ops != &i915_gem_phys_ops) {
+		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
+		goto err;
+	}
+
+	if (!atomic_read(&obj->mm.pages_pin_count)) {
+		pr_err("i915_gem_object_attach_phys did not pin its phys pages\n");
+		goto err;
+	}
+
+	/* Make the object dirty so that put_pages must do copy back the data */
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	mutex_unlock(&i915->drm.struct_mutex);
+	if (err) {
+		pr_err("i915_gem_object_set_to_gtt_domain failed with err=%d\n",
+		       err);
+		goto err;
+	}
+
+	err = 0;
+err:
+	i915_gem_object_put(obj);
+out:
+	return err;
+}
+
+int i915_gem_object_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gem_object),
+		SUBTEST(igt_phys_object),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, i915);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index bda982404ad3..2ed94e3a71b7 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -12,3 +12,4 @@ selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
+selftest(objects, i915_gem_object_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 17/38] drm/i915: Add a live seftest for GEM objects
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (15 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 16/38] drm/i915: Add selftests for object allocation, phys Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 18/38] drm/i915: Test partial mappings Chris Wilson
                   ` (21 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Starting with a placeholder test just to reassure that we can create a
test object,

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_object.c   | 49 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 2 files changed, 50 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index dbb899ee24e0..f7c59d6581ea 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -100,6 +100,46 @@ static int igt_phys_object(void *arg)
 	return err;
 }
 
+static int igt_gem_huge(void *arg)
+{
+	const unsigned int nreal = 509; /* just to be awkward */
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	unsigned int n;
+	int err;
+
+	/* Basic sanitycheck of our huge fake object allocation */
+
+	obj = huge_gem_object(i915,
+			      nreal * PAGE_SIZE,
+			      i915->ggtt.base.total + PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err) {
+		pr_err("Failed to allocate %u pages (%zu total), err=%d\n",
+		       nreal, obj->base.size / PAGE_SIZE, err);
+		goto err;
+	}
+
+	for (n = 0; n < obj->base.size / PAGE_SIZE; n++) {
+		if (i915_gem_object_get_page(obj, n) !=
+		    i915_gem_object_get_page(obj, n % nreal)) {
+			pr_err("Page lookup mismatch at index %u [%u]\n",
+			       n, n % nreal);
+			err = -EINVAL;
+			goto err_unpin;
+		}
+	}
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
+err:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int i915_gem_object_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
@@ -118,3 +158,12 @@ int i915_gem_object_mock_selftests(void)
 	drm_dev_unref(&i915->drm);
 	return err;
 }
+
+int i915_gem_object_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gem_huge),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 09bf538826df..1822ac99d577 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -10,3 +10,4 @@
  */
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
 selftest(requests, i915_gem_request_live_selftests)
+selftest(object, i915_gem_object_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 18/38] drm/i915: Test partial mappings
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (16 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 17/38] drm/i915: Add a live seftest for GEM objects Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 19/38] drm/i915: Test exhaustion of the mmap space Chris Wilson
                   ` (20 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Create partial mappings to cover a large object, investigating tiling
(fenced regions) and VMA reuse.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_object.c | 277 +++++++++++++++++++++++
 1 file changed, 277 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index f7c59d6581ea..20dc4f6efc18 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -140,6 +140,282 @@ static int igt_gem_huge(void *arg)
 	return err;
 }
 
+struct tile {
+	unsigned int width;
+	unsigned int height;
+	unsigned int stride;
+	unsigned int size;
+	unsigned int tiling;
+	unsigned int swizzle;
+};
+
+static u64 swizzle_bit(unsigned int bit, u64 offset)
+{
+	return (offset & BIT_ULL(bit)) >> (bit - 6);
+}
+
+static u64 tiled_offset(const struct tile *tile, u64 v)
+{
+	u64 x, y;
+
+	if (tile->tiling == I915_TILING_NONE)
+		return v;
+
+	switch (tile->swizzle) {
+	case I915_BIT_6_SWIZZLE_9:
+		v ^= swizzle_bit(9, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_10:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(10, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_11:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(11, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_10_11:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(10, v) ^ swizzle_bit(11, v);
+		break;
+	}
+
+	y = div64_u64_rem(v, tile->stride, &x);
+	v = div64_u64_rem(y, tile->height, &y) * tile->stride * tile->height;
+
+	if (tile->tiling == I915_TILING_X) {
+		v += y * tile->width;
+		v += div64_u64_rem(x, tile->width, &x) << tile->size;
+		v += x;
+	} else {
+		const unsigned int ytile_span = 16;
+		const unsigned int ytile_height = 32 * ytile_span;
+
+		v += y * ytile_span;
+		v += div64_u64_rem(x, ytile_span, &x) * ytile_height;
+		v += x;
+	}
+
+	return v;
+}
+
+static int check_partial_mapping(struct drm_i915_gem_object *obj,
+				 const struct tile *tile,
+				 unsigned long end_time)
+{
+	const unsigned int nreal = obj->scratch / PAGE_SIZE;
+	const unsigned long npages = obj->base.size / PAGE_SIZE;
+	struct i915_vma *vma;
+	unsigned long page;
+	int err;
+
+	if (igt_timeout(end_time,
+			"%s: timed out before tiling=%d stride=%d\n",
+			__func__, tile->tiling, tile->stride))
+		return -EINTR;
+
+	err = i915_gem_object_set_tiling(obj, tile->tiling, tile->stride);
+	if (err)
+		return err;
+
+	GEM_BUG_ON(i915_gem_object_get_tiling(obj) != tile->tiling);
+	GEM_BUG_ON(i915_gem_object_get_stride(obj) != tile->stride);
+
+	for_each_prime_number_from(page, 1, npages) {
+		struct i915_ggtt_view view =
+			compute_partial_view(obj, page, MIN_CHUNK_PAGES);
+		u32 __iomem *io;
+		struct page *p;
+		unsigned int n;
+		u64 offset;
+		u32 *cpu;
+
+		GEM_BUG_ON(view.partial.size > nreal);
+
+		err = i915_gem_object_set_to_gtt_domain(obj, true);
+		if (err)
+			return err;
+
+		vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE);
+		if (IS_ERR(vma)) {
+			pr_err("Failed to pin partial view: offset=%lu\n",
+			       page);
+			return PTR_ERR(vma);
+		}
+
+		n = page - view.partial.offset;
+		GEM_BUG_ON(n >= view.partial.size);
+
+		io = i915_vma_pin_iomap(vma);
+		i915_vma_unpin(vma);
+		if (IS_ERR(io)) {
+			pr_err("Failed to iomap partial view: offset=%lu\n",
+			       page);
+			return PTR_ERR(io);
+		}
+
+		err = i915_vma_get_fence(vma);
+		if (err) {
+			pr_err("Failed to get fence for partial view: offset=%lu\n",
+			       page);
+			i915_vma_unpin_iomap(vma);
+			return err;
+		}
+
+		iowrite32(page, io + n * PAGE_SIZE/sizeof(*io));
+		i915_vma_unpin_iomap(vma);
+
+		offset = tiled_offset(tile, page << PAGE_SHIFT);
+		if (offset >= obj->base.size)
+			continue;
+
+		i915_gem_object_flush_gtt_write_domain(obj);
+
+		p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
+		cpu = kmap(p) + offset_in_page(offset);
+		drm_clflush_virt_range(cpu, sizeof(*cpu));
+		if (*cpu != (u32)page) {
+			pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
+			       page, n,
+			       view.partial.offset,
+			       view.partial.size,
+			       vma->size >> PAGE_SHIFT,
+			       tile_row_pages(obj),
+			       vma->fence ? vma->fence->id : -1, tile->tiling, tile->stride,
+			       offset >> PAGE_SHIFT,
+			       (unsigned int)offset_in_page(offset),
+			       offset,
+			       (u32)page, *cpu);
+			err = -EINVAL;
+		}
+		*cpu = 0;
+		drm_clflush_virt_range(cpu, sizeof(*cpu));
+		kunmap(p);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int igt_partial_tiling(void *arg)
+{
+	IGT_TIMEOUT(end);
+	const unsigned int nreal = 1 << 12; /* largest tile row x2 */
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct tile tile;
+	int err;
+
+	/* We want to check the page mapping and fencing of a large object
+	 * mmapped through the GTT. The object we create is larger than can
+	 * possibly be mmaped as a whole, and so we must use partial GGTT vma.
+	 * We then check that a write through each partial GGTT vma ends up
+	 * in the right set of pages within the object, and with the expected
+	 * tiling, which we verify by manual swizzling.
+	 */
+
+	obj = huge_gem_object(i915,
+			      nreal << PAGE_SHIFT,
+			      (1 + next_prime_number(i915->ggtt.base.total >> PAGE_SHIFT)) << PAGE_SHIFT);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err) {
+		pr_err("Failed to allocate %u pages (%zu total), err=%d\n",
+		       nreal, obj->base.size / PAGE_SIZE, err);
+		goto err;
+	}
+
+	tile.height = 1;
+	tile.width = 1;
+	tile.size = 0;
+	tile.stride = 0;
+	tile.swizzle = I915_BIT_6_SWIZZLE_NONE;
+	tile.tiling = I915_TILING_NONE;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = check_partial_mapping(obj, &tile, end);
+	if (err)
+		goto err_unlock;
+
+	for (tile.tiling = I915_TILING_X;
+	     tile.tiling <= I915_TILING_Y;
+	     tile.tiling++) {
+		unsigned int max_pitch;
+		unsigned int pitch;
+
+		switch (tile.tiling) {
+		case I915_TILING_X:
+			tile.swizzle = i915->mm.bit_6_swizzle_x;
+			break;
+		case I915_TILING_Y:
+			tile.swizzle = i915->mm.bit_6_swizzle_y;
+			break;
+		}
+
+		if (tile.swizzle == I915_BIT_6_SWIZZLE_UNKNOWN ||
+		    tile.swizzle == I915_BIT_6_SWIZZLE_9_10_17)
+			continue;
+
+		if (INTEL_GEN(i915) <= 2) {
+			tile.height = 16;
+			tile.width = 128;
+			tile.size = 11;
+		} else if (tile.tiling == I915_TILING_Y &&
+			   HAS_128_BYTE_Y_TILING(i915)) {
+			tile.height = 32;
+			tile.width = 128;
+			tile.size = 12;
+		} else {
+			tile.height = 8;
+			tile.width = 512;
+			tile.size = 12;
+		}
+
+		if (INTEL_GEN(i915) < 4)
+			max_pitch = 8192 / tile.width;
+		else if (INTEL_GEN(i915) < 7)
+			max_pitch = 128 * I965_FENCE_MAX_PITCH_VAL / tile.width;
+		else
+			max_pitch = 128 * GEN7_FENCE_MAX_PITCH_VAL / tile.width;
+
+		for (pitch = max_pitch; pitch; pitch >>= 1) {
+			tile.stride = tile.width * pitch;
+			err = check_partial_mapping(obj, &tile, end);
+			if (err)
+				goto err_unlock;
+
+			if (pitch > 2 && INTEL_GEN(i915) >= 4) {
+				tile.stride = tile.width * (pitch - 1);
+				err = check_partial_mapping(obj, &tile, end);
+				if (err)
+					goto err_unlock;
+			}
+
+			if (pitch < max_pitch && INTEL_GEN(i915) >= 4) {
+				tile.stride = tile.width * (pitch + 1);
+				err = check_partial_mapping(obj, &tile, end);
+				if (err)
+					goto err_unlock;
+			}
+		}
+
+		if (INTEL_GEN(i915) >= 4) {
+			for_each_prime_number(pitch, max_pitch) {
+				tile.stride = tile.width * pitch;
+				err = check_partial_mapping(obj, &tile, end);
+				if (err)
+					goto err_unlock;
+			}
+		}
+	}
+
+err_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	i915_gem_object_unpin_pages(obj);
+err:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int i915_gem_object_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
@@ -163,6 +439,7 @@ int i915_gem_object_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_gem_huge),
+		SUBTEST(igt_partial_tiling),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 19/38] drm/i915: Test exhaustion of the mmap space
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (17 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 18/38] drm/i915: Test partial mappings Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 20/38] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
                   ` (19 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

An unlikely error condition that we can simulate by stealing most of
the range before trying to insert new objects.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_object.c | 138 +++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index 20dc4f6efc18..b86276769e76 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -25,6 +25,7 @@
 #include "i915_selftest.h"
 
 #include "mock_gem_device.h"
+#include "huge_gem_object.h"
 
 static int igt_gem_object(void *arg)
 {
@@ -416,6 +417,142 @@ static int igt_partial_tiling(void *arg)
 	return err;
 }
 
+static int make_obj_busy(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *vma;
+	int err;
+
+	vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		return err;
+
+	rq = i915_gem_request_alloc(i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		i915_vma_unpin(vma);
+		return PTR_ERR(rq);
+	}
+
+	i915_vma_move_to_active(vma, rq, 0);
+	i915_add_request(rq);
+
+	i915_gem_object_set_active_reference(obj);
+	i915_vma_unpin(vma);
+	return 0;
+}
+
+static bool assert_mmap_offset(struct drm_i915_private *i915,
+			       unsigned long size,
+			       int expected)
+{
+	struct drm_i915_gem_object *obj;
+	int err;
+
+	obj = i915_gem_object_create_internal(i915, size);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_create_mmap_offset(obj);
+	i915_gem_object_put(obj);
+
+	return err == expected;
+}
+
+static int igt_mmap_offset_exhaustion(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_mm *mm = &i915->drm.vma_offset_manager->vm_addr_space_mm;
+	struct drm_i915_gem_object *obj;
+	struct drm_mm_node resv, *hole;
+	u64 hole_start, hole_end;
+	int loop, err;
+
+	/* Trim the device mmap space to only a page */
+	memset(&resv, 0, sizeof(resv));
+	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
+		resv.start = hole_start;
+		resv.size = hole_end - hole_start - 1; /* PAGE_SIZE units */
+		err = drm_mm_reserve_node(mm, &resv);
+		if (err) {
+			pr_err("Failed to trim VMA manager, err=%d\n", err);
+			return err;
+		}
+		break;
+	}
+
+	/* Just fits! */
+	if (!assert_mmap_offset(i915, PAGE_SIZE, 0)) {
+		pr_err("Unable to insert object into single page hole\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	/* Too large */
+	if (!assert_mmap_offset(i915, 2*PAGE_SIZE, -ENOSPC)) {
+		pr_err("Unexpectedly succeeded in inserting too large object into single page hole\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	/* Fill the hole, further allocation attempts should then fail */
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto err;
+	}
+
+	err = i915_gem_object_create_mmap_offset(obj);
+	if (err) {
+		pr_err("Unable to insert object into reclaimed hole\n");
+		goto err_obj;
+	}
+
+	if (!assert_mmap_offset(i915, PAGE_SIZE, -ENOSPC)) {
+		pr_err("Unexpectedly succeeded in inserting object into no holes!\n");
+		err = -EINVAL;
+		goto err_obj;
+	}
+
+	i915_gem_object_put(obj);
+
+	/* Now fill with busy dead objects that we expect to reap */
+	for (loop = 0; loop < 3; loop++) {
+		obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto err;
+		}
+
+		mutex_lock(&i915->drm.struct_mutex);
+		err = make_obj_busy(obj);
+		mutex_unlock(&i915->drm.struct_mutex);
+		if (err) {
+			pr_err("[loop %d] Failed to busy the object\n", loop);
+			goto err_obj;
+		}
+
+		GEM_BUG_ON(!i915_gem_object_is_active(obj));
+		err = i915_gem_object_create_mmap_offset(obj);
+		if (err) {
+			pr_err("[loop %d] i915_gem_object_create_mmap_offset failed with err=%d\n",
+			       loop, err);
+			goto err;
+		}
+	}
+
+err:
+	drm_mm_remove_node(&resv);
+	return err;
+err_obj:
+	i915_gem_object_put(obj);
+	goto err;
+}
+
 int i915_gem_object_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
@@ -440,6 +577,7 @@ int i915_gem_object_live_selftests(struct drm_i915_private *i915)
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_gem_huge),
 		SUBTEST(igt_partial_tiling),
+		SUBTEST(igt_mmap_offset_exhaustion),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 20/38] drm/i915: Test coherency of and barriers between cache domains
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (18 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 19/38] drm/i915: Test exhaustion of the mmap space Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 13:01   ` Matthew Auld
  2017-01-19 11:41 ` [PATCH v2 21/38] drm/i915: Move uncore selfchecks to live selftest infrastructure Chris Wilson
                   ` (18 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Write into an object using WB, WC, GTT, and GPU paths and make sure that
our internal API is sufficient to ensure coherent reads and writes.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                    |   1 +
 .../gpu/drm/i915/selftests/i915_gem_coherency.c    | 363 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 3 files changed, 365 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_coherency.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0772a4e0e3ef..2b6c0f9b02d0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4944,4 +4944,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #include "selftests/mock_gem_device.c"
 #include "selftests/huge_gem_object.c"
 #include "selftests/i915_gem_object.c"
+#include "selftests/i915_gem_coherency.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
new file mode 100644
index 000000000000..0a5ef721c501
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
@@ -0,0 +1,363 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/prime_numbers.h>
+
+#include "i915_selftest.h"
+#include "i915_random.h"
+
+static int cpu_set(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 v)
+{
+	unsigned int needs_clflush;
+	struct page *page;
+	typeof(v) *map;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_write(obj, &needs_clflush);
+	if (err)
+		return err;
+
+	page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
+	map = kmap_atomic(page);
+	if (needs_clflush & CLFLUSH_BEFORE)
+		clflush(map+offset_in_page(offset) / sizeof(*map));
+	map[offset_in_page(offset) / sizeof(*map)] = v;
+	if (needs_clflush & CLFLUSH_AFTER)
+		clflush(map+offset_in_page(offset) / sizeof(*map));
+	kunmap_atomic(map);
+
+	i915_gem_obj_finish_shmem_access(obj);
+	return 0;
+}
+
+static int cpu_get(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 *v)
+{
+	unsigned int needs_clflush;
+	struct page *page;
+	typeof(v) map;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_read(obj, &needs_clflush);
+	if (err)
+		return err;
+
+	page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
+	map = kmap_atomic(page);
+	if (needs_clflush & CLFLUSH_BEFORE)
+		clflush(map+offset_in_page(offset) / sizeof(*map));
+	*v = map[offset_in_page(offset) / sizeof(*map)];
+	kunmap_atomic(map);
+
+	i915_gem_obj_finish_shmem_access(obj);
+	return 0;
+}
+
+static int gtt_set(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 v)
+{
+	struct i915_vma *vma;
+	typeof(v) *map;
+	int err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		return err;
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	map = i915_vma_pin_iomap(vma);
+	i915_vma_unpin(vma);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	map[offset / sizeof(*map)] = v;
+	i915_vma_unpin_iomap(vma);
+
+	return 0;
+}
+
+static int gtt_get(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 *v)
+{
+	struct i915_vma *vma;
+	typeof(v) map;
+	int err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		return err;
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	map = i915_vma_pin_iomap(vma);
+	i915_vma_unpin(vma);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	*v = map[offset / sizeof(*map)];
+	i915_vma_unpin_iomap(vma);
+
+	return 0;
+}
+
+static int wc_set(struct drm_i915_gem_object *obj,
+		  unsigned long offset,
+		  u32 v)
+{
+	typeof(v) *map;
+	int err;
+
+	/* XXX GTT write followed by WC write go missing */
+	i915_gem_object_flush_gtt_write_domain(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		return err;
+
+	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	map[offset / sizeof(*map)] = v;
+	i915_gem_object_unpin_map(obj);
+
+	return 0;
+}
+
+static int wc_get(struct drm_i915_gem_object *obj,
+		  unsigned long offset,
+		  u32 *v)
+{
+	typeof(v) map;
+	int err;
+
+	/* XXX WC write followed by GTT write go missing */
+	i915_gem_object_flush_gtt_write_domain(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		return err;
+
+	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	*v = map[offset / sizeof(*map)];
+	i915_gem_object_unpin_map(obj);
+
+	return 0;
+}
+
+static int gpu_set(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 v)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *vma;
+	int err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		return err;
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	rq = i915_gem_request_alloc(i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		i915_vma_unpin(vma);
+		return PTR_ERR(rq);
+	}
+
+	err = intel_ring_begin(rq, 4);
+	if (err) {
+		__i915_add_request(rq, false);
+		i915_vma_unpin(vma);
+		return err;
+	}
+
+	if (INTEL_GEN(i915) >= 8) {
+		intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM_GEN4 | 1 << 22);
+		intel_ring_emit(rq->ring, lower_32_bits(i915_ggtt_offset(vma) + offset));
+		intel_ring_emit(rq->ring, upper_32_bits(i915_ggtt_offset(vma) + offset));
+		intel_ring_emit(rq->ring, v);
+	} else if (INTEL_GEN(i915) >= 4) {
+		intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM_GEN4 | 1 << 22);
+		intel_ring_emit(rq->ring, 0);
+		intel_ring_emit(rq->ring, i915_ggtt_offset(vma) + offset);
+		intel_ring_emit(rq->ring, v);
+	} else {
+		intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM | 1 << 22);
+		intel_ring_emit(rq->ring, i915_ggtt_offset(vma) + offset);
+		intel_ring_emit(rq->ring, v);
+		intel_ring_emit(rq->ring, MI_NOOP);
+	}
+	intel_ring_advance(rq->ring);
+
+	i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
+	i915_vma_unpin(vma);
+
+	reservation_object_lock(obj->resv, NULL);
+	reservation_object_add_excl_fence(obj->resv, &rq->fence);
+	reservation_object_unlock(obj->resv);
+
+	__i915_add_request(rq, true);
+
+	return 0;
+}
+
+static const struct igt_coherency_mode {
+	const char *name;
+	int (*set)(struct drm_i915_gem_object *, unsigned long offset, u32 v);
+	int (*get)(struct drm_i915_gem_object *, unsigned long offset, u32 *v);
+} igt_coherency_mode[] = {
+	{ "cpu", cpu_set, cpu_get },
+	{ "gtt", gtt_set, gtt_get },
+	{ "wc", wc_set, wc_get },
+	{ "gpu", gpu_set, NULL },
+	{ },
+};
+
+static int igt_gem_coherency(void *arg)
+{
+	const unsigned int ncachelines = PAGE_SIZE/64;
+	I915_RND_STATE(prng);
+	struct drm_i915_private *i915 = arg;
+	const struct igt_coherency_mode *read, *write, *over;
+	struct drm_i915_gem_object *obj;
+	unsigned long count, n;
+	u32 *offsets, *values;
+	int err;
+
+	/* We repeatedly write, overwrite and read from a sequence of
+	 * cachelines in order to try and detect incoherency (unflushed writes
+	 * from either the CPU or GPU). Each setter/getter uses our cache
+	 * domain API which should prevent incoherency.
+	 */
+
+	offsets = kmalloc_array(ncachelines, 2*sizeof(u32), GFP_KERNEL);
+	if (!offsets)
+		return -ENOMEM;
+	for (count = 0; count < ncachelines; count++)
+		offsets[count] = count * 64 + 4 * (count % 16);
+
+	values = offsets + ncachelines;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	for (over = igt_coherency_mode; over->name; over++) {
+		if (!over->set)
+			continue;
+
+		for (write = igt_coherency_mode; write->name; write++) {
+			if (!write->set)
+				continue;
+
+			for (read = igt_coherency_mode; read->name; read++) {
+				if (!read->get)
+					continue;
+
+				for_each_prime_number_from(count, 1, ncachelines) {
+					obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+					if (IS_ERR(obj)) {
+						err = PTR_ERR(obj);
+						goto unlock;
+					}
+
+					i915_random_reorder(offsets, ncachelines, &prng);
+					for (n = 0; n < count; n++)
+						values[n] = prandom_u32_state(&prng);
+
+					for (n = 0; n < count; n++) {
+						err = over->set(obj, offsets[n], ~values[n]);
+						if (err) {
+							pr_err("Failed to set stale value[%ld/%ld] in object using %s, err=%d\n",
+							       n, count, over->name, err);
+							goto unlock;
+						}
+					}
+
+					for (n = 0; n < count; n++) {
+						err = write->set(obj, offsets[n], values[n]);
+						if (err) {
+							pr_err("Failed to set value[%ld/%ld] in object using %s, err=%d\n",
+							       n, count, write->name, err);
+							goto unlock;
+						}
+					}
+
+					for (n = 0; n < count; n++) {
+						u32 found;
+
+						err = read->get(obj, offsets[n], &found);
+						if (err) {
+							pr_err("Failed to get value[%ld/%ld] in object using %s, err=%d\n",
+							       n, count, read->name, err);
+							goto unlock;
+						}
+
+						if (found != values[n]) {
+							pr_err("Value[%ld/%ld] mismatch, (overwrite with %s) wrote [%s] %x read [%s] %x (inverse %x), at offset %x\n",
+							       n, count, over->name,
+							       write->name, values[n],
+							       read->name, found,
+							       ~values[n], offsets[n]);
+							err = -EINVAL;
+							goto unlock;
+						}
+					}
+
+					__i915_gem_object_release_unless_active(obj);
+					obj = NULL;
+				}
+			}
+		}
+	}
+unlock:
+	if (obj)
+		__i915_gem_object_release_unless_active(obj);
+	mutex_unlock(&i915->drm.struct_mutex);
+	kfree(offsets);
+	return err;
+}
+
+int i915_gem_coherency_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gem_coherency),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 1822ac99d577..fde9ef22cfe8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -11,3 +11,4 @@
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
 selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
+selftest(coherency, i915_gem_coherency_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 21/38] drm/i915: Move uncore selfchecks to live selftest infrastructure
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (19 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 20/38] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 22/38] drm/i915: Test all fw tables during mock selftests Chris Wilson
                   ` (17 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Now that the kselftest infrastructure exists, put it to use and add to
it the existing consistency checks on the fw register lookup tables.

v2: s/tabke/table/

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/intel_uncore.c                | 52 +-----------
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 drivers/gpu/drm/i915/selftests/intel_uncore.c      | 99 ++++++++++++++++++++++
 3 files changed, 104 insertions(+), 48 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_uncore.c

diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index abe08885a5ba..b6ce8de2cc86 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -635,33 +635,6 @@ find_fw_domain(struct drm_i915_private *dev_priv, u32 offset)
 	return entry->domains;
 }
 
-static void
-intel_fw_table_check(struct drm_i915_private *dev_priv)
-{
-	const struct intel_forcewake_range *ranges;
-	unsigned int num_ranges;
-	s32 prev;
-	unsigned int i;
-
-	if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG))
-		return;
-
-	ranges = dev_priv->uncore.fw_domains_table;
-	if (!ranges)
-		return;
-
-	num_ranges = dev_priv->uncore.fw_domains_table_entries;
-
-	for (i = 0, prev = -1; i < num_ranges; i++, ranges++) {
-		WARN_ON_ONCE(IS_GEN9(dev_priv) &&
-			     (prev + 1) != (s32)ranges->start);
-		WARN_ON_ONCE(prev >= (s32)ranges->start);
-		prev = ranges->start;
-		WARN_ON_ONCE(prev >= (s32)ranges->end);
-		prev = ranges->end;
-	}
-}
-
 #define GEN_FW_RANGE(s, e, d) \
 	{ .start = (s), .end = (e), .domains = (d) }
 
@@ -700,23 +673,6 @@ static const i915_reg_t gen8_shadowed_regs[] = {
 	/* TODO: Other registers are not yet used */
 };
 
-static void intel_shadow_table_check(void)
-{
-	const i915_reg_t *reg = gen8_shadowed_regs;
-	s32 prev;
-	u32 offset;
-	unsigned int i;
-
-	if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG))
-		return;
-
-	for (i = 0, prev = -1; i < ARRAY_SIZE(gen8_shadowed_regs); i++, reg++) {
-		offset = i915_mmio_reg_offset(*reg);
-		WARN_ON_ONCE(prev >= (s32)offset);
-		prev = offset;
-	}
-}
-
 static int mmio_reg_cmp(u32 key, const i915_reg_t *reg)
 {
 	u32 offset = i915_mmio_reg_offset(*reg);
@@ -1445,10 +1401,6 @@ void intel_uncore_init(struct drm_i915_private *dev_priv)
 		break;
 	}
 
-	intel_fw_table_check(dev_priv);
-	if (INTEL_GEN(dev_priv) >= 8)
-		intel_shadow_table_check();
-
 	if (intel_vgpu_active(dev_priv)) {
 		ASSIGN_WRITE_MMIO_VFUNCS(vgpu);
 		ASSIGN_READ_MMIO_VFUNCS(vgpu);
@@ -1971,3 +1923,7 @@ intel_uncore_forcewake_for_reg(struct drm_i915_private *dev_priv,
 
 	return fw_domains;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_uncore.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index fde9ef22cfe8..c060bf24928e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -9,6 +9,7 @@
  * Tests are executed in order by igt/drv_selftest
  */
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
+selftest(uncore, intel_uncore_live_selftests)
 selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
new file mode 100644
index 000000000000..0ac467940a4f
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -0,0 +1,99 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+static int intel_fw_table_check(struct drm_i915_private *i915)
+{
+	const struct intel_forcewake_range *ranges;
+	unsigned int num_ranges, i;
+	s32 prev;
+
+	ranges = i915->uncore.fw_domains_table;
+	if (!ranges)
+		return 0;
+
+	num_ranges = i915->uncore.fw_domains_table_entries;
+	for (i = 0, prev = -1; i < num_ranges; i++, ranges++) {
+		/* Check that the table is watertight */
+		if (IS_GEN9(i915) && (prev + 1) != (s32)ranges->start) {
+			pr_err("%s: entry[%d]:(%x, %x) is not watertight to previous (%x)\n",
+			       __func__, i, ranges->start, ranges->end, prev);
+			return -EINVAL;
+		}
+
+		/* Check that the table never goes backwards */
+		if (prev >= (s32)ranges->start) {
+			pr_err("%s: entry[%d]:(%x, %x) is less than the previous (%x)\n",
+			       __func__, i, ranges->start, ranges->end, prev);
+			return -EINVAL;
+		}
+
+		/* Check that the entry is valid */
+		if (ranges->start >= ranges->end) {
+			pr_err("%s: entry[%d]:(%x, %x) has negative length\n",
+			       __func__, i, ranges->start, ranges->end);
+			return -EINVAL;
+		}
+
+		prev = ranges->end;
+	}
+
+	return 0;
+}
+
+static int intel_shadow_table_check(void)
+{
+	const i915_reg_t *reg = gen8_shadowed_regs;
+	unsigned int i;
+	s32 prev;
+
+	for (i = 0, prev = -1; i < ARRAY_SIZE(gen8_shadowed_regs); i++, reg++) {
+		u32 offset = i915_mmio_reg_offset(*reg);
+		if (prev >= (s32)offset) {
+			pr_err("%s: entry[%d]:(%x) is before previous (%x)\n",
+			       __func__, i, offset, prev);
+			return -EINVAL;
+		}
+
+		prev = offset;
+	}
+
+	return 0;
+}
+
+int intel_uncore_live_selftests(struct drm_i915_private *i915)
+{
+	int err;
+
+	err = intel_fw_table_check(i915);
+	if (err)
+		return err;
+
+	err = intel_shadow_table_check();
+	if (err)
+		return err;
+
+	return 0;
+}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 22/38] drm/i915: Test all fw tables during mock selftests
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (20 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 21/38] drm/i915: Move uncore selfchecks to live selftest infrastructure Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 23/38] drm/i915: Sanity check all registers for matching fw domains Chris Wilson
                   ` (16 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

In addition to just testing the fw table we load, during the initial
mock testing we can test that all tables are valid (so the testing is
not limited to just the platforms that load that particular table).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  1 +
 drivers/gpu/drm/i915/selftests/intel_uncore.c      | 49 ++++++++++++++++------
 2 files changed, 37 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 2ed94e3a71b7..c61e08de7913 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -10,6 +10,7 @@
  */
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
+selftest(uncore, intel_uncore_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
index 0ac467940a4f..c18fddb12d00 100644
--- a/drivers/gpu/drm/i915/selftests/intel_uncore.c
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -24,20 +24,16 @@
 
 #include "i915_selftest.h"
 
-static int intel_fw_table_check(struct drm_i915_private *i915)
+static int intel_fw_table_check(const struct intel_forcewake_range *ranges,
+				unsigned int num_ranges,
+				bool is_watertight)
 {
-	const struct intel_forcewake_range *ranges;
-	unsigned int num_ranges, i;
+	unsigned int i;
 	s32 prev;
 
-	ranges = i915->uncore.fw_domains_table;
-	if (!ranges)
-		return 0;
-
-	num_ranges = i915->uncore.fw_domains_table_entries;
 	for (i = 0, prev = -1; i < num_ranges; i++, ranges++) {
 		/* Check that the table is watertight */
-		if (IS_GEN9(i915) && (prev + 1) != (s32)ranges->start) {
+		if (is_watertight && (prev + 1) != (s32)ranges->start) {
 			pr_err("%s: entry[%d]:(%x, %x) is not watertight to previous (%x)\n",
 			       __func__, i, ranges->start, ranges->end, prev);
 			return -EINVAL;
@@ -83,15 +79,42 @@ static int intel_shadow_table_check(void)
 	return 0;
 }
 
-int intel_uncore_live_selftests(struct drm_i915_private *i915)
+int intel_uncore_mock_selftests(void)
 {
-	int err;
+	struct {
+		const struct intel_forcewake_range *ranges;
+		unsigned int num_ranges;
+		bool is_watertight;
+	} fw[] = {
+		{ __vlv_fw_ranges, ARRAY_SIZE(__vlv_fw_ranges), false },
+		{ __chv_fw_ranges, ARRAY_SIZE(__chv_fw_ranges), false },
+		{ __gen9_fw_ranges, ARRAY_SIZE(__gen9_fw_ranges), true },
+	};
+	int err, i;
+
+	for (i = 0; i < ARRAY_SIZE(fw); i++) {
+		err = intel_fw_table_check(fw[i].ranges,
+					   fw[i].num_ranges,
+					   fw[i].is_watertight);
+		if (err)
+			return err;
+	}
 
-	err = intel_fw_table_check(i915);
+	err = intel_shadow_table_check();
 	if (err)
 		return err;
 
-	err = intel_shadow_table_check();
+	return 0;
+}
+
+int intel_uncore_live_selftests(struct drm_i915_private *i915)
+{
+	int err;
+
+	/* Confirm the table we load is still valid */
+	err = intel_fw_table_check(i915->uncore.fw_domains_table,
+				   i915->uncore.fw_domains_table_entries,
+				   INTEL_GEN(i915) >= 9);
 	if (err)
 		return err;
 
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 23/38] drm/i915: Sanity check all registers for matching fw domains
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (21 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 22/38] drm/i915: Test all fw tables during mock selftests Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 24/38] drm/i915: Add some mock tests for dmabuf interop Chris Wilson
                   ` (15 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Add a late selftest that walks over all forcewake registers (those below
0x40000) and uses the mmio debug register to check to see if any are
unclaimed. This is possible if we fail to wake the appropriate
powerwells for the register.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/intel_uncore.c | 48 +++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
index c18fddb12d00..fba76fef4d55 100644
--- a/drivers/gpu/drm/i915/selftests/intel_uncore.c
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -107,6 +107,50 @@ int intel_uncore_mock_selftests(void)
 	return 0;
 }
 
+static int intel_uncore_check_forcewake_domains(struct drm_i915_private *dev_priv)
+{
+#define FW_RANGE 0x40000
+	unsigned long *valid;
+	u32 offset;
+	int err;
+
+	valid = kzalloc(BITS_TO_LONGS(FW_RANGE) * sizeof(*valid),
+			GFP_TEMPORARY);
+	if (!valid)
+		return -ENOMEM;
+
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
+	check_for_unclaimed_mmio(dev_priv);
+	for (offset = 0; offset < FW_RANGE; offset += 4) {
+		i915_reg_t reg = { offset };
+
+		(void)I915_READ_FW(reg);
+		if (!check_for_unclaimed_mmio(dev_priv))
+			set_bit(offset, valid);
+	}
+
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
+
+	err = 0;
+	for_each_set_bit(offset, valid, FW_RANGE) {
+		i915_reg_t reg = { offset };
+
+		intel_uncore_forcewake_reset(dev_priv, false);
+		check_for_unclaimed_mmio(dev_priv);
+
+		(void)I915_READ(reg);
+		if (check_for_unclaimed_mmio(dev_priv)) {
+			pr_err("Unclaimed mmio read to register 0x%04x\n",
+			       offset);
+			err = -EINVAL;
+		}
+	}
+
+	kfree(valid);
+	return err;
+}
+
 int intel_uncore_live_selftests(struct drm_i915_private *i915)
 {
 	int err;
@@ -118,5 +162,9 @@ int intel_uncore_live_selftests(struct drm_i915_private *i915)
 	if (err)
 		return err;
 
+	err = intel_uncore_check_forcewake_domains(i915);
+	if (err)
+		return err;
+
 	return 0;
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 24/38] drm/i915: Add some mock tests for dmabuf interop
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (22 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 23/38] drm/i915: Sanity check all registers for matching fw domains Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 25/38] drm/i915: Add initial selftests for i915_gem_gtt Chris Wilson
                   ` (14 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Check that we can create both dmabuf and objects from dmabuf.

v2: Cleanups, correct include, fix unpin on dead path and prevent
explosion on dmabuf init failure

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   5 +
 drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c   | 298 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/mock_dmabuf.c       | 176 ++++++++++++
 drivers/gpu/drm/i915/selftests/mock_dmabuf.h       |  41 +++
 5 files changed, 521 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_dmabuf.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_dmabuf.h

diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index d037adcda6f2..3e276eee0450 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -307,3 +307,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 
 	return ERR_PTR(ret);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_dmabuf.c"
+#include "selftests/i915_gem_dmabuf.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
new file mode 100644
index 000000000000..2f61274310da
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
@@ -0,0 +1,298 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+#include "mock_gem_device.h"
+#include "mock_dmabuf.h"
+
+static int igt_dmabuf_export(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	int err;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		err = PTR_ERR(dmabuf);
+		goto err;
+	}
+
+	err = 0;
+	dma_buf_put(dmabuf);
+err:
+	i915_gem_object_put(obj);
+	return err;
+}
+
+static int igt_dmabuf_import_self(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct drm_gem_object *import;
+	struct dma_buf *dmabuf;
+	int err;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		err = PTR_ERR(dmabuf);
+		goto err;
+	}
+
+	import = i915_gem_prime_import(&i915->drm, dmabuf);
+	if (IS_ERR(import)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(import));
+		err = PTR_ERR(import);
+		goto err_dmabuf;
+	}
+
+	if (import != &obj->base) {
+		pr_err("i915_gem_prime_import created a new object!\n");
+		err = -EINVAL;
+		goto err_import;
+	}
+
+	err = 0;
+err_import:
+	i915_gem_object_put(to_intel_bo(import));
+err_dmabuf:
+	dma_buf_put(dmabuf);
+err:
+	i915_gem_object_put(obj);
+	return err;
+}
+
+static int igt_dmabuf_import(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	void *obj_map, *dma_map;
+	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
+	int err, i;
+
+	dmabuf = mock_dmabuf(1);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
+	if (IS_ERR(obj)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(obj));
+		err = PTR_ERR(obj);
+		goto err_dmabuf;
+	}
+
+	if (obj->base.dev != &i915->drm) {
+		pr_err("i915_gem_prime_import created a non-i915 object!\n");
+		err = -EINVAL;
+		goto err_obj;
+	}
+
+	if (obj->base.size != PAGE_SIZE) {
+		pr_err("i915_gem_prime_import is wrong size found %lld, expected %ld\n",
+		       (long long)obj->base.size, PAGE_SIZE);
+		err = -EINVAL;
+		goto err_obj;
+	}
+
+	dma_map = dma_buf_vmap(dmabuf);
+	if (!dma_map) {
+		pr_err("dma_buf_vmap failed\n");
+		err = -ENOMEM;
+		goto err_obj;
+	}
+
+	if (0) { /* Can not yet map dmabuf */
+		obj_map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		if (IS_ERR(obj_map)) {
+			err = PTR_ERR(obj_map);
+			pr_err("i915_gem_object_pin_map failed with err=%d\n", err);
+			goto err_dma_map;
+		}
+
+		for (i = 0; i < ARRAY_SIZE(pattern); i++) {
+			memset(dma_map, pattern[i], PAGE_SIZE);
+			if (memchr_inv(obj_map, pattern[i], PAGE_SIZE)) {
+				err = -EINVAL;
+				pr_err("imported vmap not all set to %x!\n", pattern[i]);
+				i915_gem_object_unpin_map(obj);
+				goto err_dma_map;
+			}
+		}
+
+		for (i = 0; i < ARRAY_SIZE(pattern); i++) {
+			memset(obj_map, pattern[i], PAGE_SIZE);
+			if (memchr_inv(dma_map, pattern[i], PAGE_SIZE)) {
+				err = -EINVAL;
+				pr_err("exported vmap not all set to %x!\n", pattern[i]);
+				i915_gem_object_unpin_map(obj);
+				goto err_dma_map;
+			}
+		}
+
+		i915_gem_object_unpin_map(obj);
+	}
+
+	err = 0;
+err_dma_map:
+	dma_buf_vunmap(dmabuf, dma_map);
+err_obj:
+	i915_gem_object_put(obj);
+err_dmabuf:
+	dma_buf_put(dmabuf);
+	return err;
+}
+
+static int igt_dmabuf_import_ownership(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	void *ptr;
+	int err;
+
+	dmabuf = mock_dmabuf(1);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	ptr = dma_buf_vmap(dmabuf);
+	if (!ptr) {
+		pr_err("dma_buf_vmap failed\n");
+		err = -ENOMEM;
+		goto err_dmabuf;
+	}
+
+	memset(ptr, 0xc5, PAGE_SIZE);
+	dma_buf_vunmap(dmabuf, ptr);
+
+	obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
+	if (IS_ERR(obj)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(obj));
+		err = PTR_ERR(obj);
+		goto err_dmabuf;
+	}
+
+	dma_buf_put(dmabuf);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err) {
+		pr_err("i915_gem_object_pin_pages failed with err=%d\n", err);
+		goto err_obj;
+	}
+
+	err = 0;
+	i915_gem_object_unpin_pages(obj);
+err_obj:
+	i915_gem_object_put(obj);
+	return err;
+
+err_dmabuf:
+	dma_buf_put(dmabuf);
+	return err;
+}
+
+static int igt_dmabuf_export_vmap(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	void *ptr;
+	int err;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		err = PTR_ERR(dmabuf);
+		goto err_obj;
+	}
+	i915_gem_object_put(obj);
+
+	ptr = dma_buf_vmap(dmabuf);
+	if (IS_ERR(ptr)) {
+		err = PTR_ERR(ptr);
+		pr_err("dma_buf_vmap failed with err=%d\n", err);
+		goto err;
+	}
+
+	if (memchr_inv(ptr, 0, dmabuf->size)) {
+		pr_err("Exported object not initialiased to zero!\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	memset(ptr, 0xc5, dmabuf->size);
+
+	err = 0;
+	dma_buf_vunmap(dmabuf, ptr);
+err:
+	dma_buf_put(dmabuf);
+	return err;
+
+err_obj:
+	i915_gem_object_put(obj);
+	return err;
+}
+
+int i915_gem_dmabuf_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_dmabuf_export),
+		SUBTEST(igt_dmabuf_import_self),
+		SUBTEST(igt_dmabuf_import),
+		SUBTEST(igt_dmabuf_import_ownership),
+		SUBTEST(igt_dmabuf_export_vmap),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, i915);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index c61e08de7913..955a4d6ccdaf 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -14,3 +14,4 @@ selftest(uncore, intel_uncore_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
+selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/mock_dmabuf.c b/drivers/gpu/drm/i915/selftests/mock_dmabuf.c
new file mode 100644
index 000000000000..99da8f4ef497
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_dmabuf.c
@@ -0,0 +1,176 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_dmabuf.h"
+
+static struct sg_table *mock_map_dma_buf(struct dma_buf_attachment *attachment,
+					 enum dma_data_direction dir)
+{
+	struct mock_dmabuf *mock = to_mock(attachment->dmabuf);
+	struct sg_table *st;
+	struct scatterlist *sg;
+	int i, err;
+
+	st = kmalloc(sizeof(*st), GFP_KERNEL);
+	if (!st)
+		return ERR_PTR(-ENOMEM);
+
+	err = sg_alloc_table(st, mock->npages, GFP_KERNEL);
+	if (err)
+		goto err_free;
+
+	sg = st->sgl;
+	for (i = 0; i < mock->npages; i++) {
+		sg_set_page(sg, mock->pages[i], PAGE_SIZE, 0);
+		sg = sg_next(sg);
+	}
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+		err = -ENOMEM;
+		goto err_st;
+	}
+
+	return st;
+
+err_st:
+	sg_free_table(st);
+err_free:
+	kfree(st);
+	return ERR_PTR(err);
+}
+
+static void mock_unmap_dma_buf(struct dma_buf_attachment *attachment,
+			       struct sg_table *st,
+			       enum dma_data_direction dir)
+{
+	dma_unmap_sg(attachment->dev, st->sgl, st->nents, dir);
+	sg_free_table(st);
+	kfree(st);
+}
+
+static void mock_dmabuf_release(struct dma_buf *dma_buf)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+	int i;
+
+	for (i = 0; i < mock->npages; i++)
+		put_page(mock->pages[i]);
+
+	kfree(mock);
+}
+
+static void *mock_dmabuf_vmap(struct dma_buf *dma_buf)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return vm_map_ram(mock->pages, mock->npages, 0, PAGE_KERNEL);
+}
+
+static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	vm_unmap_ram(vaddr, mock->npages);
+}
+
+static void *mock_dmabuf_kmap_atomic(struct dma_buf *dma_buf, unsigned long page_num)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return kmap_atomic(mock->pages[page_num]);
+}
+
+static void mock_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, unsigned long page_num, void *addr)
+{
+	kunmap_atomic(addr);
+}
+
+static void *mock_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return kmap(mock->pages[page_num]);
+}
+
+static void mock_dmabuf_kunmap(struct dma_buf *dma_buf, unsigned long page_num, void *addr)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return kunmap(mock->pages[page_num]);
+}
+
+static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
+{
+	return -ENODEV;
+}
+
+static const struct dma_buf_ops mock_dmabuf_ops =  {
+	.map_dma_buf = mock_map_dma_buf,
+	.unmap_dma_buf = mock_unmap_dma_buf,
+	.release = mock_dmabuf_release,
+	.kmap = mock_dmabuf_kmap,
+	.kmap_atomic = mock_dmabuf_kmap_atomic,
+	.kunmap = mock_dmabuf_kunmap,
+	.kunmap_atomic = mock_dmabuf_kunmap_atomic,
+	.mmap = mock_dmabuf_mmap,
+	.vmap = mock_dmabuf_vmap,
+	.vunmap = mock_dmabuf_vunmap,
+};
+
+static struct dma_buf *mock_dmabuf(int npages)
+{
+	struct mock_dmabuf *mock;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int i;
+
+	mock = kmalloc(sizeof(*mock) + npages * sizeof(struct page *),
+		       GFP_KERNEL);
+	if (!mock)
+		return ERR_PTR(-ENOMEM);
+
+	mock->npages = npages;
+	for (i = 0; i < npages; i++) {
+		mock->pages[i] = alloc_page(GFP_KERNEL);
+		if (!mock->pages[i])
+			goto err;
+	}
+
+	exp_info.ops = &mock_dmabuf_ops;
+	exp_info.size = npages * PAGE_SIZE;
+	exp_info.flags = O_CLOEXEC;
+	exp_info.priv = mock;
+
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf))
+		goto err;
+
+	return dmabuf;
+
+err:
+	while (i--)
+		put_page(mock->pages[i]);
+	kfree(mock);
+	return ERR_PTR(-ENOMEM);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_dmabuf.h b/drivers/gpu/drm/i915/selftests/mock_dmabuf.h
new file mode 100644
index 000000000000..ec80613159b9
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_dmabuf.h
@@ -0,0 +1,41 @@
+
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_DMABUF_H__
+#define __MOCK_DMABUF_H__
+
+#include <linux/dma-buf.h>
+
+struct mock_dmabuf {
+	int npages;
+	struct page *pages[];
+};
+
+static struct mock_dmabuf *to_mock(struct dma_buf *buf)
+{
+	return buf->priv;
+}
+
+#endif /* !__MOCK_DMABUF_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 25/38] drm/i915: Add initial selftests for i915_gem_gtt
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (23 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 24/38] drm/i915: Add some mock tests for dmabuf interop Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 26/38] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
                   ` (13 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Simple starting point for adding selftests for i915_gem_gtt, first
try creating a ppGTT and filling it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  1 +
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      | 97 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 3 files changed, 99 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 0d5d2b6ca723..2675cb245618 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3745,4 +3745,5 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_gtt.c"
+#include "selftests/i915_gem_gtt.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
new file mode 100644
index 000000000000..2559600c4755
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -0,0 +1,97 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+static int igt_ppgtt_alloc(void *arg)
+{
+	struct drm_i915_private *dev_priv = arg;
+	struct i915_hw_ppgtt *ppgtt;
+	u64 size, last;
+	int err;
+
+	/* Allocate a ppggt and try to fill the entire range */
+
+	if (!USES_PPGTT(dev_priv))
+		return 0;
+
+	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+	if (!ppgtt)
+		return -ENOMEM;
+
+	err = __hw_ppgtt_init(ppgtt, dev_priv);
+	if (err)
+		goto err_ppgtt;
+
+	if (!ppgtt->base.allocate_va_range)
+		goto err_ppgtt_cleanup;
+
+	/* Check we can allocate the entire range */
+	for (size = 4096;
+	     size <= ppgtt->base.total;
+	     size <<= 2) {
+		err = ppgtt->base.allocate_va_range(&ppgtt->base, 0, size);
+		if (err) {
+			if (err == -ENOMEM) {
+				pr_info("[1] Ran out of memory for va_range [0 + %llx] [bit %d]\n",
+					size, ilog2(size));
+				err = 0; /* virtual space too large! */
+			}
+			goto err_ppgtt_cleanup;
+		}
+
+		ppgtt->base.clear_range(&ppgtt->base, 0, size);
+	}
+
+	/* Check we can incrementally allocate the entire range */
+	for (last = 0, size = 4096;
+	     size <= ppgtt->base.total;
+	     last = size, size <<= 2) {
+		err = ppgtt->base.allocate_va_range(&ppgtt->base,
+						    last, size - last);
+		if (err) {
+			if (err == -ENOMEM) {
+				pr_info("[2] Ran out of memory for va_range [%llx + %llx] [bit %d]\n",
+					last, size - last, ilog2(size));
+				err = 0; /* virtual space too large! */
+			}
+			goto err_ppgtt_cleanup;
+		}
+	}
+
+err_ppgtt_cleanup:
+	ppgtt->base.cleanup(&ppgtt->base);
+err_ppgtt:
+	kfree(ppgtt);
+	return err;
+}
+
+int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_ppgtt_alloc),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index c060bf24928e..94517ad6dbd1 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -13,3 +13,4 @@ selftest(uncore, intel_uncore_live_selftests)
 selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
+selftest(gtt, i915_gem_gtt_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 26/38] drm/i915: Exercise filling the top/bottom portions of the ppgtt
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (24 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 25/38] drm/i915: Add initial selftests for i915_gem_gtt Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-31 12:32   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 27/38] drm/i915: Exercise filling the top/bottom portions of the global GTT Chris Wilson
                   ` (12 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Allocate objects with varying number of pages (which should hopefully
consist of a mixture of contiguous page chunks and so coalesced sg
lists) and check that the sg walkers in insert_pages cope.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 174 ++++++++++++++++++++++++++
 1 file changed, 174 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 2559600c4755..98c23a585ed3 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -22,7 +22,11 @@
  *
  */
 
+#include <linux/prime_numbers.h>
+
 #include "i915_selftest.h"
+#include "mock_drm.h"
+#include "huge_gem_object.h"
 
 static int igt_ppgtt_alloc(void *arg)
 {
@@ -87,10 +91,180 @@ static int igt_ppgtt_alloc(void *arg)
 	return err;
 }
 
+static void close_object_list(struct list_head *objects,
+			      struct i915_address_space *vm)
+{
+	struct drm_i915_gem_object *obj, *on;
+
+	list_for_each_entry_safe(obj, on, objects, batch_pool_link) {
+		struct i915_vma *vma;
+
+		vma = i915_vma_instance(obj, vm, NULL);
+		if (!IS_ERR(vma))
+			i915_vma_close(vma);
+
+		list_del(&obj->batch_pool_link);
+		i915_gem_object_put(obj);
+	}
+}
+
+static int fill_hole(struct drm_i915_private *i915,
+		     struct i915_address_space *vm,
+		     u64 hole_start, u64 hole_end,
+		     unsigned long end_time)
+{
+	const u64 hole_size = hole_end - hole_start;
+	struct drm_i915_gem_object *obj;
+	const unsigned long max_pages =
+		min_t(u64, 1 << 20, hole_size/2 >> PAGE_SHIFT);
+	unsigned long npages, prime;
+	struct i915_vma *vma;
+	LIST_HEAD(objects);
+	int err;
+
+	for_each_prime_number_from(prime, 2, 13) {
+		for (npages = 1; npages <= max_pages; npages *= prime) {
+			const u64 full_size = npages << PAGE_SHIFT;
+			const struct {
+				u64 base;
+				s64 step;
+				const char *name;
+			} phases[] = {
+				{
+					(hole_end - full_size) | PIN_OFFSET_FIXED | PIN_USER,
+					-full_size,
+					"top-down",
+				},
+				{
+					hole_start | PIN_OFFSET_FIXED | PIN_USER,
+					full_size,
+					"bottom-up",
+				},
+				{ }
+			}, *p;
+
+			GEM_BUG_ON(!full_size);
+			obj = huge_gem_object(i915, PAGE_SIZE, full_size);
+			if (IS_ERR(obj))
+				break;
+
+			list_add(&obj->batch_pool_link, &objects);
+
+			/* Align differing sized objects against the edges, and
+			 * check we don't walk off into the void when binding
+			 * them into the GTT.
+			 */
+			for (p = phases; p->name; p++) {
+				u64 flags;
+
+				flags = p->base;
+				list_for_each_entry(obj, &objects, batch_pool_link) {
+					vma = i915_vma_instance(obj, vm, NULL);
+					if (IS_ERR(vma))
+						continue;
+
+					err = i915_vma_pin(vma, 0, 0, flags);
+					if (err) {
+						pr_err("Fill %s pin failed with err=%d on size=%lu pages (prime=%lu), flags=%llx\n", p->name, err, npages, prime, flags);
+						goto err;
+					}
+
+					i915_vma_unpin(vma);
+
+					flags += p->step;
+					if (flags < hole_start ||
+					    flags > hole_end)
+						break;
+				}
+
+				flags = p->base;
+				list_for_each_entry(obj, &objects, batch_pool_link) {
+					vma = i915_vma_instance(obj, vm, NULL);
+					if (IS_ERR(vma))
+						continue;
+
+					if (!drm_mm_node_allocated(&vma->node) ||
+					    i915_vma_misplaced(vma, 0, 0, flags)) {
+						pr_err("Fill %s moved vma.node=%llx + %llx, expected offset %llx\n",
+						       p->name, vma->node.start, vma->node.size,
+						       flags & PAGE_MASK);
+						err = -EINVAL;
+						goto err;
+					}
+
+					err = i915_vma_unbind(vma);
+					if (err) {
+						pr_err("Fill %s unbind of vma.node=%llx + %llx failed with err=%d\n",
+						       p->name, vma->node.start, vma->node.size,
+						       err);
+						goto err;
+					}
+
+					flags += p->step;
+					if (flags < hole_start ||
+					    flags > hole_end)
+						break;
+				}
+			}
+
+			if (igt_timeout(end_time, "Fill timed out (npages=%lu, prime=%lu)\n",
+					npages, prime)) {
+				err = -EINTR;
+				goto err;
+			}
+		}
+
+		close_object_list(&objects, vm);
+	}
+
+	return 0;
+
+err:
+	close_object_list(&objects, vm);
+	return err;
+}
+
+static int igt_ppgtt_fill(void *arg)
+{
+	struct drm_i915_private *dev_priv = arg;
+	struct drm_file *file;
+	struct i915_hw_ppgtt *ppgtt;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	/* Try binding many VMA working outwards from either edge */
+
+	if (!USES_FULL_PPGTT(dev_priv))
+		return 0;
+
+	file = mock_file(dev_priv);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, file->driver_priv, "mock");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto out_unlock;
+	}
+	GEM_BUG_ON(offset_in_page(ppgtt->base.total));
+
+	err = fill_hole(dev_priv, &ppgtt->base, 0, ppgtt->base.total, end_time);
+
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+out_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	mock_file_free(dev_priv, file);
+	return err;
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
+		SUBTEST(igt_ppgtt_fill),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 27/38] drm/i915: Exercise filling the top/bottom portions of the global GTT
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (25 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 26/38] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 28/38] drm/i915: Fill different pages of the GTT Chris Wilson
                   ` (11 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Same test as previously for the per-process GTT instead applied to the
global GTT.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 36 ++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 98c23a585ed3..7a98cf79173f 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -100,7 +100,8 @@ static void close_object_list(struct list_head *objects,
 		struct i915_vma *vma;
 
 		vma = i915_vma_instance(obj, vm, NULL);
-		if (!IS_ERR(vma))
+		/* Only ppgtt vma may be closed before the object is freed */
+		if (!IS_ERR(vma) && !i915_vma_is_ggtt(vma))
 			i915_vma_close(vma);
 
 		list_del(&obj->batch_pool_link);
@@ -260,12 +261,45 @@ static int igt_ppgtt_fill(void *arg)
 	return err;
 }
 
+static int igt_ggtt_fill(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	u64 hole_start, hole_end;
+	struct drm_mm_node *node;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	/* Try binding many VMA working outwards from either edge */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	drm_mm_for_each_hole(node, &ggtt->base.mm, hole_start, hole_end) {
+		if (ggtt->base.mm.color_adjust)
+			ggtt->base.mm.color_adjust(node, 0,
+						   &hole_start, &hole_end);
+		if (hole_start >= hole_end)
+			continue;
+
+		err = fill_hole(i915, &ggtt->base,
+				hole_start, hole_end,
+				end_time);
+		if (err)
+			break;
+	}
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return err;
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
 		SUBTEST(igt_ppgtt_fill),
+		SUBTEST(igt_ggtt_fill),
 	};
 
+	GEM_BUG_ON(offset_in_page(i915->ggtt.base.total));
+
 	return i915_subtests(tests, i915);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 28/38] drm/i915: Fill different pages of the GTT
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (26 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 27/38] drm/i915: Exercise filling the top/bottom portions of the global GTT Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 29/38] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
                   ` (10 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Exercise filling different pages of the GTT

v2: Walk all holes until we timeout

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 123 ++++++++++++++++++++++++++
 1 file changed, 123 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 7a98cf79173f..81aa2abddb68 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -225,6 +225,61 @@ static int fill_hole(struct drm_i915_private *i915,
 	return err;
 }
 
+static int walk_hole(struct drm_i915_private *i915,
+		     struct i915_address_space *vm,
+		     u64 hole_start, u64 hole_end,
+		     unsigned long end_time)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	u64 addr;
+	int err;
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto err;
+	}
+
+	for (addr = hole_start; addr < hole_end; addr += PAGE_SIZE) {
+		err = i915_vma_pin(vma, 0, 0,
+				   addr | PIN_OFFSET_FIXED | PIN_USER);
+		if (err) {
+			pr_err("Walk bind failed at %llx with err=%d\n",
+			       addr, err);
+			break;
+		}
+		i915_vma_unpin(vma);
+
+		if (!drm_mm_node_allocated(&vma->node) ||
+		    i915_vma_misplaced(vma, 0, 0, addr | PIN_OFFSET_FIXED)) {
+			pr_err("Walk incorrect at %llx\n", addr);
+			err = -EINVAL;
+			break;
+		}
+
+		err = i915_vma_unbind(vma);
+		if (err) {
+			pr_err("Walk unbind failed at %llx with err=%d\n",
+			       addr, err);
+			break;
+		}
+
+		if (igt_timeout(end_time, "Walk timed out at %llx\n", addr))
+			break;
+	}
+
+	if (!i915_vma_is_ggtt(vma))
+		i915_vma_close(vma);
+err:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 static int igt_ppgtt_fill(void *arg)
 {
 	struct drm_i915_private *dev_priv = arg;
@@ -261,6 +316,42 @@ static int igt_ppgtt_fill(void *arg)
 	return err;
 }
 
+static int igt_ppgtt_walk(void *arg)
+{
+	struct drm_i915_private *dev_priv = arg;
+	struct drm_file *file;
+	struct i915_hw_ppgtt *ppgtt;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	/* Try binding a single VMA in different positions along the ppgtt */
+
+	if (!USES_FULL_PPGTT(dev_priv))
+		return 0;
+
+	file = mock_file(dev_priv);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, file->driver_priv, "mock");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto err_unlock;
+	}
+	GEM_BUG_ON(offset_in_page(ppgtt->base.total));
+
+	err = walk_hole(dev_priv, &ppgtt->base, 0, ppgtt->base.total, end_time);
+
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+err_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	mock_file_free(dev_priv, file);
+	return err;
+}
+
 static int igt_ggtt_fill(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -291,11 +382,43 @@ static int igt_ggtt_fill(void *arg)
 	return err;
 }
 
+static int igt_ggtt_walk(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	u64 hole_start, hole_end;
+	struct drm_mm_node *node;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	/* Try binding a single VMA in different positions along the ggtt */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	drm_mm_for_each_hole(node, &ggtt->base.mm, hole_start, hole_end) {
+		if (ggtt->base.mm.color_adjust)
+			ggtt->base. mm.color_adjust(node, 0,
+						    &hole_start, &hole_end);
+		if (hole_end <= hole_start)
+			continue;
+
+		err = walk_hole(i915, &ggtt->base,
+				hole_start, hole_end,
+				end_time);
+		if (err)
+			break;
+	}
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return err;
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
+		SUBTEST(igt_ppgtt_walk),
 		SUBTEST(igt_ppgtt_fill),
+		SUBTEST(igt_ggtt_walk),
 		SUBTEST(igt_ggtt_fill),
 	};
 
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 29/38] drm/i915: Exercise filling and removing random ranges from the live GTT
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (27 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 28/38] drm/i915: Fill different pages of the GTT Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-20 10:39   ` Matthew Auld
  2017-01-19 11:41 ` [PATCH v2 30/38] drm/i915: Test creation of VMA Chris Wilson
                   ` (9 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Test the low-level i915_address_space interfaces to sanity check the
live insertion/removal of address ranges.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 151 ++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 81aa2abddb68..28915e4225e3 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -25,6 +25,7 @@
 #include <linux/prime_numbers.h>
 
 #include "i915_selftest.h"
+#include "i915_random.h"
 #include "mock_drm.h"
 #include "huge_gem_object.h"
 
@@ -280,6 +281,86 @@ static int walk_hole(struct drm_i915_private *i915,
 	return err;
 }
 
+static int fill_random_hole(struct drm_i915_private *i915,
+			    struct i915_address_space *vm,
+			    u64 hole_start, u64 hole_end,
+			    unsigned long end_time)
+{
+	I915_RND_STATE(seed_prng);
+	unsigned int size;
+
+	/* Keep creating larger objects until one cannot fit into the hole */
+	for (size = 12; (hole_end - hole_start) >> size; size++) {
+		I915_RND_SUBSTATE(prng, seed_prng);
+		struct drm_i915_gem_object *obj;
+		unsigned int *order, count, n;
+		u64 hole_size;
+
+		hole_size = (hole_end - hole_start) >> size;
+		if (hole_size > KMALLOC_MAX_SIZE / sizeof(u32))
+			hole_size = KMALLOC_MAX_SIZE / sizeof(u32);
+		count = hole_size;
+		do {
+			count >>= 1;
+			order = i915_random_order(count, &prng);
+		} while (!order && count);
+		if (!order)
+			break;
+
+		/* Ignore allocation failures (i.e. don't report them as
+		 * a test failure) as we are purposefully allocating very
+		 * larger objects without checking that we have sufficient
+		 * memory. We expect to hit ENOMEM.
+		 */
+
+		obj = huge_gem_object(i915, PAGE_SIZE, BIT_ULL(size));
+		if (IS_ERR(obj)) {
+			kfree(order);
+			break;
+		}
+
+		GEM_BUG_ON(obj->base.size != BIT_ULL(size));
+
+		if (i915_gem_object_pin_pages(obj)) {
+			i915_gem_object_put(obj);
+			kfree(order);
+			break;
+		}
+
+		for (n = 0; n < count; n++) {
+			if (vm->allocate_va_range &&
+			    vm->allocate_va_range(vm,
+						  order[n] * BIT_ULL(size),
+						  BIT_ULL(size)))
+				break;
+
+			vm->insert_entries(vm, obj->mm.pages,
+					   order[n] * BIT_ULL(size),
+					   I915_CACHE_NONE, 0);
+			if (igt_timeout(end_time,
+					"%s timed out after %d/%d\n",
+					__func__, n, count)) {
+				hole_start = hole_end; /* quit */
+				break;
+			}
+		}
+		count = n;
+
+		i915_random_reorder(order, count, &prng);
+		for (n = 0; n < count; n++)
+			vm->clear_range(vm,
+					order[n]* BIT_ULL(size),
+					BIT_ULL(size));
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+
+		kfree(order);
+	}
+
+	return 0;
+}
+
 static int igt_ppgtt_fill(void *arg)
 {
 	struct drm_i915_private *dev_priv = arg;
@@ -352,6 +433,44 @@ static int igt_ppgtt_walk(void *arg)
 	return err;
 }
 
+static int igt_ppgtt_drunk(void *arg)
+{
+	struct drm_i915_private *dev_priv = arg;
+	struct drm_file *file;
+	struct i915_hw_ppgtt *ppgtt;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	/* Try binding many VMA in a random pattern within the ppgtt */
+
+	if (!USES_FULL_PPGTT(dev_priv))
+		return 0;
+
+	file = mock_file(dev_priv);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, file->driver_priv, "mock");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto err_unlock;
+	}
+	GEM_BUG_ON(offset_in_page(ppgtt->base.total));
+
+	err = fill_random_hole(dev_priv, &ppgtt->base,
+			       0, ppgtt->base.total,
+			       end_time);
+
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+err_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	mock_file_free(dev_priv, file);
+	return err;
+}
+
 static int igt_ggtt_fill(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -412,12 +531,44 @@ static int igt_ggtt_walk(void *arg)
 	return err;
 }
 
+static int igt_ggtt_drunk(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	u64 hole_start, hole_end;
+	struct drm_mm_node *node;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	/* Try binding many VMA in a random pattern within the ggtt */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	drm_mm_for_each_hole(node, &ggtt->base.mm, hole_start, hole_end) {
+		if (ggtt->base.mm.color_adjust)
+			ggtt->base. mm.color_adjust(node, 0,
+						    &hole_start, &hole_end);
+		if (hole_start >= hole_end)
+			continue;
+
+		err = fill_random_hole(i915, &ggtt->base,
+				       hole_start, hole_end,
+				       end_time);
+		if (err)
+			break;
+	}
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return err;
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
+		SUBTEST(igt_ppgtt_drunk),
 		SUBTEST(igt_ppgtt_walk),
 		SUBTEST(igt_ppgtt_fill),
+		SUBTEST(igt_ggtt_drunk),
 		SUBTEST(igt_ggtt_walk),
 		SUBTEST(igt_ggtt_fill),
 	};
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 30/38] drm/i915: Test creation of VMA
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (28 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 29/38] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-31 10:50   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 31/38] drm/i915: Exercise i915_vma_pin/i915_vma_insert Chris Wilson
                   ` (8 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Simple test to exercise creation and lookup of VMA within an object.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_vma.c                    |   3 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/i915_vma.c          | 183 +++++++++++++++++++++
 3 files changed, 187 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_vma.c

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 635f2635b1f2..b11ed9b95bec 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -669,3 +669,6 @@ int i915_vma_unbind(struct i915_vma *vma)
 	return 0;
 }
 
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/i915_vma.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 955a4d6ccdaf..b450eab7e6e1 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -15,3 +15,4 @@ selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
 selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
+selftest(vma, i915_vma_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
new file mode 100644
index 000000000000..ad3454036aa5
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/prime_numbers.h>
+
+#include "i915_selftest.h"
+
+#include "mock_gem_device.h"
+#include "mock_context.h"
+
+static bool assert_vma(struct i915_vma *vma,
+		       struct drm_i915_gem_object *obj,
+		       struct i915_gem_context *ctx)
+{
+	if (vma->vm != &ctx->ppgtt->base) {
+		pr_err("VMA created with wrong VM\n");
+		return false;
+	}
+
+	if (vma->size != obj->base.size) {
+		pr_err("VMA created with wrong size, found %llu, expected %zu\n",
+		       vma->size, obj->base.size);
+		return false;
+	}
+
+	if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
+		pr_err("VMA created with wrong type [%d]\n",
+		       vma->ggtt_view.type);
+		return false;
+	}
+
+	return true;
+}
+
+static int create_vmas(struct drm_i915_private *i915,
+		       struct list_head *objects,
+		       struct list_head *contexts)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_context *ctx;
+	int pinned;
+
+	list_for_each_entry(obj, objects, batch_pool_link) {
+		for (pinned = 0; pinned <= 1; pinned++) {
+			list_for_each_entry(ctx, contexts, link) {
+				struct i915_address_space *vm =
+					&ctx->ppgtt->base;
+				struct i915_vma *vma;
+				int err;
+
+				vma = i915_vma_instance(obj, vm, NULL);
+				if (IS_ERR(vma))
+					return PTR_ERR(vma);
+
+				if (i915_vma_compare(vma, vm, NULL)) {
+					pr_err("i915_vma_compare failed!\n");
+					return -EINVAL;
+				}
+
+				if (!assert_vma(vma, obj, ctx)) {
+					pr_err("VMA lookup/create failed\n");
+					return -EINVAL;
+				}
+
+				if (!pinned) {
+					err = i915_vma_pin(vma, 0, 0, PIN_USER);
+					if (err) {
+						pr_err("Failed to pin VMA\n");
+						return err;
+					}
+				} else {
+					i915_vma_unpin(vma);
+				}
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int igt_vma_create(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *on;
+	struct i915_gem_context *ctx, *cn;
+	unsigned long num_obj, num_ctx;
+	unsigned long no, nc;
+	IGT_TIMEOUT(end_time);
+	LIST_HEAD(contexts);
+	LIST_HEAD(objects);
+	int err;
+
+	/* Exercise creating many vma amonst many objections, checking the
+	 * vma creation and lookup routines.
+	 */
+
+	no = 0;
+	for_each_prime_number(num_obj, ULONG_MAX - 1) {
+		for (; no < num_obj; no++) {
+			obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+			if (IS_ERR(obj))
+				goto err;
+
+			list_add(&obj->batch_pool_link, &objects);
+		}
+
+		nc = 0;
+		for_each_prime_number(num_ctx, MAX_CONTEXT_HW_ID) {
+			for (; nc < num_ctx; nc++) {
+				ctx = mock_context(i915, "mock");
+				if (!ctx)
+					goto err;
+
+				list_move(&ctx->link, &contexts);
+			}
+
+			err = create_vmas(i915, &objects, &contexts);
+			if (err)
+				goto err;
+
+			if (igt_timeout(end_time,
+					"%s timed out: after %lu objects\n",
+					__func__, no))
+				goto out;
+		}
+
+		list_for_each_entry_safe(ctx, cn, &contexts, link)
+			mock_context_close(ctx);
+	}
+
+out:
+	/* Final pass to lookup all created contexts */
+	err = create_vmas(i915, &objects, &contexts);
+err:
+	list_for_each_entry_safe(ctx, cn, &contexts, link)
+		mock_context_close(ctx);
+
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link)
+		i915_gem_object_put(obj);
+	return err;
+}
+
+int i915_vma_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_vma_create),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_subtests(tests, i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
+
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 31/38] drm/i915: Exercise i915_vma_pin/i915_vma_insert
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (29 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 30/38] drm/i915: Test creation of VMA Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA Chris Wilson
                   ` (7 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

High-level testing of the struct drm_mm by verifying our handling of
weird requests to i915_vma_pin.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_vma.c           |   4 +-
 drivers/gpu/drm/i915/i915_vma.h           |   4 +-
 drivers/gpu/drm/i915/selftests/i915_vma.c | 150 ++++++++++++++++++++++++++++++
 3 files changed, 154 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index b11ed9b95bec..017549913cdd 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -307,8 +307,8 @@ void i915_vma_unpin_and_release(struct i915_vma **p_vma)
 	__i915_gem_object_release_unless_active(obj);
 }
 
-bool
-i915_vma_misplaced(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
+bool i915_vma_misplaced(const struct i915_vma *vma,
+			u64 size, u64 alignment, u64 flags)
 {
 	if (!drm_mm_node_allocated(&vma->node))
 		return false;
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e39d922cfb6f..2e03f81dddbe 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -228,8 +228,8 @@ i915_vma_compare(struct i915_vma *vma,
 int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 		  u32 flags);
 bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long cache_level);
-bool
-i915_vma_misplaced(struct i915_vma *vma, u64 size, u64 alignment, u64 flags);
+bool i915_vma_misplaced(const struct i915_vma *vma,
+			u64 size, u64 alignment, u64 flags);
 void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
 int __must_check i915_vma_unbind(struct i915_vma *vma);
 void i915_vma_close(struct i915_vma *vma);
diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index ad3454036aa5..b45b392444e4 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -161,10 +161,160 @@ static int igt_vma_create(void *arg)
 	return err;
 }
 
+struct pin_mode {
+	u64 size;
+	u64 flags;
+	bool (*assert)(const struct i915_vma *,
+		       const struct pin_mode *mode,
+		       int result);
+	const char *string;
+};
+
+static bool assert_pin_valid(const struct i915_vma *vma,
+			     const struct pin_mode *mode,
+			     int result)
+{
+	if (result)
+		return false;
+
+	if (i915_vma_misplaced(vma, mode->size, 0, mode->flags))
+		return false;
+
+	return true;
+}
+
+__maybe_unused
+static bool assert_pin_e2big(const struct i915_vma *vma,
+			     const struct pin_mode *mode,
+			     int result)
+{
+	return result == -E2BIG;
+}
+
+__maybe_unused
+static bool assert_pin_enospc(const struct i915_vma *vma,
+			      const struct pin_mode *mode,
+			      int result)
+{
+	return result == -ENOSPC;
+}
+
+__maybe_unused
+static bool assert_pin_einval(const struct i915_vma *vma,
+			      const struct pin_mode *mode,
+			      int result)
+{
+	return result == -EINVAL;
+}
+
+static int igt_vma_pin1(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	const struct pin_mode modes[] = {
+#define VALID(sz, fl) { .size = (sz), .flags = (fl), .assert = assert_pin_valid, .string = #sz ", " #fl ", (valid) " }
+#define __INVALID(sz, fl, check, eval) { .size = (sz), .flags = (fl), .assert = (check), .string = #sz ", " #fl ", (invalid " #eval ")" }
+#define INVALID(sz, fl) __INVALID(sz, fl, assert_pin_einval, EINVAL)
+#define TOOBIG(sz, fl) __INVALID(sz, fl, assert_pin_e2big, E2BIG)
+#define NOSPACE(sz, fl) __INVALID(sz, fl, assert_pin_enospc, ENOSPC)
+		VALID(0, PIN_GLOBAL),
+		VALID(0, PIN_GLOBAL | PIN_MAPPABLE),
+
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | 4096),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | 8192),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+		VALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.base.total - 4096)),
+
+		VALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | (i915->ggtt.mappable_end - 4096)),
+		INVALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | i915->ggtt.mappable_end),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.base.total - 4096)),
+		INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | i915->ggtt.base.total),
+		INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | round_down(U64_MAX, PAGE_SIZE)),
+
+		VALID(4096, PIN_GLOBAL),
+		VALID(8192, PIN_GLOBAL),
+		VALID(i915->ggtt.mappable_end - 4096, PIN_GLOBAL | PIN_MAPPABLE),
+		VALID(i915->ggtt.mappable_end, PIN_GLOBAL | PIN_MAPPABLE),
+		TOOBIG(i915->ggtt.mappable_end + 4096, PIN_GLOBAL | PIN_MAPPABLE),
+		VALID(i915->ggtt.base.total - 4096, PIN_GLOBAL),
+		VALID(i915->ggtt.base.total, PIN_GLOBAL),
+		TOOBIG(i915->ggtt.base.total + 4096, PIN_GLOBAL),
+		TOOBIG(round_down(U64_MAX, PAGE_SIZE), PIN_GLOBAL),
+		INVALID(8192, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | (i915->ggtt.mappable_end - 4096)),
+		INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.base.total - 4096)),
+		INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (round_down(U64_MAX, PAGE_SIZE) - 4096)),
+
+		VALID(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+
+#if !IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
+		/* Misusing BIAS is a programming error (it is not controllable
+		 * from userspace) so when debugging is enabled, it explodes.
+		 * However, the tests are still quite interesting for checking
+		 * variable start, end and size.
+		 */
+		NOSPACE(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | i915->ggtt.mappable_end),
+		NOSPACE(0, PIN_GLOBAL | PIN_OFFSET_BIAS | i915->ggtt.base.total),
+		NOSPACE(8192, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+		NOSPACE(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.base.total - 4096)),
+#endif
+		{ },
+#undef NOSPACE
+#undef TOOBIG
+#undef INVALID
+#undef __INVALID
+#undef VALID
+	}, *m;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int err = -EINVAL;
+
+	/* Exercise all the weird and wonderful i915_vma_pin requests,
+	 * focusing on error handling of boundary conditions.
+	 */
+
+	GEM_BUG_ON(!drm_mm_clean(&i915->ggtt.base.mm));
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+	if (IS_ERR(vma))
+		goto err;
+
+	for (m = modes; m->assert; m++) {
+		err = i915_vma_pin(vma, m->size, 0, m->flags);
+		if (!m->assert(vma, m, err)) {
+			pr_err("%s to pin single page into GGTT with mode[%ld:%s]: size=%llx flags=%llx, err=%d\n",
+			       m->assert == assert_pin_valid ? "Failed" : "Unexpectedly succeeded",
+			       m - modes, m->string, m->size, m->flags, err);
+			if (!err)
+				i915_vma_unpin(vma);
+			err = -EINVAL;
+			goto err;
+		}
+
+		if (!err) {
+			i915_vma_unpin(vma);
+			err = i915_vma_unbind(vma);
+			if (err) {
+				pr_err("Failed to unbind single page from GGTT, err=%d\n", err);
+				goto err;
+			}
+		}
+	}
+
+	err = 0;
+err:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int i915_vma_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_vma_create),
+		SUBTEST(igt_vma_pin1),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (30 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 31/38] drm/i915: Exercise i915_vma_pin/i915_vma_insert Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-02-01 13:26   ` Matthew Auld
  2017-02-01 14:33   ` Tvrtko Ursulin
  2017-01-19 11:41 ` [PATCH v2 33/38] drm/i915: Test creation of partial VMA Chris Wilson
                   ` (6 subsequent siblings)
  38 siblings, 2 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Exercise creating rotated VMA and checking the page order within.

v2: Be more creative in rotated params

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_vma.c | 177 ++++++++++++++++++++++++++++++
 1 file changed, 177 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index b45b392444e4..2bda93f53b47 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -310,11 +310,188 @@ static int igt_vma_pin1(void *arg)
 	return err;
 }
 
+static unsigned long rotated_index(const struct intel_rotation_info *r,
+				   unsigned int n,
+				   unsigned int x,
+				   unsigned int y)
+{
+	return (r->plane[n].stride * (r->plane[n].height - y - 1) +
+		r->plane[n].offset + x);
+}
+
+static struct scatterlist *
+assert_rotated(struct drm_i915_gem_object *obj,
+	       const struct intel_rotation_info *r, unsigned int n,
+	       struct scatterlist *sg)
+{
+	unsigned int x, y;
+
+	for (x = 0; x < r->plane[n].width; x++) {
+		for (y = 0; y < r->plane[n].height; y++) {
+			unsigned long src_idx;
+			dma_addr_t src;
+
+			src_idx = rotated_index(r, n, x, y);
+			src = i915_gem_object_get_dma_address(obj, src_idx);
+
+			if (sg_dma_len(sg) != PAGE_SIZE) {
+				pr_err("Invalid sg.length, found %d, expected %lu for rotated page (%d, %d) [src index %lu]\n",
+				       sg_dma_len(sg), PAGE_SIZE,
+				       x, y, src_idx);
+				return ERR_PTR(-EINVAL);
+			}
+
+			if (sg_dma_address(sg) != src) {
+				pr_err("Invalid address for rotated page (%d, %d) [src index %lu]\n",
+				       x, y, src_idx);
+				return ERR_PTR(-EINVAL);
+			}
+
+			sg = ____sg_next(sg);
+		}
+	}
+
+	return sg;
+}
+
+static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
+				 const struct intel_rotation_plane_info *b)
+{
+	return a->width * a->height + b->width * b->height;
+}
+
+static int igt_vma_rotate(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	const struct intel_rotation_plane_info planes[] = {
+		{ .width = 1, .height = 1, .stride = 1 },
+		{ .width = 3, .height = 5, .stride = 4 },
+		{ .width = 5, .height = 3, .stride = 7 },
+		{ .width = 6, .height = 4, .stride = 6 },
+		{ }
+	}, *a, *b;
+	const unsigned int max_pages = 64;
+	int err = -ENOMEM;
+
+	/* Create VMA for many different combinations of planes and check
+	 * that the page layout within the rotated VMA match our expectations.
+	 */
+
+	obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
+	if (IS_ERR(obj))
+		goto err;
+
+	for (a = planes; a->width; a++) {
+		for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
+			struct i915_ggtt_view view;
+			struct scatterlist *sg;
+			unsigned int n, max_offset;
+
+			max_offset = max(a->stride * a->height,
+					 b->stride * b->height);
+			GEM_BUG_ON(max_offset >= max_pages);
+			max_offset = max_pages - max_offset;
+
+			view.type = I915_GGTT_VIEW_ROTATED;
+			view.rotated.plane[0] = *a;
+			view.rotated.plane[1] = *b;
+
+			for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
+				for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {
+					struct i915_address_space *vm =
+						&i915->ggtt.base;
+					struct i915_vma *vma;
+
+					vma = i915_vma_instance(obj, vm, &view);
+					if (IS_ERR(vma)) {
+						err = PTR_ERR(vma);
+						goto err_object;
+					}
+
+					if (!i915_vma_is_ggtt(vma) ||
+					    vma->vm != vm) {
+						pr_err("VMA is not in the GGTT!\n");
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					if (memcmp(&vma->ggtt_view, &view, sizeof(view))) {
+						pr_err("VMA mismatch upon creation!\n");
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					if (i915_vma_compare(vma,
+							     vma->vm,
+							     &vma->ggtt_view)) {
+						pr_err("VMA compmare failed with itself\n");
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+					if (err) {
+						pr_err("Failed to pin VMA, err=%d\n", err);
+						goto err_object;
+					}
+
+					if (vma->size != rotated_size(a, b) * PAGE_SIZE) {
+						pr_err("VMA is wrong size, expected %lu, found %llu\n",
+						       PAGE_SIZE * rotated_size(a, b), vma->size);
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					if (vma->node.size < vma->size) {
+						pr_err("VMA binding too small, expected %llu, found %llu\n",
+						       vma->size, vma->node.size);
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					if (vma->pages == obj->mm.pages) {
+						pr_err("VMA using unrotated object pages!\n");
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					sg = vma->pages->sgl;
+					for (n = 0; n < ARRAY_SIZE(view.rotated.plane); n++) {
+						sg = assert_rotated(obj, &view.rotated, n, sg);
+						if (IS_ERR(sg)) {
+							pr_err("Inconsistent VMA pages for plane %d: [(%d, %d, %d, %d), (%d, %d, %d, %d)]\n", n,
+							view.rotated.plane[0].width,
+							view.rotated.plane[0].height,
+							view.rotated.plane[0].stride,
+							view.rotated.plane[0].offset,
+							view.rotated.plane[1].width,
+							view.rotated.plane[1].height,
+							view.rotated.plane[1].stride,
+							view.rotated.plane[1].offset);
+							err = -EINVAL;
+							goto err_object;
+						}
+					}
+
+					i915_vma_unpin(vma);
+				}
+			}
+		}
+	}
+
+err_object:
+	i915_gem_object_put(obj);
+err:
+	return err;
+}
+
 int i915_vma_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_vma_create),
 		SUBTEST(igt_vma_pin1),
+		SUBTEST(igt_vma_rotate),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 33/38] drm/i915: Test creation of partial VMA
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (31 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-31 12:03   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 34/38] drm/i915: Live testing for context execution Chris Wilson
                   ` (5 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Mock testing to ensure we can create and lookup partial VMA.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_vma.c | 191 ++++++++++++++++++++++++++++++
 1 file changed, 191 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index 2bda93f53b47..8287a863ad17 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -486,12 +486,203 @@ static int igt_vma_rotate(void *arg)
 	return err;
 }
 
+static bool assert_partial(struct drm_i915_gem_object *obj,
+			   struct i915_vma *vma,
+			   unsigned long offset,
+			   unsigned long size)
+{
+	struct sgt_iter sgt;
+	dma_addr_t dma;
+
+	for_each_sgt_dma(dma, sgt, vma->pages) {
+		dma_addr_t src;
+
+		if (!size) {
+			pr_err("Partial scattergather list too long\n");
+			return false;
+		}
+
+		src = i915_gem_object_get_dma_address(obj, offset);
+		if (src != dma) {
+			pr_err("DMA mismatch for partial page offset %lu\n",
+			       offset);
+			return false;
+		}
+
+		offset++;
+		size--;
+	}
+
+	return true;
+}
+
+static int igt_vma_partial(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	const unsigned int npages = 1021; /* prime! */
+	struct drm_i915_gem_object *obj;
+	unsigned int sz, offset, loop;
+	struct i915_vma *vma;
+	int err = -ENOMEM;
+
+	/* Create lots of different VMA for the object and check that
+	 * we are returned the same VMA when we later request the same range.
+	 */
+
+	obj = i915_gem_object_create_internal(i915, npages*PAGE_SIZE);
+	if (IS_ERR(obj))
+		goto err;
+
+	for (loop = 0; loop <= 1; loop++) { /* exercise both create/lookup */
+		unsigned int count, nvma;
+
+		nvma = loop;
+		for_each_prime_number_from(sz, 1, npages) {
+			for_each_prime_number_from(offset, 0, npages - sz) {
+				struct i915_address_space *vm =
+					&i915->ggtt.base;
+				struct i915_ggtt_view view;
+
+				view.type = I915_GGTT_VIEW_PARTIAL;
+				view.partial.offset = offset;
+				view.partial.size = sz;
+
+				if (sz == npages)
+					view.type = I915_GGTT_VIEW_NORMAL;
+
+				vma = i915_vma_instance(obj, vm, &view);
+				if (IS_ERR(vma)) {
+					err = PTR_ERR(vma);
+					goto err_object;
+				}
+
+				if (!i915_vma_is_ggtt(vma) || vma->vm != vm) {
+					pr_err("VMA is not in the GGTT!\n");
+					err = -EINVAL;
+					goto err_object;
+				}
+
+				if (i915_vma_compare(vma,
+						     vma->vm,
+						     &vma->ggtt_view)) {
+					pr_err("VMA compare failed with itself\n");
+					err = -EINVAL;
+					goto err_object;
+				}
+
+				err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+				if (err)
+					goto err_object;
+
+				if (vma->size != sz*PAGE_SIZE) {
+					pr_err("VMA is wrong size, expected %lu, found %llu\n",
+					       sz*PAGE_SIZE, vma->size);
+					err = -EINVAL;
+					goto err_object;
+				}
+
+				if (vma->node.size < vma->size) {
+					pr_err("VMA binding too small, expected %llu, found %llu\n",
+					       vma->size, vma->node.size);
+					err = -EINVAL;
+					goto err_object;
+				}
+
+				if (view.type != I915_GGTT_VIEW_NORMAL) {
+					if (memcmp(&vma->ggtt_view, &view, sizeof(view))) {
+						pr_err("VMA mismatch upon creation!\n");
+						err = -EINVAL;
+						goto err_object;
+					}
+
+					if (vma->pages == obj->mm.pages) {
+						pr_err("VMA using unrotated object pages!\n");
+						err = -EINVAL;
+						goto err_object;
+					}
+				}
+
+				if (!assert_partial(obj, vma, offset, sz)) {
+					pr_err("Inconsistent partial pages for (offset=%d, size=%d)\n", offset, sz);
+					err = -EINVAL;
+					goto err_object;
+				}
+
+				i915_vma_unpin(vma);
+				nvma++;
+			}
+		}
+
+		count = loop;
+		list_for_each_entry(vma, &obj->vma_list, obj_link)
+			count++;
+		if (count != nvma) {
+			pr_err("All partial vma were not recorded on the obj->vma_list: found %u, expected %u\n",
+			       count, nvma);
+			err = -EINVAL;
+			goto err_object;
+		}
+
+		/* Create a mapping for the entire object, just for extra fun */
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err_object;
+		}
+
+		if (!i915_vma_is_ggtt(vma)) {
+			pr_err("VMA is not in the GGTT!\n");
+			err = -EINVAL;
+			goto err_object;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+		if (err)
+			goto err_object;
+
+		if (vma->size != obj->base.size) {
+			pr_err("VMA is wrong size, expected %lu, found %llu\n",
+			       sz*PAGE_SIZE, vma->size);
+			err = -EINVAL;
+			goto err_object;
+		}
+
+		if (vma->node.size < vma->size) {
+			pr_err("VMA binding too small, expected %llu, found %llu\n",
+			       vma->size, vma->node.size);
+			err = -EINVAL;
+			goto err_object;
+		}
+
+		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
+			pr_err("Not the normal ggtt view! Found %d\n",
+			       vma->ggtt_view.type);
+			err = -EINVAL;
+			goto err_object;
+		}
+
+		if (vma->pages != obj->mm.pages) {
+			pr_err("VMA not using object pages!\n");
+			err = -EINVAL;
+			goto err_object;
+		}
+
+		i915_vma_unpin(vma);
+	}
+
+err_object:
+	i915_gem_object_put(obj);
+err:
+	return err;
+}
+
 int i915_vma_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_vma_create),
 		SUBTEST(igt_vma_pin1),
 		SUBTEST(igt_vma_rotate),
+		SUBTEST(igt_vma_partial),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 34/38] drm/i915: Live testing for context execution
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (32 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 33/38] drm/i915: Test creation of partial VMA Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-25 14:51   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 35/38] drm/i915: Initial selftests for exercising eviction Chris Wilson
                   ` (4 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Check we can create and execution within a context.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_context.c            |   1 +
 drivers/gpu/drm/i915/selftests/i915_gem_context.c  | 323 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 3 files changed, 325 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_context.c

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 5dc596a86ab1..460979cc0745 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -1167,4 +1167,5 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_context.c"
+#include "selftests/i915_gem_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/selftests/i915_gem_context.c
new file mode 100644
index 000000000000..68bfa56b5626
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_context.c
@@ -0,0 +1,323 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+#include "mock_drm.h"
+#include "huge_gem_object.h"
+
+#define DW_PER_PAGE (PAGE_SIZE / sizeof(u32))
+
+static struct i915_vma *
+gpu_fill_pages(struct i915_vma *vma, u64 offset, unsigned long count, u32 value)
+{
+	struct drm_i915_gem_object *obj;
+	const int gen = INTEL_GEN(vma->vm->i915);
+	unsigned long sz = (4*count + 1)*sizeof(u32);
+	u32 *cmd;
+	int err;
+
+	obj = i915_gem_object_create_internal(vma->vm->i915,
+					      round_up(sz, PAGE_SIZE));
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(cmd)) {
+		i915_gem_object_put(obj);
+		return ERR_CAST(cmd);
+	}
+
+	GEM_BUG_ON(offset + (count - 1) * PAGE_SIZE > vma->node.size);
+	offset += vma->node.start;
+
+	for (sz = 0; sz < count; sz++) {
+		if (gen >= 8) {
+			*cmd++ = MI_STORE_DWORD_IMM_GEN4;
+			*cmd++ = lower_32_bits(offset);
+			*cmd++ = upper_32_bits(offset);
+			*cmd++ = value;
+		} else if (gen >= 4) {
+			*cmd++ = MI_STORE_DWORD_IMM_GEN4;
+			*cmd++ = 0;
+			*cmd++ = offset;
+			*cmd++ = value;
+		} else {
+			*cmd++ = MI_STORE_DWORD_IMM | 1 << 22;
+			*cmd++ = offset;
+			*cmd++ = value;
+		}
+		offset += PAGE_SIZE;
+	}
+	*cmd = MI_BATCH_BUFFER_END;
+	i915_gem_object_unpin_map(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err) {
+		i915_gem_object_put(obj);
+		return ERR_PTR(err);
+	}
+
+	vma = i915_vma_instance(obj, vma->vm, NULL);
+	if (IS_ERR(vma)) {
+		i915_gem_object_put(obj);
+		return vma;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err) {
+		i915_gem_object_put(obj);
+		return ERR_PTR(err);
+	}
+
+	return vma;
+}
+
+static int gpu_fill(struct drm_i915_gem_object *obj,
+		    struct i915_gem_context *ctx,
+		    struct intel_engine_cs *engine)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_address_space *vm =
+		ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct i915_vma *vma;
+	struct i915_vma *batch;
+	unsigned long n, max;
+	int err;
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		return err;
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		return err;
+
+	GEM_BUG_ON(!IS_ALIGNED(obj->base.size >> PAGE_SHIFT, DW_PER_PAGE));
+	max = (obj->base.size >> PAGE_SHIFT) / DW_PER_PAGE;
+	for (n = 0; n < max; n++) {
+		struct drm_i915_gem_request *rq;
+
+		batch = gpu_fill_pages(vma,
+				       ((n * DW_PER_PAGE) << PAGE_SHIFT) |
+				       (n * sizeof(u32)),
+				       DW_PER_PAGE,
+				       n);
+		if (IS_ERR(batch)) {
+			err = PTR_ERR(batch);
+			goto err_vma;
+		}
+
+		rq = i915_gem_request_alloc(engine, ctx);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto err_batch;
+		}
+
+		i915_switch_context(rq);
+		engine->emit_bb_start(rq,
+				      batch->node.start, batch->node.size, 0);
+
+		i915_vma_move_to_active(batch, rq, 0);
+		i915_gem_object_set_active_reference(batch->obj);
+		i915_vma_unpin(batch);
+		i915_vma_close(batch);
+
+		i915_vma_move_to_active(vma, rq, 0);
+
+		reservation_object_lock(obj->resv, NULL);
+		reservation_object_add_excl_fence(obj->resv, &rq->fence);
+		reservation_object_unlock(obj->resv);
+
+		__i915_add_request(rq, true);
+	}
+	i915_vma_unpin(vma);
+
+	return 0;
+
+err_batch:
+	i915_vma_unpin(batch);
+err_vma:
+	i915_vma_unpin(vma);
+	return err;
+}
+
+static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
+{
+	const bool has_llc = HAS_LLC(to_i915(obj->base.dev));
+	unsigned int n, m, need_flush;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_write(obj, &need_flush);
+	if (err)
+		return err;
+
+	for (n = 0; n < DW_PER_PAGE; n++) {
+		u32 *map;
+
+		map = kmap_atomic(i915_gem_object_get_page(obj, n));
+		for (m = 0; m < DW_PER_PAGE; m++)
+			map[m] = value;
+		if (!has_llc)
+			drm_clflush_virt_range(map, PAGE_SIZE);
+		kunmap_atomic(map);
+	}
+
+	i915_gem_obj_finish_shmem_access(obj);
+	obj->base.read_domains = I915_GEM_DOMAIN_GTT | I915_GEM_DOMAIN_CPU;
+	obj->base.write_domain = 0;
+	return 0;
+}
+
+static int cpu_check(struct drm_i915_gem_object *obj,
+		     unsigned long num)
+{
+	const unsigned int max = (obj->base.size >> PAGE_SHIFT) / DW_PER_PAGE;
+	unsigned int n, m, needs_flush;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_read(obj, &needs_flush);
+	if (err)
+		return err;
+
+	for (n = 0; !err && n < DW_PER_PAGE; n++) {
+		u32 *map;
+
+		map = kmap_atomic(i915_gem_object_get_page(obj, n));
+		if (needs_flush & CLFLUSH_BEFORE)
+			drm_clflush_virt_range(map, sizeof(u32)*max);
+		for (m = 0; !err && m < max; m++) {
+			if (map[m] != m) {
+				pr_err("Invalid value in object %lu at page %d, offset %d: found %x expected %x\n",
+				       num, n, m, map[m], m);
+				err = -EINVAL;
+			}
+		}
+		kunmap_atomic(map);
+	}
+
+	i915_gem_obj_finish_shmem_access(obj);
+	return err;
+}
+
+static int igt_ctx_exec(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_file *file = mock_file(i915);
+	struct drm_i915_gem_object *obj;
+	IGT_TIMEOUT(end_time);
+	LIST_HEAD(objects);
+	unsigned int count;
+	int err = 0;
+
+	/* Create a few different contexts (with different mm) and write
+	 * through each ctx/mm using the GPU making sure those writes end
+	 * up in the expected pages of our obj.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	count = 0;
+	while (!time_after(jiffies, end_time)) {
+		struct intel_engine_cs *engine;
+		struct i915_gem_context *ctx;
+		struct i915_address_space *vm;
+		unsigned int id;
+
+		ctx = i915_gem_create_context(i915, file->driver_priv);
+		if (IS_ERR(ctx)) {
+			err = PTR_ERR(ctx);
+			goto err;
+		}
+
+		vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+
+		for_each_engine(engine, i915, id) {
+			u64 npages;
+			u32 handle;
+
+			npages = min(vm->total / 2,
+				     1024ull * DW_PER_PAGE * PAGE_SIZE);
+			npages = round_down(npages, DW_PER_PAGE * PAGE_SIZE);
+			obj = huge_gem_object(i915,
+					      DW_PER_PAGE * PAGE_SIZE,
+					      npages);
+			if (IS_ERR(obj)) {
+				err = PTR_ERR(obj);
+				goto err;
+			}
+
+			/* tie the handle to the drm_file for easy reaping */
+			err = drm_gem_handle_create(file, &obj->base, &handle);
+			if (err) {
+				i915_gem_object_put(obj);
+				goto err;
+			}
+
+			err = cpu_fill(obj, 0xdeadbeef);
+			if (err) {
+				pr_err("Failed to fill object with cpu, err=%d\n",
+				       err);
+				goto err;
+			}
+
+			err = gpu_fill(obj, ctx, engine);
+			if (err) {
+				pr_err("Failed to fill object with gpu (%s), err=%d\n",
+				       engine->name, err);
+				goto err;
+			}
+
+			list_add_tail(&obj->batch_pool_link, &objects);
+		}
+		count++;
+	}
+	pr_info("Submitted %d contexts (across %u engines)\n",
+		count, INTEL_INFO(i915)->num_rings);
+
+	count = 0;
+	list_for_each_entry(obj, &objects, batch_pool_link) {
+		err = cpu_check(obj, count++);
+		if (err)
+			break;
+	}
+
+err:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	mock_file_free(i915, file);
+	return err;
+}
+
+int i915_gem_context_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_ctx_exec),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 94517ad6dbd1..0c925f17b445 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -14,3 +14,4 @@ selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
 selftest(gtt, i915_gem_gtt_live_selftests)
+selftest(context, i915_gem_context_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 35/38] drm/i915: Initial selftests for exercising eviction
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (33 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 34/38] drm/i915: Live testing for context execution Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-19 11:41 ` [PATCH v2 36/38] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
                   ` (3 subsequent siblings)
  38 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Very simple tests to just ask eviction to find some free space in a full
GTT and one with some available space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_evict.c              |   4 +
 drivers/gpu/drm/i915/selftests/i915_gem_evict.c    | 258 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 3 files changed, 263 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_evict.c

diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index c181b1bb3d2c..609a8fcb48ca 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -387,3 +387,7 @@ int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
 
 	return 0;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/i915_gem_evict.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
new file mode 100644
index 000000000000..5bf5a1ccfd5b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
@@ -0,0 +1,258 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+#include "mock_gem_device.h"
+
+static int populate_ggtt(struct drm_i915_private *i915)
+{
+	struct drm_i915_gem_object *obj;
+	u64 size;
+
+	for (size = 0;
+	     size + I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     size += I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
+		if (IS_ERR(vma))
+			return PTR_ERR(vma);
+	}
+
+	if (!list_empty(&i915->mm.unbound_list)) {
+		size = 0;
+		list_for_each_entry(obj, &i915->mm.unbound_list, global_link)
+			size++;
+
+		pr_err("Found %lld objects unbound!\n", size);
+		return -EINVAL;
+	}
+
+	if (list_empty(&i915->ggtt.base.inactive_list)) {
+		pr_err("No objects on the GGTT inactive list!\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void unpin_ggtt(struct drm_i915_private *i915)
+{
+	struct i915_vma *vma;
+
+	list_for_each_entry(vma, &i915->ggtt.base.inactive_list, vm_link)
+		i915_vma_unpin(vma);
+}
+
+static void cleanup_objects(struct drm_i915_private *i915)
+{
+	struct drm_i915_gem_object *obj, *on;
+
+	list_for_each_entry_safe(obj, on, &i915->mm.unbound_list, global_link)
+		i915_gem_object_put(obj);
+
+	list_for_each_entry_safe(obj, on, &i915->mm.bound_list, global_link)
+		i915_gem_object_put(obj);
+
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	i915_gem_drain_freed_objects(i915);
+
+	mutex_lock(&i915->drm.struct_mutex);
+}
+
+static int igt_evict_something(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	int err;
+
+	/* Fill the GGTT with pinned objects and try to evict one. */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	/* Everything is pinned, nothing should happen */
+	err = i915_gem_evict_something(&ggtt->base,
+				       I915_GTT_PAGE_SIZE, 0, 0,
+				       0, U64_MAX,
+				       0);
+	if (err != -ENOSPC) {
+		pr_err("i915_gem_evict_something failed on a full GGTT with err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+	unpin_ggtt(i915);
+
+	/* Everything is unpinned, we should be able to evict something */
+	err = i915_gem_evict_something(&ggtt->base,
+				       I915_GTT_PAGE_SIZE, 0, 0,
+				       0, U64_MAX,
+				       0);
+	if (err) {
+		pr_err("i915_gem_evict_something failed on a full GGTT with err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+static int igt_overcommit(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int err;
+
+	/* Fill the GGTT with pinned objects and then try to pin one more.
+	 * We expect it to fail.
+	 */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto cleanup;
+	}
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
+	if (IS_ERR(vma) && PTR_ERR(vma) != -ENOSPC) {
+		pr_err("Failed to evict+insert, i915_gem_object_ggtt_pin returned err=%d\n", (int)PTR_ERR(vma));
+		err = PTR_ERR(vma);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+static int igt_evict_for_vma(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_mm_node target = {
+		.start = 0,
+		.size = 4096,
+	};
+	int err;
+
+	/* Fill the GGTT with pinned objects and try to evict a range. */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	/* Everything is pinned, nothing should happen */
+	err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
+	if (err != -ENOSPC) {
+		pr_err("i915_gem_evict_for_node on a full GGTT returned err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+	unpin_ggtt(i915);
+
+	/* Everything is unpinned, we should be able to evict the node */
+	err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
+	if (err) {
+		pr_err("i915_gem_evict_for_node returned err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+static int igt_evict_vm(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	int err;
+
+	/* Fill the GGTT with pinned objects and try to evict everything. */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	/* Everything is pinned, nothing should happen */
+	err = i915_gem_evict_vm(&ggtt->base, false);
+	if (err) {
+		pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
+		       err);
+		goto cleanup;
+	}
+
+	unpin_ggtt(i915);
+
+	err = i915_gem_evict_vm(&ggtt->base, false);
+	if (err) {
+		pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
+		       err);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+int i915_gem_evict_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_evict_something),
+		SUBTEST(igt_evict_for_vma),
+		SUBTEST(igt_evict_vm),
+		SUBTEST(igt_overcommit),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_subtests(tests, i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index b450eab7e6e1..cfbd3f5486ae 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -16,3 +16,4 @@ selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
 selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
 selftest(vma, i915_vma_mock_selftests)
+selftest(evict, i915_gem_evict_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 36/38] drm/i915: Add mock exercise for i915_gem_gtt_reserve
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (34 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 35/38] drm/i915: Initial selftests for exercising eviction Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-25 13:30   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 37/38] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
                   ` (2 subsequent siblings)
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

i915_gem_gtt_reserve should put the node exactly as requested in the
GTT, evicting as required.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      | 195 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 2 files changed, 196 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 28915e4225e3..6f87a0400ad8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -28,6 +28,7 @@
 #include "i915_random.h"
 #include "mock_drm.h"
 #include "huge_gem_object.h"
+#include "mock_gem_device.h"
 
 static int igt_ppgtt_alloc(void *arg)
 {
@@ -561,6 +562,200 @@ static int igt_ggtt_drunk(void *arg)
 	return err;
 }
 
+static void track_vma_bind(struct i915_vma *vma)
+{
+	struct drm_i915_gem_object *obj = vma->obj;
+
+	obj->bind_count++; /* track for eviction later */
+	__i915_gem_object_pin_pages(obj);
+
+	vma->pages = obj->mm.pages;
+	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
+}
+
+static int igt_gtt_reserve(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *on;
+	LIST_HEAD(objects);
+	u64 total;
+	int err;
+
+	/* i915_gem_gtt_reserve() tries to reserve the precise range
+	 * for the node, and evicts if it has to. So our test checks that
+	 * it can give us the requsted space and prevent overlaps.
+	 */
+
+	/* Start by filling the GGTT */
+	for (total = 0;
+	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += 2*I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, 2*PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto err;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto err;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
+					   obj->base.size,
+					   total,
+					   obj->cache_level,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_reserve (pass 1) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto err;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != total ||
+		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
+			pr_err("i915_gem_gtt_reserve (pass 1) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			       vma->node.start, vma->node.size,
+			       total, 2*I915_GTT_PAGE_SIZE);
+			err = -EINVAL;
+			goto err;
+		}
+	}
+
+	/* Now we start forcing evictions */
+	for (total = I915_GTT_PAGE_SIZE;
+	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += 2*I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, 2*PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto err;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto err;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
+					   obj->base.size,
+					   total,
+					   obj->cache_level,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_reserve (pass 2) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto err;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != total ||
+		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
+			pr_err("i915_gem_gtt_reserve (pass 2) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			       vma->node.start, vma->node.size,
+			       total, 2*I915_GTT_PAGE_SIZE);
+			err = -EINVAL;
+			goto err;
+		}
+	}
+
+	/* And then try at random */
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		struct i915_vma *vma;
+		u64 offset;
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		err = i915_vma_unbind(vma);
+		if (err) {
+			pr_err("i915_vma_unbind failed with err=%d!\n", err);
+			goto err;
+		}
+
+		offset = random_offset(0, i915->ggtt.base.total,
+				       2*I915_GTT_PAGE_SIZE,
+				       I915_GTT_MIN_ALIGNMENT);
+
+		err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
+					   obj->base.size,
+					   offset,
+					   obj->cache_level,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_reserve (pass 3) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto err;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != offset ||
+		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
+			pr_err("i915_gem_gtt_reserve (pass 3) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			       vma->node.start, vma->node.size,
+			       offset, 2*I915_GTT_PAGE_SIZE);
+			err = -EINVAL;
+			goto err;
+		}
+	}
+
+err:
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+	return err;
+}
+
+int i915_gem_gtt_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gtt_reserve),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_subtests(tests, i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index cfbd3f5486ae..be9a9ebf5692 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -17,3 +17,4 @@ selftest(objects, i915_gem_object_mock_selftests)
 selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
 selftest(vma, i915_vma_mock_selftests)
 selftest(evict, i915_gem_evict_mock_selftests)
+selftest(gtt, i915_gem_gtt_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 37/38] drm/i915: Add mock exercise for i915_gem_gtt_insert
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (35 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 36/38] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-01-25 13:31   ` Joonas Lahtinen
  2017-01-19 11:41 ` [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
  2017-01-19 13:54 ` ✗ Fi.CI.BAT: failure for series starting with [v2,01/38] drm: Provide a driver hook for drm_dev_release() Patchwork
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

i915_gem_gtt_insert should allocate from the available free space in the
GTT, evicting as necessary to create space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 208 ++++++++++++++++++++++++++
 1 file changed, 208 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 6f87a0400ad8..b6d07d351f62 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -736,10 +736,218 @@ static int igt_gtt_reserve(void *arg)
 	return err;
 }
 
+static int igt_gtt_insert(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *on;
+	struct drm_mm_node tmp = {};
+	const struct invalid_insert {
+		u64 size;
+		u64 alignment;
+		u64 start, end;
+	} invalid_insert[] = {
+		{
+			i915->ggtt.base.total + I915_GTT_PAGE_SIZE, 0,
+			0, i915->ggtt.base.total,
+		},
+		{
+			2*I915_GTT_PAGE_SIZE, 0,
+			0, I915_GTT_PAGE_SIZE,
+		},
+		{
+			-(u64)I915_GTT_PAGE_SIZE, 0,
+			0, 4*I915_GTT_PAGE_SIZE,
+		},
+		{
+			-(u64)2*I915_GTT_PAGE_SIZE, 2*I915_GTT_PAGE_SIZE,
+			0, 4*I915_GTT_PAGE_SIZE,
+		},
+		{
+			I915_GTT_PAGE_SIZE, I915_GTT_MIN_ALIGNMENT << 1,
+			I915_GTT_MIN_ALIGNMENT, I915_GTT_MIN_ALIGNMENT << 1,
+		},
+		{}
+	}, *ii;
+	LIST_HEAD(objects);
+	u64 total;
+	int err;
+
+	/* i915_gem_gtt_insert() tries to allocate some free space in the GTT
+	 * to the node, evicting if required.
+	 */
+
+	/* Check a couple of obviously invalid requests */
+	for (ii = invalid_insert; ii->size; ii++) {
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &tmp,
+					  ii->size, ii->alignment,
+					  I915_COLOR_UNEVICTABLE,
+					  ii->start, ii->end,
+					  0);
+		if (err != -ENOSPC) {
+			pr_err("Invalid i915_gem_gtt_insert(.size=%llx, .alignment=%llx, .start=%llx, .end=%llx) succeeded (err=%d)\n",
+			       ii->size, ii->alignment, ii->start, ii->end,
+			       err);
+			return -EINVAL;
+		}
+	}
+
+	/* Start by filling the GGTT */
+	for (total = 0;
+	     total + I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto err;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto err;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
+					   obj->base.size, 0, obj->cache_level,
+					   0, i915->ggtt.base.total,
+					   0);
+		if (err == -ENOSPC) {
+			/* maxed out the GGTT space */
+			i915_gem_object_put(obj);
+			break;
+		}
+		if (err) {
+			pr_err("i915_gem_gtt_insert (pass 1) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto err;
+		}
+		track_vma_bind(vma);
+		__i915_vma_pin(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+	}
+
+	list_for_each_entry(obj, &objects, batch_pool_link) {
+		struct i915_vma *vma;
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		if (!drm_mm_node_allocated(&vma->node)) {
+			pr_err("VMA was unexpectedly evicted!\n");
+			err = -EINVAL;
+			goto err;
+		}
+
+		__i915_vma_unpin(vma);
+	}
+
+	/* If we then reinsert, we should find the same hole */
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		struct i915_vma *vma;
+		u64 offset;
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		offset = vma->node.start;
+
+		err = i915_vma_unbind(vma);
+		if (err) {
+			pr_err("i915_vma_unbind failed with err=%d!\n", err);
+			goto err;
+		}
+
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
+					   obj->base.size, 0, obj->cache_level,
+					   0, i915->ggtt.base.total,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_insert (pass 2) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto err;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != offset) {
+			pr_err("i915_gem_gtt_insert did not return node to its previous location (the only hole), expected address %llx, found %llx\n",
+			       offset, vma->node.start);
+			err = -EINVAL;
+			goto err;
+		}
+	}
+
+	/* And then force evictions */
+	for (total = 0;
+	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += 2*I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, 2*I915_GTT_PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto err;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto err;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
+					   obj->base.size, 0, obj->cache_level,
+					   0, i915->ggtt.base.total,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_insert (pass 1) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto err;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+	}
+
+err:
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+	return err;
+}
+
 int i915_gem_gtt_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_gtt_reserve),
+		SUBTEST(igt_gtt_insert),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (36 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 37/38] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
@ 2017-01-19 11:41 ` Chris Wilson
  2017-02-01 11:43   ` Mika Kuoppala
  2017-01-19 13:54 ` ✗ Fi.CI.BAT: failure for series starting with [v2,01/38] drm: Provide a driver hook for drm_dev_release() Patchwork
  38 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-01-19 11:41 UTC (permalink / raw)
  To: intel-gfx

Check that we can reset the GPU and continue executing from the next
request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_hangcheck.c             |   4 +
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/intel_hangcheck.c   | 463 +++++++++++++++++++++
 3 files changed, 468 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_hangcheck.c

diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
index f05971f5586f..dce742243ba6 100644
--- a/drivers/gpu/drm/i915/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/intel_hangcheck.c
@@ -480,3 +480,7 @@ void intel_hangcheck_init(struct drm_i915_private *i915)
 	INIT_DELAYED_WORK(&i915->gpu_error.hangcheck_work,
 			  i915_hangcheck_elapsed);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_hangcheck.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 0c925f17b445..e6699c59f244 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -15,3 +15,4 @@ selftest(object, i915_gem_object_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
 selftest(gtt, i915_gem_gtt_live_selftests)
 selftest(context, i915_gem_context_live_selftests)
+selftest(hangcheck, intel_hangcheck_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
new file mode 100644
index 000000000000..d306890ba7eb
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
@@ -0,0 +1,463 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_selftest.h"
+
+struct hang {
+	struct drm_i915_private *i915;
+	struct drm_i915_gem_object *hws;
+	struct drm_i915_gem_object *obj;
+	u32 *seqno;
+	u32 *batch;
+};
+
+static int hang_init(struct hang *h, struct drm_i915_private *i915)
+{
+	void *vaddr;
+
+	memset(h, 0, sizeof(*h));
+	h->i915 = i915;
+
+	h->hws = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(h->hws))
+		return PTR_ERR(h->hws);
+
+	h->obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(h->obj)) {
+		i915_gem_object_put(h->obj);
+		return PTR_ERR(h->obj);
+	}
+
+	i915_gem_object_set_cache_level(h->hws, I915_CACHE_LLC);
+	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
+	if (IS_ERR(vaddr)) {
+		i915_gem_object_put(h->hws);
+		i915_gem_object_put(h->obj);
+		return PTR_ERR(vaddr);
+	}
+	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
+
+	vaddr = i915_gem_object_pin_map(h->obj,
+					HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC);
+	if (IS_ERR(vaddr)) {
+		i915_gem_object_unpin_map(h->hws);
+		i915_gem_object_put(h->hws);
+		i915_gem_object_put(h->obj);
+		return PTR_ERR(vaddr);
+	}
+	h->batch = vaddr;
+
+	return 0;
+}
+
+static u64 hws_address(const struct i915_vma *hws,
+		       const struct drm_i915_gem_request *rq)
+{
+	return hws->node.start + offset_in_page(sizeof(u32)*rq->fence.context);
+}
+
+static int emit_recurse_batch(struct hang *h,
+			      struct drm_i915_gem_request *rq)
+{
+	struct drm_i915_private *i915 = h->i915;
+	struct i915_address_space *vm = rq->ctx->ppgtt ? &rq->ctx->ppgtt->base : &i915->ggtt.base;
+	struct i915_vma *hws, *vma;
+	u32 *batch;
+	int err;
+
+	vma = i915_vma_instance(h->obj, vm, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	hws = i915_vma_instance(h->hws, vm, NULL);
+	if (IS_ERR(hws))
+		return PTR_ERR(hws);
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		return err;
+
+	err = i915_vma_pin(hws, 0, 0, PIN_USER);
+	if (err)
+		goto unpin_vma;
+
+	i915_vma_move_to_active(vma, rq, 0);
+	i915_vma_move_to_active(hws, rq, 0);
+
+	batch = h->batch;
+	if (INTEL_GEN(i915) >= 8) {
+		*batch++ = MI_STORE_DWORD_IMM_GEN4;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = upper_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
+		*batch++ = lower_32_bits(vma->node.start);
+		*batch++ = upper_32_bits(vma->node.start);
+	} else if (INTEL_GEN(i915) >= 6) {
+		*batch++ = MI_STORE_DWORD_IMM_GEN4;
+		*batch++ = 0;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
+		*batch++ = lower_32_bits(vma->node.start);
+	} else if (INTEL_GEN(i915) >= 4) {
+		*batch++ = MI_STORE_DWORD_IMM_GEN4;
+		*batch++ = 0;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
+		*batch++ = lower_32_bits(vma->node.start);
+	} else {
+		*batch++ = MI_STORE_DWORD_IMM;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 2 << 6 | 1;
+		*batch++ = lower_32_bits(vma->node.start);
+	}
+
+	err = rq->engine->emit_bb_start(rq, vma->node.start, PAGE_SIZE, 0);
+
+	i915_vma_unpin(hws);
+unpin_vma:
+	i915_vma_unpin(vma);
+	return err;
+}
+
+static struct drm_i915_gem_request *
+hang_create_request(struct hang *h,
+		    struct intel_engine_cs *engine,
+		    struct i915_gem_context *ctx)
+{
+	struct drm_i915_gem_request *rq;
+	int err;
+
+	if (i915_gem_object_is_active(h->obj)) {
+		struct drm_i915_gem_object *obj;
+		void *vaddr;
+
+		obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
+		if (IS_ERR(obj))
+			return ERR_CAST(obj);
+
+		vaddr = i915_gem_object_pin_map(obj,
+						HAS_LLC(h->i915) ? I915_MAP_WB : I915_MAP_WC);
+		if (IS_ERR(vaddr)) {
+			i915_gem_object_put(obj);
+			return ERR_CAST(vaddr);
+		}
+
+		i915_gem_object_unpin_map(h->obj);
+		__i915_gem_object_release_unless_active(h->obj);
+
+		h->obj = obj;
+		h->batch = vaddr;
+	}
+
+	rq = i915_gem_request_alloc(engine, ctx);
+	if (IS_ERR(rq))
+		return rq;
+
+	err = emit_recurse_batch(h, rq);
+	if (err) {
+		__i915_add_request(rq, false);
+		return ERR_PTR(err);
+	}
+
+	return rq;
+}
+
+static u32 hws_seqno(const struct hang *h,
+		     const struct drm_i915_gem_request *rq)
+{
+	return READ_ONCE(h->seqno[rq->fence.context % (PAGE_SIZE/sizeof(u32))]);
+}
+
+static void hang_fini(struct hang *h)
+{
+	*h->batch = MI_BATCH_BUFFER_END;
+
+	i915_gem_object_unpin_map(h->obj);
+	__i915_gem_object_release_unless_active(h->obj);
+
+	i915_gem_object_unpin_map(h->hws);
+	__i915_gem_object_release_unless_active(h->hws);
+}
+
+static int igt_hang_sanitycheck(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *rq;
+	struct hang h;
+	int err;
+
+	/* Basic check that we can execute our hanging batch */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = hang_init(&h, i915);
+	if (err)
+		goto unlock;
+
+	rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto fini;
+	}
+
+	i915_gem_request_get(rq);
+
+	*h.batch = MI_BATCH_BUFFER_END;
+	__i915_add_request(rq, true);
+
+	i915_wait_request(rq, I915_WAIT_LOCKED, MAX_SCHEDULE_TIMEOUT);
+	i915_gem_request_put(rq);
+
+fini:
+	hang_fini(&h);
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
+static int igt_global_reset(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	unsigned int reset_count;
+	int err = 0;
+
+	/* Check that we can issue a global GPU reset */
+
+	if (!intel_has_gpu_reset(i915))
+		return 0;
+
+	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
+
+	mutex_lock(&i915->drm.struct_mutex);
+	reset_count = i915_reset_count(&i915->gpu_error);
+
+	i915_reset(i915);
+
+	if (i915_reset_count(&i915->gpu_error) == reset_count) {
+		pr_err("No GPU reset recorded!\n");
+		err = -EINVAL;
+	}
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
+	if (i915_terminally_wedged(&i915->gpu_error))
+		err = -EIO;
+
+	return err;
+}
+
+static u32 fake_hangcheck(struct drm_i915_gem_request *rq)
+{
+	u32 reset_count;
+
+	rq->engine->hangcheck.stalled = true;
+	rq->engine->hangcheck.seqno = intel_engine_get_seqno(rq->engine);
+
+	reset_count = i915_reset_count(&rq->i915->gpu_error);
+
+	set_bit(I915_RESET_IN_PROGRESS, &rq->i915->gpu_error.flags);
+	wake_up_all(&rq->i915->gpu_error.wait_queue);
+
+	return reset_count;
+}
+
+static bool wait_for_hang(struct hang *h, struct drm_i915_gem_request *rq)
+{
+	return !(wait_for_us(i915_seqno_passed(hws_seqno(h, rq),
+					       rq->fence.seqno),
+			     10) &&
+		 wait_for(i915_seqno_passed(hws_seqno(h, rq),
+					    rq->fence.seqno),
+			  1000));
+}
+
+static int igt_wait_reset(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *rq;
+	unsigned int reset_count;
+	struct hang h;
+	long timeout;
+	int err;
+
+	/* Check that we detect a stuck waiter and issue a reset */
+
+	if (!intel_has_gpu_reset(i915))
+		return 0;
+
+	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = hang_init(&h, i915);
+	if (err)
+		goto unlock;
+
+	rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto fini;
+	}
+
+	i915_gem_request_get(rq);
+	__i915_add_request(rq, true);
+
+	if (!wait_for_hang(&h, rq)) {
+		pr_err("Failed to start request %x\n", rq->fence.seqno);
+		err = -EIO;
+		goto fini;
+	}
+
+	reset_count = fake_hangcheck(rq);
+
+	timeout = i915_wait_request(rq, I915_WAIT_LOCKED, 10);
+	if (timeout < 0) {
+		pr_err("i915_wait_request failed on a stuck request: err=%ld\n",
+		       timeout);
+		err = timeout;
+		goto fini;
+	}
+	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
+
+	if (i915_reset_count(&i915->gpu_error) == reset_count) {
+		pr_err("No GPU reset recorded!\n");
+		err = -EINVAL;
+		goto fini;
+	}
+
+fini:
+	hang_fini(&h);
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	if (i915_terminally_wedged(&i915->gpu_error))
+		return -EIO;
+
+	return err;
+}
+
+static int igt_reset_queue(void *arg)
+{
+	IGT_TIMEOUT(end_time);
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *prev;
+	unsigned int count;
+	struct hang h;
+	int err;
+
+	/* Check that we replay pending requests following a hang */
+
+	if (!intel_has_gpu_reset(i915))
+		return 0;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = hang_init(&h, i915);
+	if (err)
+		goto unlock;
+
+	prev = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(prev)) {
+		err = PTR_ERR(prev);
+		goto fini;
+	}
+
+	i915_gem_request_get(prev);
+	__i915_add_request(prev, true);
+
+	count = 0;
+	do {
+		struct drm_i915_gem_request *rq;
+		unsigned int reset_count;
+
+		rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto fini;
+		}
+
+		i915_gem_request_get(rq);
+		__i915_add_request(rq, true);
+
+		if (!wait_for_hang(&h, prev)) {
+			pr_err("Failed to start request %x\n",
+			       prev->fence.seqno);
+			err = -EIO;
+			goto fini;
+		}
+
+		reset_count = fake_hangcheck(prev);
+
+		i915_reset(i915);
+
+		GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
+		if (prev->fence.error != -EIO) {
+			pr_err("GPU reset not recorded on hanging request [fence.error=%d]!\n",
+			       prev->fence.error);
+			err = -EINVAL;
+			goto fini;
+		}
+
+		if (rq->fence.error) {
+			pr_err("Fence error status not zero [%d] after unrelated reset\n",
+			       rq->fence.error);
+			err = -EINVAL;
+			goto fini;
+		}
+
+		if (i915_reset_count(&i915->gpu_error) == reset_count) {
+			pr_err("No GPU reset recorded!\n");
+			err = -EINVAL;
+			goto fini;
+		}
+
+		i915_gem_request_put(prev);
+		prev = rq;
+		count++;
+	} while (time_before(jiffies, end_time));
+	pr_info("Completed %d resets\n", count);
+	i915_gem_request_put(prev);
+
+fini:
+	hang_fini(&h);
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	if (i915_terminally_wedged(&i915->gpu_error))
+		return -EIO;
+
+	return err;
+}
+
+int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_hang_sanitycheck),
+		SUBTEST(igt_global_reset),
+		SUBTEST(igt_wait_reset),
+		SUBTEST(igt_reset_queue),
+	};
+	return i915_subtests(tests, i915);
+}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 20/38] drm/i915: Test coherency of and barriers between cache domains
  2017-01-19 11:41 ` [PATCH v2 20/38] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
@ 2017-01-19 13:01   ` Matthew Auld
  0 siblings, 0 replies; 73+ messages in thread
From: Matthew Auld @ 2017-01-19 13:01 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 19 January 2017 at 11:41, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Write into an object using WB, WC, GTT, and GPU paths and make sure that
> our internal API is sufficient to ensure coherent reads and writes.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c                    |   1 +
>  .../gpu/drm/i915/selftests/i915_gem_coherency.c    | 363 +++++++++++++++++++++
>  .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
>  3 files changed, 365 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0772a4e0e3ef..2b6c0f9b02d0 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4944,4 +4944,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
>  #include "selftests/mock_gem_device.c"
>  #include "selftests/huge_gem_object.c"
>  #include "selftests/i915_gem_object.c"
> +#include "selftests/i915_gem_coherency.c"
>  #endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
> new file mode 100644
> index 000000000000..0a5ef721c501
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
> @@ -0,0 +1,363 @@
> +/*
> + * Copyright © 2017 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include <linux/prime_numbers.h>
> +
> +#include "i915_selftest.h"
> +#include "i915_random.h"
> +
> +static int cpu_set(struct drm_i915_gem_object *obj,
> +                  unsigned long offset,
> +                  u32 v)
> +{
> +       unsigned int needs_clflush;
> +       struct page *page;
> +       typeof(v) *map;
> +       int err;
> +
> +       err = i915_gem_obj_prepare_shmem_write(obj, &needs_clflush);
> +       if (err)
> +               return err;
> +
> +       page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
> +       map = kmap_atomic(page);
> +       if (needs_clflush & CLFLUSH_BEFORE)
> +               clflush(map+offset_in_page(offset) / sizeof(*map));
> +       map[offset_in_page(offset) / sizeof(*map)] = v;
> +       if (needs_clflush & CLFLUSH_AFTER)
> +               clflush(map+offset_in_page(offset) / sizeof(*map));
> +       kunmap_atomic(map);
> +
> +       i915_gem_obj_finish_shmem_access(obj);
> +       return 0;
> +}
> +
> +static int cpu_get(struct drm_i915_gem_object *obj,
> +                  unsigned long offset,
> +                  u32 *v)
> +{
> +       unsigned int needs_clflush;
> +       struct page *page;
> +       typeof(v) map;
> +       int err;
> +
> +       err = i915_gem_obj_prepare_shmem_read(obj, &needs_clflush);
> +       if (err)
> +               return err;
> +
> +       page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
> +       map = kmap_atomic(page);
> +       if (needs_clflush & CLFLUSH_BEFORE)
> +               clflush(map+offset_in_page(offset) / sizeof(*map));
> +       *v = map[offset_in_page(offset) / sizeof(*map)];
> +       kunmap_atomic(map);
> +
> +       i915_gem_obj_finish_shmem_access(obj);
> +       return 0;
> +}
> +
> +static int gtt_set(struct drm_i915_gem_object *obj,
> +                  unsigned long offset,
> +                  u32 v)
> +{
> +       struct i915_vma *vma;
> +       typeof(v) *map;
> +       int err;
> +
> +       err = i915_gem_object_set_to_gtt_domain(obj, true);
> +       if (err)
> +               return err;
> +
> +       vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
> +       if (IS_ERR(vma))
> +               return PTR_ERR(vma);
> +
> +       map = i915_vma_pin_iomap(vma);
> +       i915_vma_unpin(vma);
> +       if (IS_ERR(map))
> +               return PTR_ERR(map);
> +
> +       map[offset / sizeof(*map)] = v;
> +       i915_vma_unpin_iomap(vma);
> +
> +       return 0;
> +}
> +
> +static int gtt_get(struct drm_i915_gem_object *obj,
> +                  unsigned long offset,
> +                  u32 *v)
> +{
> +       struct i915_vma *vma;
> +       typeof(v) map;
> +       int err;
> +
> +       err = i915_gem_object_set_to_gtt_domain(obj, false);
> +       if (err)
> +               return err;
> +
> +       vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
> +       if (IS_ERR(vma))
> +               return PTR_ERR(vma);
> +
> +       map = i915_vma_pin_iomap(vma);
> +       i915_vma_unpin(vma);
> +       if (IS_ERR(map))
> +               return PTR_ERR(map);
> +
> +       *v = map[offset / sizeof(*map)];
> +       i915_vma_unpin_iomap(vma);
> +
> +       return 0;
> +}
> +
> +static int wc_set(struct drm_i915_gem_object *obj,
> +                 unsigned long offset,
> +                 u32 v)
> +{
> +       typeof(v) *map;
> +       int err;
> +
> +       /* XXX GTT write followed by WC write go missing */
> +       i915_gem_object_flush_gtt_write_domain(obj);
> +
> +       err = i915_gem_object_set_to_gtt_domain(obj, true);
> +       if (err)
> +               return err;
> +
> +       map = i915_gem_object_pin_map(obj, I915_MAP_WC);
> +       if (IS_ERR(map))
> +               return PTR_ERR(map);
> +
> +       map[offset / sizeof(*map)] = v;
> +       i915_gem_object_unpin_map(obj);
> +
> +       return 0;
> +}
> +
> +static int wc_get(struct drm_i915_gem_object *obj,
> +                 unsigned long offset,
> +                 u32 *v)
> +{
> +       typeof(v) map;
> +       int err;
> +
> +       /* XXX WC write followed by GTT write go missing */
> +       i915_gem_object_flush_gtt_write_domain(obj);
> +
> +       err = i915_gem_object_set_to_gtt_domain(obj, false);
> +       if (err)
> +               return err;
> +
> +       map = i915_gem_object_pin_map(obj, I915_MAP_WC);
> +       if (IS_ERR(map))
> +               return PTR_ERR(map);
> +
> +       *v = map[offset / sizeof(*map)];
> +       i915_gem_object_unpin_map(obj);
> +
> +       return 0;
> +}
> +
> +static int gpu_set(struct drm_i915_gem_object *obj,
> +                  unsigned long offset,
> +                  u32 v)
> +{
> +       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +       struct drm_i915_gem_request *rq;
> +       struct i915_vma *vma;
> +       int err;
> +
> +       err = i915_gem_object_set_to_gtt_domain(obj, true);
> +       if (err)
> +               return err;
> +
> +       vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
> +       if (IS_ERR(vma))
> +               return PTR_ERR(vma);
> +
> +       rq = i915_gem_request_alloc(i915->engine[RCS], i915->kernel_context);
> +       if (IS_ERR(rq)) {
> +               i915_vma_unpin(vma);
> +               return PTR_ERR(rq);
> +       }
> +
> +       err = intel_ring_begin(rq, 4);
> +       if (err) {
> +               __i915_add_request(rq, false);
> +               i915_vma_unpin(vma);
> +               return err;
> +       }
> +
> +       if (INTEL_GEN(i915) >= 8) {
> +               intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM_GEN4 | 1 << 22);
> +               intel_ring_emit(rq->ring, lower_32_bits(i915_ggtt_offset(vma) + offset));
> +               intel_ring_emit(rq->ring, upper_32_bits(i915_ggtt_offset(vma) + offset));
> +               intel_ring_emit(rq->ring, v);
> +       } else if (INTEL_GEN(i915) >= 4) {
> +               intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM_GEN4 | 1 << 22);
> +               intel_ring_emit(rq->ring, 0);
> +               intel_ring_emit(rq->ring, i915_ggtt_offset(vma) + offset);
> +               intel_ring_emit(rq->ring, v);
> +       } else {
> +               intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM | 1 << 22);
> +               intel_ring_emit(rq->ring, i915_ggtt_offset(vma) + offset);
> +               intel_ring_emit(rq->ring, v);
> +               intel_ring_emit(rq->ring, MI_NOOP);
> +       }
> +       intel_ring_advance(rq->ring);
> +
> +       i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
> +       i915_vma_unpin(vma);
> +
> +       reservation_object_lock(obj->resv, NULL);
> +       reservation_object_add_excl_fence(obj->resv, &rq->fence);
> +       reservation_object_unlock(obj->resv);
> +
> +       __i915_add_request(rq, true);
> +
> +       return 0;
> +}
> +
> +static const struct igt_coherency_mode {
> +       const char *name;
> +       int (*set)(struct drm_i915_gem_object *, unsigned long offset, u32 v);
> +       int (*get)(struct drm_i915_gem_object *, unsigned long offset, u32 *v);
> +} igt_coherency_mode[] = {
> +       { "cpu", cpu_set, cpu_get },
> +       { "gtt", gtt_set, gtt_get },
> +       { "wc", wc_set, wc_get },
> +       { "gpu", gpu_set, NULL },
> +       { },
> +};
> +
> +static int igt_gem_coherency(void *arg)
> +{
> +       const unsigned int ncachelines = PAGE_SIZE/64;
> +       I915_RND_STATE(prng);
> +       struct drm_i915_private *i915 = arg;
> +       const struct igt_coherency_mode *read, *write, *over;
> +       struct drm_i915_gem_object *obj;
> +       unsigned long count, n;
> +       u32 *offsets, *values;
> +       int err;
> +
> +       /* We repeatedly write, overwrite and read from a sequence of
> +        * cachelines in order to try and detect incoherency (unflushed writes
> +        * from either the CPU or GPU). Each setter/getter uses our cache
> +        * domain API which should prevent incoherency.
> +        */
> +
> +       offsets = kmalloc_array(ncachelines, 2*sizeof(u32), GFP_KERNEL);
> +       if (!offsets)
> +               return -ENOMEM;
> +       for (count = 0; count < ncachelines; count++)
> +               offsets[count] = count * 64 + 4 * (count % 16);
> +
> +       values = offsets + ncachelines;
> +
> +       mutex_lock(&i915->drm.struct_mutex);
> +       for (over = igt_coherency_mode; over->name; over++) {
> +               if (!over->set)
> +                       continue;
> +
> +               for (write = igt_coherency_mode; write->name; write++) {
> +                       if (!write->set)
> +                               continue;
> +
> +                       for (read = igt_coherency_mode; read->name; read++) {
> +                               if (!read->get)
> +                                       continue;
> +
> +                               for_each_prime_number_from(count, 1, ncachelines) {
> +                                       obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +                                       if (IS_ERR(obj)) {
> +                                               err = PTR_ERR(obj);
> +                                               goto unlock;
You forgot about the if (obj) check...

> +                                       }
> +
> +                                       i915_random_reorder(offsets, ncachelines, &prng);
> +                                       for (n = 0; n < count; n++)
> +                                               values[n] = prandom_u32_state(&prng);
> +
> +                                       for (n = 0; n < count; n++) {
> +                                               err = over->set(obj, offsets[n], ~values[n]);
> +                                               if (err) {
> +                                                       pr_err("Failed to set stale value[%ld/%ld] in object using %s, err=%d\n",
> +                                                              n, count, over->name, err);
> +                                                       goto unlock;
> +                                               }
> +                                       }
> +
> +                                       for (n = 0; n < count; n++) {
> +                                               err = write->set(obj, offsets[n], values[n]);
> +                                               if (err) {
> +                                                       pr_err("Failed to set value[%ld/%ld] in object using %s, err=%d\n",
> +                                                              n, count, write->name, err);
> +                                                       goto unlock;
> +                                               }
> +                                       }
> +
> +                                       for (n = 0; n < count; n++) {
> +                                               u32 found;
> +
> +                                               err = read->get(obj, offsets[n], &found);
> +                                               if (err) {
> +                                                       pr_err("Failed to get value[%ld/%ld] in object using %s, err=%d\n",
> +                                                              n, count, read->name, err);
> +                                                       goto unlock;
> +                                               }
> +
> +                                               if (found != values[n]) {
> +                                                       pr_err("Value[%ld/%ld] mismatch, (overwrite with %s) wrote [%s] %x read [%s] %x (inverse %x), at offset %x\n",
> +                                                              n, count, over->name,
> +                                                              write->name, values[n],
> +                                                              read->name, found,
> +                                                              ~values[n], offsets[n]);
> +                                                       err = -EINVAL;
> +                                                       goto unlock;
> +                                               }
> +                                       }
> +
> +                                       __i915_gem_object_release_unless_active(obj);
> +                                       obj = NULL;
> +                               }
> +                       }
> +               }
> +       }
> +unlock:
> +       if (obj)
> +               __i915_gem_object_release_unless_active(obj);
> +       mutex_unlock(&i915->drm.struct_mutex);
> +       kfree(offsets);
> +       return err;
> +}
> +
> +int i915_gem_coherency_live_selftests(struct drm_i915_private *i915)
> +{
> +       static const struct i915_subtest tests[] = {
> +               SUBTEST(igt_gem_coherency),
> +       };
> +
> +       return i915_subtests(tests, i915);
> +}
> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> index 1822ac99d577..fde9ef22cfe8 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> @@ -11,3 +11,4 @@
>  selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
>  selftest(requests, i915_gem_request_live_selftests)
>  selftest(object, i915_gem_object_live_selftests)
> +selftest(coherency, i915_gem_coherency_live_selftests)
> --
> 2.11.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 10/38] drm/i915: Create a fake object for testing huge allocations
  2017-01-19 11:41 ` [PATCH v2 10/38] drm/i915: Create a fake object for testing huge allocations Chris Wilson
@ 2017-01-19 13:09   ` Matthew Auld
  0 siblings, 0 replies; 73+ messages in thread
From: Matthew Auld @ 2017-01-19 13:09 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 19 January 2017 at 11:41, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> We would like to be able to exercise huge allocations even on memory
> constrained devices. To do this we create an object that allocates only
> a few pages and remaps them across its whole range - each page is reused
> multiple times. We can therefore pretend we are rendering into a much
> larger object.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* ✗ Fi.CI.BAT: failure for series starting with [v2,01/38] drm: Provide a driver hook for drm_dev_release()
  2017-01-19 11:41 More selftests Chris Wilson
                   ` (37 preceding siblings ...)
  2017-01-19 11:41 ` [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
@ 2017-01-19 13:54 ` Patchwork
  38 siblings, 0 replies; 73+ messages in thread
From: Patchwork @ 2017-01-19 13:54 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v2,01/38] drm: Provide a driver hook for drm_dev_release()
URL   : https://patchwork.freedesktop.org/series/18227/
State : failure

== Summary ==

Series 18227v1 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/18227/revisions/1/mbox/

Test gem_busy:
        Subgroup basic-busy-default:
                pass       -> FAIL       (fi-ivb-3520m)
Test kms_pipe_crc_basic:
        Subgroup nonblocking-crc-pipe-b:
                pass       -> DMESG-WARN (fi-snb-2520m)
        Subgroup suspend-read-crc-pipe-b:
                pass       -> FAIL       (fi-skl-6700k)

fi-bdw-5557u     total:246  pass:232  dwarn:0   dfail:0   fail:0   skip:14 
fi-bsw-n3050     total:246  pass:207  dwarn:0   dfail:0   fail:0   skip:39 
fi-bxt-j4205     total:246  pass:224  dwarn:0   dfail:0   fail:0   skip:22 
fi-bxt-t5700     total:79   pass:66   dwarn:0   dfail:0   fail:0   skip:12 
fi-byt-j1900     total:246  pass:218  dwarn:1   dfail:0   fail:0   skip:27 
fi-byt-n2820     total:246  pass:215  dwarn:0   dfail:0   fail:0   skip:31 
fi-hsw-4770      total:246  pass:227  dwarn:0   dfail:0   fail:0   skip:19 
fi-hsw-4770r     total:246  pass:227  dwarn:0   dfail:0   fail:0   skip:19 
fi-ivb-3520m     total:246  pass:223  dwarn:1   dfail:0   fail:1   skip:21 
fi-ivb-3770      total:246  pass:224  dwarn:1   dfail:0   fail:0   skip:21 
fi-kbl-7500u     total:246  pass:225  dwarn:0   dfail:0   fail:0   skip:21 
fi-skl-6260u     total:246  pass:233  dwarn:0   dfail:0   fail:0   skip:13 
fi-skl-6700hq    total:246  pass:226  dwarn:0   dfail:0   fail:0   skip:20 
fi-skl-6700k     total:246  pass:221  dwarn:3   dfail:0   fail:1   skip:21 
fi-skl-6770hq    total:246  pass:233  dwarn:0   dfail:0   fail:0   skip:13 
fi-snb-2520m     total:246  pass:213  dwarn:2   dfail:0   fail:0   skip:31 
fi-snb-2600      total:246  pass:213  dwarn:1   dfail:0   fail:0   skip:32 

758aa09aa53d8eaa2040b999197639f7c97eddb1 drm-tip: 2017y-01m-19d-11h-12m-46s UTC integration manifest
932ffea drm/i915: Add initial selftests for hang detection and resets
6b0dbb9 drm/i915: Add mock exercise for i915_gem_gtt_insert
2f03794 drm/i915: Add mock exercise for i915_gem_gtt_reserve
1268837 drm/i915: Initial selftests for exercising eviction
514327b drm/i915: Live testing for context execution
12200cf drm/i915: Test creation of partial VMA
0d8bc5a drm/i915: Verify page layout for rotated VMA
b4c07f6 drm/i915: Exercise i915_vma_pin/i915_vma_insert
c26daee drm/i915: Test creation of VMA
a685a00 drm/i915: Exercise filling and removing random ranges from the live GTT
ede562e drm/i915: Fill different pages of the GTT
745e80a drm/i915: Exercise filling the top/bottom portions of the global GTT
72f37e6c drm/i915: Exercise filling the top/bottom portions of the ppgtt
0034112 drm/i915: Add initial selftests for i915_gem_gtt
0a045b0 drm/i915: Add some mock tests for dmabuf interop
998ad9f drm/i915: Sanity check all registers for matching fw domains
a444625 drm/i915: Test all fw tables during mock selftests
33dce9b drm/i915: Move uncore selfchecks to live selftest infrastructure
6dee4fa drm/i915: Test coherency of and barriers between cache domains
6877417 drm/i915: Test exhaustion of the mmap space
9907e1f drm/i915: Test partial mappings
128d38a drm/i915: Add a live seftest for GEM objects
b93e744 drm/i915: Add selftests for object allocation, phys
d70c6f7 drm/i915: Test simultaneously submitting requests to all engines
334a64e drm/i915: Simple selftest to exercise live requests
0cffe94 drm/i915: Add a simple fence selftest to i915_gem_request
3c0c30e drm/i915: Add a simple request selftest for waiting
f39ec82 drm/i915: Add selftests for i915_gem_request
63f4bd6 drm/i915: Create a fake object for testing huge allocations
3aae3d9 drm/i915: Mock infrastructure for request emission
e60fc78 drm/i915: Mock a GGTT for self-testing
0612ba9 drm/i915: Mock the GEM device for self-testing
c174156 drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
186b8bf drm/i915: Add unit tests for the breadcrumb rbtree, completion
4425bd9 drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove
25586d4 drm/i915: Add some selftests for sg_table manipulation
1c0dd99 drm/i915: Provide a hook for selftests
61e1d18 drm: Provide a driver hook for drm_dev_release()

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_3546/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 29/38] drm/i915: Exercise filling and removing random ranges from the live GTT
  2017-01-19 11:41 ` [PATCH v2 29/38] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
@ 2017-01-20 10:39   ` Matthew Auld
  0 siblings, 0 replies; 73+ messages in thread
From: Matthew Auld @ 2017-01-20 10:39 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 19 January 2017 at 11:41, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Test the low-level i915_address_space interfaces to sanity check the
> live insertion/removal of address ranges.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 151 ++++++++++++++++++++++++++
>  1 file changed, 151 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> index 81aa2abddb68..28915e4225e3 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> @@ -25,6 +25,7 @@
>  #include <linux/prime_numbers.h>
>
>  #include "i915_selftest.h"
> +#include "i915_random.h"
>  #include "mock_drm.h"
>  #include "huge_gem_object.h"
>
> @@ -280,6 +281,86 @@ static int walk_hole(struct drm_i915_private *i915,
>         return err;
>  }
>
> +static int fill_random_hole(struct drm_i915_private *i915,
> +                           struct i915_address_space *vm,
> +                           u64 hole_start, u64 hole_end,
> +                           unsigned long end_time)
Slightly confusing name imo, the hole is not random, but rather how we
choose to fill it.

> +{
> +       I915_RND_STATE(seed_prng);
> +       unsigned int size;
> +
> +       /* Keep creating larger objects until one cannot fit into the hole */
> +       for (size = 12; (hole_end - hole_start) >> size; size++) {
> +               I915_RND_SUBSTATE(prng, seed_prng);
> +               struct drm_i915_gem_object *obj;
> +               unsigned int *order, count, n;
> +               u64 hole_size;
> +
> +               hole_size = (hole_end - hole_start) >> size;
> +               if (hole_size > KMALLOC_MAX_SIZE / sizeof(u32))
> +                       hole_size = KMALLOC_MAX_SIZE / sizeof(u32);
> +               count = hole_size;
> +               do {
> +                       count >>= 1;
> +                       order = i915_random_order(count, &prng);
> +               } while (!order && count);
> +               if (!order)
> +                       break;
> +
> +               /* Ignore allocation failures (i.e. don't report them as
> +                * a test failure) as we are purposefully allocating very
> +                * larger objects without checking that we have sufficient
> +                * memory. We expect to hit ENOMEM.
> +                */
> +
> +               obj = huge_gem_object(i915, PAGE_SIZE, BIT_ULL(size));
> +               if (IS_ERR(obj)) {
> +                       kfree(order);
> +                       break;
> +               }
> +
> +               GEM_BUG_ON(obj->base.size != BIT_ULL(size));
> +
> +               if (i915_gem_object_pin_pages(obj)) {
> +                       i915_gem_object_put(obj);
> +                       kfree(order);
> +                       break;
> +               }
> +
> +               for (n = 0; n < count; n++) {
> +                       if (vm->allocate_va_range &&
> +                           vm->allocate_va_range(vm,
> +                                                 order[n] * BIT_ULL(size),
> +                                                 BIT_ULL(size)))
> +                               break;
> +
> +                       vm->insert_entries(vm, obj->mm.pages,
> +                                          order[n] * BIT_ULL(size),
> +                                          I915_CACHE_NONE, 0);
> +                       if (igt_timeout(end_time,
> +                                       "%s timed out after %d/%d\n",
> +                                       __func__, n, count)) {
> +                               hole_start = hole_end; /* quit */
> +                               break;
> +                       }
> +               }
> +               count = n;
> +
> +               i915_random_reorder(order, count, &prng);
> +               for (n = 0; n < count; n++)
> +                       vm->clear_range(vm,
> +                                       order[n]* BIT_ULL(size),
> +                                       BIT_ULL(size));
> +
> +               i915_gem_object_unpin_pages(obj);
> +               i915_gem_object_put(obj);
> +
> +               kfree(order);
> +       }
> +
> +       return 0;
> +}
> +
>  static int igt_ppgtt_fill(void *arg)
>  {
>         struct drm_i915_private *dev_priv = arg;
> @@ -352,6 +433,44 @@ static int igt_ppgtt_walk(void *arg)
>         return err;
>  }
>
> +static int igt_ppgtt_drunk(void *arg)
> +{
> +       struct drm_i915_private *dev_priv = arg;
> +       struct drm_file *file;
> +       struct i915_hw_ppgtt *ppgtt;
> +       IGT_TIMEOUT(end_time);
> +       int err;
> +
> +       /* Try binding many VMA in a random pattern within the ppgtt */
> +
> +       if (!USES_FULL_PPGTT(dev_priv))
> +               return 0;
> +
> +       file = mock_file(dev_priv);
> +       if (IS_ERR(file))
> +               return PTR_ERR(file);
> +
> +       mutex_lock(&dev_priv->drm.struct_mutex);
> +       ppgtt = i915_ppgtt_create(dev_priv, file->driver_priv, "mock");
> +       if (IS_ERR(ppgtt)) {
> +               err = PTR_ERR(ppgtt);
> +               goto err_unlock;
> +       }
> +       GEM_BUG_ON(offset_in_page(ppgtt->base.total));
> +
> +       err = fill_random_hole(dev_priv, &ppgtt->base,
> +                              0, ppgtt->base.total,
> +                              end_time);
> +
> +       i915_ppgtt_close(&ppgtt->base);
> +       i915_ppgtt_put(ppgtt);
> +err_unlock:
> +       mutex_unlock(&dev_priv->drm.struct_mutex);
> +
> +       mock_file_free(dev_priv, file);
> +       return err;
> +}
> +
>  static int igt_ggtt_fill(void *arg)
>  {
>         struct drm_i915_private *i915 = arg;
> @@ -412,12 +531,44 @@ static int igt_ggtt_walk(void *arg)
>         return err;
>  }
>
> +static int igt_ggtt_drunk(void *arg)
> +{
> +       struct drm_i915_private *i915 = arg;
> +       struct i915_ggtt *ggtt = &i915->ggtt;
> +       u64 hole_start, hole_end;
> +       struct drm_mm_node *node;
> +       IGT_TIMEOUT(end_time);
> +       int err;
> +
> +       /* Try binding many VMA in a random pattern within the ggtt */
> +
> +       mutex_lock(&i915->drm.struct_mutex);
> +       drm_mm_for_each_hole(node, &ggtt->base.mm, hole_start, hole_end) {
> +               if (ggtt->base.mm.color_adjust)
> +                       ggtt->base. mm.color_adjust(node, 0,
> +                                                   &hole_start, &hole_end);
Odd looking space.

Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release()
  2017-01-19 11:41 ` [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release() Chris Wilson
@ 2017-01-25 11:12   ` Joonas Lahtinen
  2017-01-25 11:16     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-25 11:12 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Some state is coupled into the device lifetime outside of the
> load/unload timeframe and requires teardown during final unreference
> from drm_dev_release(). For example, dmabufs hold both a device and
> module reference and may live longer than expected (i.e. the current
> pattern of the driver tearing down its state and then releasing a
> reference to the drm device) and yet touch driver private state when
> destroyed.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

I'm pretty sure I sent this once already, but here goes;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release()
  2017-01-25 11:12   ` Joonas Lahtinen
@ 2017-01-25 11:16     ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-01-25 11:16 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Wed, Jan 25, 2017 at 01:12:21PM +0200, Joonas Lahtinen wrote:
> On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> > Some state is coupled into the device lifetime outside of the
> > load/unload timeframe and requires teardown during final unreference
> > from drm_dev_release(). For example, dmabufs hold both a device and
> > module reference and may live longer than expected (i.e. the current
> > pattern of the driver tearing down its state and then releasing a
> > reference to the drm device) and yet touch driver private state when
> > destroyed.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> I'm pretty sure I sent this once already, but here goes;

This is the old one now, as it had to undergo some surgery to make
Laurent happy.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 02/38] drm/i915: Provide a hook for selftests
  2017-01-19 11:41 ` [PATCH v2 02/38] drm/i915: Provide a hook for selftests Chris Wilson
@ 2017-01-25 11:50   ` Joonas Lahtinen
  2017-02-01 13:57     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-25 11:50 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Some pieces of code are independent of hardware but are very tricky to
> exercise through the normal userspace ABI or via debugfs hooks. Being
> able to create mock unit tests and execute them through CI is vital.
> Start by adding a central point where we can execute unit tests and
> a parameter to enable them. This is disabled by default as the
> expectation is that these tests will occasionally explode.
> 
> To facilitate integration with igt, any parameter beginning with
> i915.igt__ is interpreted as a subtest executable independently via
> igt/drv_selftest.
> 
> Two classes of selftests are recognised: mock unit tests and integration
> tests. Mock unit tests are run as soon as the module is loaded, before
> the device is probed. At that point there is no driver instantiated and
> all hw interactions must be "mocked". This is very useful for writing
> universal tests to exercise code not typically run on a broad range of
> architectures. Alternatively, you can hook into the live selftests and
> run when the device has been instantiated - hw interactions are real.
> 
> v2: Add a macro for compiling conditional code for mock objects inside
> real objects.
> v3: Differentiate between mock unit tests and late integration test.
> v4: List the tests in natural order, use igt to sort after modparam.
> v5: s/late/live/
> v6: s/unsigned long/unsigned int/
> v7: Use igt_ prefixes for long helpers.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1

<SNIP>

> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -3,6 +3,7 @@
>  # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
>  
>  subdir-ccflags-$(CONFIG_DRM_I915_WERROR) := -Werror
> +subdir-ccflags-$(CONFIG_DRM_I915_SELFTEST) += -I$(src) -I$(src)/selftests

Similar to drm, add selftests/Makefile, to get rid of this.

> @@ -116,6 +117,9 @@ i915-y += dvo_ch7017.o \
>  
>  # Post-mortem debug and GPU hang state capture
>  i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
> +i915-$(CONFIG_DRM_I915_SELFTEST) += \
> +	selftests/i915_random.o \
> +	selftests/i915_selftest.o
> 

Ditto.

> @@ -499,7 +501,17 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  	if (vga_switcheroo_client_probe_defer(pdev))
>  		return -EPROBE_DEFER;
>  
> -	return i915_driver_load(pdev, ent);
> +	err = i915_driver_load(pdev, ent);
> +	if (err)
> +		return err;
> +
> +	err = i915_live_selftests(pdev);
> +	if (err) {
> +		i915_driver_unload(pci_get_drvdata(pdev));
> +		return err > 0 ? -ENOTTY : err;

What's up with this?
 
>  static void i915_pci_remove(struct pci_dev *pdev)
> @@ -521,6 +533,11 @@ static struct pci_driver i915_pci_driver = {
>  static int __init i915_init(void)
>  {
>  	bool use_kms = true;
> +	int err;
> +
> +	err = i915_mock_selftests();
> +	if (err)
> +		return err > 0 ? 0 : err;

Especially this, is this for skipping the device init completely?

> +int i915_subtests(const char *caller,
> +		  const struct i915_subtest *st,
> +		  unsigned int count,
> +		  void *data);
> +#define i915_subtests(T, data) \
> +	(i915_subtests)(__func__, T, ARRAY_SIZE(T), data)

Argh, why not __i915_subtests like good people do?

> +/* Using the i915_selftest_ prefix becomes a little unwieldy with the helpers.
> + * Instead we use the igt_ shorthand, in reference to the intel-gpu-tools
> + * suite of uabi test cases (which includes a test runner for our selftests).
> + */

I'd ask for an ack from Daniel/Jani on this.

> +static inline u32 i915_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
> +{
> > +	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
> +}

Upstream material. Also I remember this stuff is in DRM too, so I
assume you cleanly copy pasted, and skip this randomization code.

> +++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c

<SNIP>

> +/* Embed the line number into the parameter name so that we can order tests */
> +#define selftest(n, func) selftest_0(n, func, param(n))
> +#define param(n) __PASTE(igt__, __PASTE(__PASTE(__LINE__, __), mock_##n))

Hmm, you could reduce one __PASTE by making the ending __mock_##n?

> +static int run_selftests(const char *name,
> +			 struct selftest *st,
> +			 unsigned int count,
> +			 void *data)
> +{
> +	int err = 0;
> +
> +	while (!i915_selftest.random_seed)
> +		i915_selftest.random_seed = get_random_int();

You know that in theory this might take an eternity. I'm not sure why
zero is not OK after this point?

> +
> +	i915_selftest.timeout_jiffies =
> +		i915_selftest.timeout_ms ?
> +		msecs_to_jiffies_timeout(i915_selftest.timeout_ms) :
> +		MAX_SCHEDULE_TIMEOUT;

You had a default value for the variable too, I guess that's not needed
now, and gets some bytes off .data.

> +
> +	set_default_test_all(st, count);
> +
> +	pr_info("i915: Performing %s selftests with st_random_seed=0x%x st_timeout=%u\n",
> +		name, i915_selftest.random_seed, i915_selftest.timeout_ms);
> +
> +	/* Tests are listed in order in i915_*_selftests.h */
> +	for (; count--; st++) {
> +		if (!st->enabled)
> +			continue;
> +
> +		cond_resched();
> +		if (signal_pending(current))
> +			return -EINTR;
> +
> +		pr_debug("i915: Running %s\n", st->name);

I guess we shouldn't be hardcoding "i915" in strings.

> +		if (data)
> +			err = st->live(data);
> +		else
> +			err = st->mock();

I'd newline here.

> +		if (err == -EINTR && !signal_pending(current))
> +			err = 0;
> +		if (err)
> +			break;
> +	}
> +
> +	if (WARN(err > 0 || err == -ENOTTY,
> +		 "%s returned %d, conflicting with selftest's magic values!\n",
> +		 st->name, err))
> +		err = -1;
> +
> +	rcu_barrier();

Our tests themselves use no RCU, so at least drop a comment here,
internal driver implementation seems to bleed here.

> +	return err;
> +}
> +
> +#define run_selftests(x, data) \
> +	(run_selftests)(#x, x##_selftests, ARRAY_SIZE(x##_selftests), data)

Nooooo....

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 36/38] drm/i915: Add mock exercise for i915_gem_gtt_reserve
  2017-01-19 11:41 ` [PATCH v2 36/38] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
@ 2017-01-25 13:30   ` Joonas Lahtinen
  0 siblings, 0 replies; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-25 13:30 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> i915_gem_gtt_reserve should put the node exactly as requested in the
> GTT, evicting as required.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

I might de-magic 2*I915_GTT_PAGE_SIZE and couple of expect helpers, but
regardless;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 37/38] drm/i915: Add mock exercise for i915_gem_gtt_insert
  2017-01-19 11:41 ` [PATCH v2 37/38] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
@ 2017-01-25 13:31   ` Joonas Lahtinen
  0 siblings, 0 replies; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-25 13:31 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> i915_gem_gtt_insert should allocate from the available free space in the
> GTT, evicting as necessary to create space.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +		list_add(&obj->batch_pool_link, &objects);

Still dislike the obscurity. The least you could do is make a comment
on abusing the variable.

> +
> +	/* And then force evictions */
> +	for (total = 0;
> +	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
> +	     total += 2*I915_GTT_PAGE_SIZE) {
> +		struct i915_vma *vma;

<SNIP>

> +		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
> +					   obj->base.size, 0, obj->cache_level,
> +					   0, i915->ggtt.base.total,
> +					   0);
> +		if (err) {
> +			pr_err("i915_gem_gtt_insert (pass 1) failed at %llu/%llu with err=%d\n",

pass 3?

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 34/38] drm/i915: Live testing for context execution
  2017-01-19 11:41 ` [PATCH v2 34/38] drm/i915: Live testing for context execution Chris Wilson
@ 2017-01-25 14:51   ` Joonas Lahtinen
  0 siblings, 0 replies; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-25 14:51 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Check we can create and execution within a context.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static struct i915_vma *
> +gpu_fill_pages(struct i915_vma *vma, u64 offset, unsigned long count, u32 value)
> +{

It smells like goto err; in this function.

> +}

Other than that, -EMAGIC, too many numbers and weakly named variables,
can't really follow the test.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 30/38] drm/i915: Test creation of VMA
  2017-01-19 11:41 ` [PATCH v2 30/38] drm/i915: Test creation of VMA Chris Wilson
@ 2017-01-31 10:50   ` Joonas Lahtinen
  2017-02-01 14:07     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-31 10:50 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Simple test to exercise creation and lookup of VMA within an object.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static bool assert_vma(struct i915_vma *vma,
> +		       struct drm_i915_gem_object *obj,
> +		       struct i915_gem_context *ctx)
> +{
> +	if (vma->vm != &ctx->ppgtt->base) {
> +		pr_err("VMA created with wrong VM\n");
> +		return false;
> +	}

maybe "bool correct = true;" and list all the errors he VMA has? and
finally return correct;

> +	for_each_prime_number(num_obj, ULONG_MAX - 1) {
> +		for (; no < num_obj; no++) {
> +			obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +			if (IS_ERR(obj))
> +				goto err;
> +
> +			list_add(&obj->batch_pool_link, &objects);

See below;

> +		}
> +
> +		nc = 0;
> +		for_each_prime_number(num_ctx, MAX_CONTEXT_HW_ID) {
> +			for (; nc < num_ctx; nc++) {
> +				ctx = mock_context(i915, "mock");
> +				if (!ctx)
> +					goto err;
> +
> +				list_move(&ctx->link, &contexts);

Why the difference?

> +			}
> +
> +			err = create_vmas(i915, &objects, &contexts);
> +			if (err)
> +				goto err;
> +
> +			if (igt_timeout(end_time,
> +					"%s timed out: after %lu objects\n",
> +					__func__, no))

Maybe also context count, because it's available.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 33/38] drm/i915: Test creation of partial VMA
  2017-01-19 11:41 ` [PATCH v2 33/38] drm/i915: Test creation of partial VMA Chris Wilson
@ 2017-01-31 12:03   ` Joonas Lahtinen
  0 siblings, 0 replies; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-31 12:03 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Mock testing to ensure we can create and lookup partial VMA.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static bool assert_partial(struct drm_i915_gem_object *obj,
> +			   struct i915_vma *vma,
> +			   unsigned long offset,
> +			   unsigned long size)
> +{
> +	struct sgt_iter sgt;

Confusing name, could rather be "sgti" or just "i", or "iter".

> +static int igt_vma_partial(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	const unsigned int npages = 1021; /* prime! */

#define THE_MAGIC_PRIME 1021

> +	for (loop = 0; loop <= 1; loop++) { /* exercise both create/lookup */

I'd like the phase array/variable more. "loop" variable is kinda
confusing easily.

> +		unsigned int count, nvma;
> +

Make a comment here that a whole VMA is also created at the end and it
needs to be accounted. This is why the phase array might be more
readable.

> +		nvma = loop;
> +		for_each_prime_number_from(sz, 1, npages) {
> +			for_each_prime_number_from(offset, 0, npages - sz) {
> +				struct i915_address_space *vm =
> +					&i915->ggtt.base;

Could be out of the loop, too.

<SNIP>

> +
> +		/* Create a mapping for the entire object, just for extra fun */
> +		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);

No helper for this block?

With the variable renamed;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 26/38] drm/i915: Exercise filling the top/bottom portions of the ppgtt
  2017-01-19 11:41 ` [PATCH v2 26/38] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
@ 2017-01-31 12:32   ` Joonas Lahtinen
  0 siblings, 0 replies; 73+ messages in thread
From: Joonas Lahtinen @ 2017-01-31 12:32 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Allocate objects with varying number of pages (which should hopefully
> consist of a mixture of contiguous page chunks and so coalesced sg
> lists) and check that the sg walkers in insert_pages cope.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int fill_hole(struct drm_i915_private *i915,
> +		     struct i915_address_space *vm,
> +		     u64 hole_start, u64 hole_end,
> +		     unsigned long end_time)
> +{
> +	const u64 hole_size = hole_end - hole_start;
> +	struct drm_i915_gem_object *obj;
> +	const unsigned long max_pages =
> +		min_t(u64, 1 << 20, hole_size/2 >> PAGE_SHIFT);

At least make a comment, why this specific number. It's good to know if
something is a hard limit vs. pulled out of thin air.

> +	for_each_prime_number_from(prime, 2, 13) {

SMALL_PRIME_MAX or something similar? Also, what are we targeting with
the selected number, staying below X bytes, N seconds or what?

I think all the tests could be clarified with such comments.

<SNIP>

> +			GEM_BUG_ON(!full_size);

This could be in huge_gem_object too?

> +			obj = huge_gem_object(i915, PAGE_SIZE, full_size);
> +			if (IS_ERR(obj))
> +				break;
> +
> +			list_add(&obj->batch_pool_link, &objects);
> +
> +			/* Align differing sized objects against the edges, and
> +			 * check we don't walk off into the void when binding
> +			 * them into the GTT.
> +			 */
> +			for (p = phases; p->name; p++) {
> +				u64 flags;
> +
> +				flags = p->base;

"offset" and "flags" could be separate variables, just for readability
as this is a test.

> +				list_for_each_entry(obj, &objects, batch_pool_link) {
> +					vma = i915_vma_instance(obj, vm, NULL);
> +					if (IS_ERR(vma))
> +						continue;
> +
> +					err = i915_vma_pin(vma, 0, 0, flags);
> +					if (err) {
> +						pr_err("Fill %s pin failed with err=%d on size=%lu pages (prime=%lu), flags=%llx\n", p->name, err, npages, prime, flags);
> +						goto err;
> +					}
> +
> +					i915_vma_unpin(vma);
> +
> +					flags += p->step;
> +					if (flags < hole_start ||
> +					    flags > hole_end)

This is also why I'd prefer the variables to be separate, you could
check <= and >= .

> +						break;

Make a comment for this block, each previous object is smaller, and
that we rely on the list for ordering.

Even when the lack of comments tried to deceive me, I think I
understood it right;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines
  2017-01-19 11:41 ` [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
@ 2017-02-01  8:03   ` Joonas Lahtinen
  2017-02-01 10:15     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Joonas Lahtinen @ 2017-02-01  8:03 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Use a recursive-batch to busy spin on each to ensure that each is being
> run simultaneously.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> ---
>  drivers/gpu/drm/i915/selftests/i915_gem_request.c | 178 ++++++++++++++++++++++
>  1 file changed, 178 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
> index 19103d87a4c3..fb6f8acc1429 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
> @@ -249,10 +249,188 @@ static int live_nop_request(void *arg)
> >  	return err;
>  }
>  
> +static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
> +{

<SNIP>

> +	if (gen >= 8) {
> +		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
> +		*cmd++ = lower_32_bits(vma->node.start);
> +		*cmd++ = upper_32_bits(vma->node.start);
> +	} else if (gen >= 6) {
> +		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8;
> +		*cmd++ = lower_32_bits(vma->node.start);
> +	} else if (gen >= 4) {
> +		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT;
> +		*cmd++ = lower_32_bits(vma->node.start);
> +	} else {
> +		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT | 1;
> +		*cmd++ = lower_32_bits(vma->node.start);
> +	}

I'm sure this is not first time I see this hunk.

<SNIP>

> +	if (i915->gpu_error.missed_irq_rings) {
> +		pr_err("%s: Missed interrupts on rings %lx\n", __func__,
> +		       i915->gpu_error.missed_irq_rings);
> +		err = -EIO;
> +		goto out_request;
> +	}

Should we have a running missed_irqs counter too? Just wondering.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests
  2017-01-19 11:41 ` [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests Chris Wilson
@ 2017-02-01  8:14   ` Joonas Lahtinen
  2017-02-01 10:31     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Joonas Lahtinen @ 2017-02-01  8:14 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> Just create several batches of requests and expect it to not fall over!
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int live_nop_request(void *arg)
> +{

<SNIP>

> +		for_each_prime_number_from(prime, 1, 8192) {

#define MAGIC_PRIME_2 8192

> +			times[1] = ktime_get_raw();
> +
> +			for (n = 0; n < prime; n++) {
> +				request = i915_gem_request_alloc(engine,
> +								 i915->kernel_context);
> +				if (IS_ERR(request)) {
> +					err = PTR_ERR(request);
> +					goto out_unlock;
> +				}

Emitting even a single MI_NOOP or making a comment that the request
will contain instructions even if we do nothing here, might help a
newcomer.

> +
> +				i915_add_request(request);
> +			}
> +			i915_wait_request(request,
> +					  I915_WAIT_LOCKED,
> +					  MAX_SCHEDULE_TIMEOUT);
> +
> +			times[1] = ktime_sub(ktime_get_raw(), times[1]);
> +			if (prime == 1)
> +				times[0] = times[1];

Having this as an array is just hiding names and gaining nothing, how
about calling them just time_first, time_last?

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines
  2017-02-01  8:03   ` Joonas Lahtinen
@ 2017-02-01 10:15     ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 10:15 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 10:03:07AM +0200, Joonas Lahtinen wrote:
> On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> > +	if (gen >= 8) {
> > +		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
> > +		*cmd++ = lower_32_bits(vma->node.start);
> > +		*cmd++ = upper_32_bits(vma->node.start);
> > +	} else if (gen >= 6) {
> > +		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8;
> > +		*cmd++ = lower_32_bits(vma->node.start);
> > +	} else if (gen >= 4) {
> > +		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT;
> > +		*cmd++ = lower_32_bits(vma->node.start);
> > +	} else {
> > +		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT | 1;
> > +		*cmd++ = lower_32_bits(vma->node.start);
> > +	}
> 
> I'm sure this is not first time I see this hunk.

Of this variant, it is. What's really frustating is that we almost have
the right vfuncs. Tempted to build a dummy request to reuse the current
emitters....

> <SNIP>
> 
> > +	if (i915->gpu_error.missed_irq_rings) {
> > +		pr_err("%s: Missed interrupts on rings %lx\n", __func__,
> > +		       i915->gpu_error.missed_irq_rings);
> > +		err = -EIO;
> > +		goto out_request;
> > +	}
> 
> Should we have a running missed_irqs counter too? Just wondering.

Since then we now have begin_live_test(&t); end_live_test(&t) that does
the hang/missed checking.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests
  2017-02-01  8:14   ` Joonas Lahtinen
@ 2017-02-01 10:31     ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 10:31 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 10:14:28AM +0200, Joonas Lahtinen wrote:
> On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> > Just create several batches of requests and expect it to not fall over!
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> <SNIP>
> 
> > +static int live_nop_request(void *arg)
> > +{
> 
> <SNIP>
> 
> > +		for_each_prime_number_from(prime, 1, 8192) {
> 
> #define MAGIC_PRIME_2 8192

I'm sorry, but that is worse than having a very clear arbitrary number.

> > +
> > +				i915_add_request(request);
> > +			}
> > +			i915_wait_request(request,
> > +					  I915_WAIT_LOCKED,
> > +					  MAX_SCHEDULE_TIMEOUT);
> > +
> > +			times[1] = ktime_sub(ktime_get_raw(), times[1]);
> > +			if (prime == 1)
> > +				times[0] = times[1];
> 
> Having this as an array is just hiding names and gaining nothing, how
> about calling them just time_first, time_last?

time_0, time_N? The array is because I did/am considering that the graph
may be interesting. There should be a plateau at the point the ring is
full - not world shattering, just mildly interesting. Still, probably
want the times for cold submit_1 vs warm submit_1.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation
  2017-01-19 11:41 ` [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
@ 2017-02-01 11:17   ` Tvrtko Ursulin
  2017-02-01 11:34     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Tvrtko Ursulin @ 2017-02-01 11:17 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/01/2017 11:41, Chris Wilson wrote:
> Start exercising the scattergather lists, especially looking at
> iteration after coalescing.
>
> v2: Comment on the peculiarity of table construction (i.e. why this
> sg_table might be interesting).

Just a comment added as request, cool, I can r-b then! Erm no, it is not 
just a comment but a lot of other changes as well...

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c                    |  11 +-
>  .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
>  drivers/gpu/drm/i915/selftests/scatterlist.c       | 331 +++++++++++++++++++++
>  3 files changed, 340 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/selftests/scatterlist.c
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 7c3895230a8a..04edbcaffa25 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2215,17 +2215,17 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>  	mutex_unlock(&obj->mm.lock);
>  }
>
> -static void i915_sg_trim(struct sg_table *orig_st)
> +static bool i915_sg_trim(struct sg_table *orig_st)
>  {
>  	struct sg_table new_st;
>  	struct scatterlist *sg, *new_sg;
>  	unsigned int i;
>
>  	if (orig_st->nents == orig_st->orig_nents)
> -		return;
> +		return false;
>
>  	if (sg_alloc_table(&new_st, orig_st->nents, GFP_KERNEL | __GFP_NOWARN))
> -		return;
> +		return false;
>
>  	new_sg = new_st.sgl;
>  	for_each_sg(orig_st->sgl, sg, orig_st->nents, i) {
> @@ -2238,6 +2238,7 @@ static void i915_sg_trim(struct sg_table *orig_st)
>  	sg_free_table(orig_st);
>
>  	*orig_st = new_st;
> +	return true;
>  }
>
>  static struct sg_table *
> @@ -4937,3 +4938,7 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
>  	sg = i915_gem_object_get_sg(obj, n, &offset);
>  	return sg_dma_address(sg) + (offset << PAGE_SHIFT);
>  }
> +
> +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> +#include "selftests/scatterlist.c"
> +#endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> index 69e97a2ba4a6..5f0bdda42ed8 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> @@ -9,3 +9,4 @@
>   * Tests are executed in order by igt/drv_selftest
>   */
>  selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
> +selftest(scatterlist, scatterlist_mock_selftests)
> diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
> new file mode 100644
> index 000000000000..4000fdd1b7db
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
> @@ -0,0 +1,331 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/prime_numbers.h>
> +#include <linux/random.h>
> +
> +#include "i915_selftest.h"
> +
> +#define PFN_BIAS (1 << 10)
> +
> +struct pfn_table {
> +	struct sg_table st;
> +	unsigned long start, end;
> +};
> +
> +typedef unsigned int (*npages_fn_t)(unsigned long n,
> +				    unsigned long count,

sg table cannot store unsigned long of pages but call it future 
proofing. :)

> +				    struct rnd_state *rnd);
> +
> +static noinline int expect_pfn_sg(struct pfn_table *pt,

Why noinline?

> +				  npages_fn_t npages_fn,
> +				  struct rnd_state *rnd,
> +				  const char *who,
> +				  unsigned long timeout)
> +{
> +	struct scatterlist *sg;
> +	unsigned long pfn, n;
> +
> +	pfn = pt->start;
> +	for_each_sg(pt->st.sgl, sg, pt->st.nents, n) {
> +		struct page *page = sg_page(sg);
> +		unsigned int npages = npages_fn(n, pt->st.nents, rnd);
> +
> +		if (page_to_pfn(page) != pfn) {
> +			pr_err("%s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg)\n",
> +			       who, pfn, page_to_pfn(page));

No __func__ here compared to other messages.

> +			return -EINVAL;
> +		}
> +
> +		if (sg->length != npages * PAGE_SIZE) {
> +			pr_err("%s: %s copied wrong sg length, expected size %lu, found %u (using for_each_sg)\n",
> +			       __func__, who, npages * PAGE_SIZE, sg->length);
> +			return -EINVAL;
> +		}
> +
> +		if (igt_timeout(timeout, "%s timed out\n", who))
> +			return -EINTR;
> +
> +		pfn += npages;
> +	}
> +	if (pfn != pt->end) {
> +		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
> +		       __func__, who, pt->end, pfn);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static noinline int expect_pfn_sg_page_iter(struct pfn_table *pt,
> +					    const char *who,
> +					    unsigned long timeout)
> +{
> +	struct sg_page_iter sgiter;
> +	unsigned long pfn;
> +
> +	pfn = pt->start;
> +	for_each_sg_page(pt->st.sgl, &sgiter, pt->st.nents, 0) {
> +		struct page *page = sg_page_iter_page(&sgiter);
> +
> +		if (page != pfn_to_page(pfn)) {
> +			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg_page)\n",
> +			       __func__, who, pfn, page_to_pfn(page));
> +			return -EINVAL;
> +		}
> +
> +		if (igt_timeout(timeout, "%s timed out\n", who))
> +			return -EINTR;
> +
> +		pfn++;
> +	}
> +	if (pfn != pt->end) {
> +		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
> +		       __func__, who, pt->end, pfn);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static noinline int expect_pfn_sgtiter(struct pfn_table *pt,
> +				       const char *who,
> +				       unsigned long timeout)
> +{
> +	struct sgt_iter sgt;
> +	struct page *page;
> +	unsigned long pfn;
> +
> +	pfn = pt->start;
> +	for_each_sgt_page(page, sgt, &pt->st) {
> +		if (page != pfn_to_page(pfn)) {
> +			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sgt_page)\n",
> +			       __func__, who, pfn, page_to_pfn(page));
> +			return -EINVAL;
> +		}
> +
> +		if (igt_timeout(timeout, "%s timed out\n", who))
> +			return -EINTR;
> +
> +		pfn++;
> +	}
> +	if (pfn != pt->end) {
> +		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
> +		       __func__, who, pt->end, pfn);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int expect_pfn_sgtable(struct pfn_table *pt,
> +			      npages_fn_t npages_fn,
> +			      struct rnd_state *rnd,
> +			      const char *who,
> +			      unsigned long timeout)
> +{
> +	int err;
> +
> +	err = expect_pfn_sg(pt, npages_fn, rnd, who, timeout);
> +	if (err)
> +		return err;
> +
> +	err = expect_pfn_sg_page_iter(pt, who, timeout);
> +	if (err)
> +		return err;
> +
> +	err = expect_pfn_sgtiter(pt, who, timeout);
> +	if (err)
> +		return err;
> +
> +	return 0;
> +}
> +
> +static unsigned int one(unsigned long n,
> +			unsigned long count,
> +			struct rnd_state *rnd)
> +{
> +	return 1;
> +}
> +
> +static unsigned int grow(unsigned long n,
> +			 unsigned long count,
> +			 struct rnd_state *rnd)
> +{
> +	return n + 1;
> +}
> +
> +static unsigned int shrink(unsigned long n,
> +			   unsigned long count,
> +			   struct rnd_state *rnd)
> +{
> +	return count - n;
> +}
> +
> +static unsigned int random(unsigned long n,
> +			   unsigned long count,
> +			   struct rnd_state *rnd)
> +{
> +	return 1 + (prandom_u32_state(rnd) % 1024);
> +}
> +
> +static bool alloc_table(struct pfn_table *pt,
> +			unsigned long count, unsigned long max,
> +			npages_fn_t npages_fn,
> +			struct rnd_state *rnd)
> +{
> +	struct scatterlist *sg;
> +	unsigned long n, pfn;
> +
> +	if (sg_alloc_table(&pt->st, max,
> +			   GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN))
> +		return false;
> +
> +	/* count should be less than 20 to prevent overflowing sg->length */
> +	GEM_BUG_ON(overflows_type(count * PAGE_SIZE, sg->length));
> +
> +	/* Construct a table where each scatterlist contains different number
> +	 * of entries. The idea is to check that we can iterate the individual
> +	 * pages from inside the coalesced lists.
> +	 */
> +	pt->start = PFN_BIAS;
> +	pfn = pt->start;
> +	sg = pt->st.sgl;
> +	for (n = 0; n < count; n++) {
> +		unsigned long npages = npages_fn(n, count, rnd);
> +
> +		if (n)
> +			sg = sg_next(sg);
> +		sg_set_page(sg, pfn_to_page(pfn), npages * PAGE_SIZE, 0);
> +
> +		GEM_BUG_ON(page_to_pfn(sg_page(sg)) != pfn);
> +		GEM_BUG_ON(sg->length != npages * PAGE_SIZE);
> +		GEM_BUG_ON(sg->offset != 0);
> +
> +		pfn += npages;
> +	}
> +	sg_mark_end(sg);
> +	pt->st.nents = n;
> +	pt->end = pfn;
> +
> +	return true;
> +}
> +
> +static const npages_fn_t npages_funcs[] = {
> +	one,
> +	grow,
> +	shrink,
> +	random,
> +	NULL,
> +};
> +
> +static int igt_sg_alloc(void *ignored)
> +{
> +	IGT_TIMEOUT(end_time);
> +	const unsigned long max_order = 20; /* approximating a 4GiB object */
> +	struct rnd_state prng;
> +	unsigned long prime;
> +
> +	for_each_prime_number(prime, max_order) {
> +		unsigned long size = BIT(prime);
> +		int offset;
> +
> +		for (offset = -1; offset <= 1; offset++) {
> +			unsigned long sz = size + offset;
> +			const npages_fn_t *npages;
> +			struct pfn_table pt;
> +			int err;
> +
> +			for (npages = npages_funcs; *npages; npages++) {
> +				prandom_seed_state(&prng,
> +						   i915_selftest.random_seed);
> +				if (!alloc_table(&pt, sz, sz, *npages, &prng))
> +					return 0; /* out of memory, give up */

You don't have skip status? Sounds not ideal to silently abort.

> +
> +				prandom_seed_state(&prng,
> +						   i915_selftest.random_seed);
> +				err = expect_pfn_sgtable(&pt, *npages, &prng,
> +							 "sg_alloc_table",
> +							 end_time);

Random numbers you use are guaranteed to be the same sequence after you 
re-set the seed? Probably yes since otherwise this wouldn't have ever 
worked.. I just remember some discussion on what source we use and it 
looked like we might be using proper random numbers on some CPUs, or 
even urandom which I didn't think has that property.

> +				sg_free_table(&pt.st);
> +				if (err)
> +					return err;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int igt_sg_trim(void *ignored)
> +{
> +	IGT_TIMEOUT(end_time);
> +	const unsigned long max = PAGE_SIZE; /* not prime! */
> +	struct pfn_table pt;
> +	unsigned long prime;
> +
> +	for_each_prime_number(prime, max) {
> +		const npages_fn_t *npages;
> +		int err;
> +
> +		for (npages = npages_funcs; *npages; npages++) {
> +			struct rnd_state prng;
> +
> +			prandom_seed_state(&prng, i915_selftest.random_seed);
> +			if (!alloc_table(&pt, prime, max, *npages, &prng))
> +				return 0; /* out of memory, give up */
> +
> +			err = 0;
> +			if (i915_sg_trim(&pt.st)) {
> +				if (pt.st.orig_nents != prime ||
> +				    pt.st.nents != prime) {
> +					pr_err("i915_sg_trim failed (nents %u, orig_nents %u), expected %lu\n",
> +					       pt.st.nents, pt.st.orig_nents, prime);
> +					err = -EINVAL;
> +				} else {
> +					prandom_seed_state(&prng,
> +							   i915_selftest.random_seed);
> +					err = expect_pfn_sgtable(&pt,
> +								 *npages, &prng,
> +								 "i915_sg_trim",
> +								 end_time);
> +				}
> +			}

Similar to alloc_table failures above - no log or action when 
i915_sg_trim fails due out of memory?

> +			sg_free_table(&pt.st);
> +			if (err)
> +				return err;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +int scatterlist_mock_selftests(void)
> +{
> +	static const struct i915_subtest tests[] = {
> +		SUBTEST(igt_sg_alloc),
> +		SUBTEST(igt_sg_trim),
> +	};
> +
> +	return i915_subtests(tests, NULL);
> +}
>

Regards,

Tvrtko

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-01-19 11:41 ` [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
@ 2017-02-01 11:27   ` Tvrtko Ursulin
  2017-02-01 11:43     ` Chris Wilson
  2017-02-01 13:19     ` [PATCH v3] " Chris Wilson
  0 siblings, 2 replies; 73+ messages in thread
From: Tvrtko Ursulin @ 2017-02-01 11:27 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/01/2017 11:41, Chris Wilson wrote:
> Third retroactive test, make sure that the seqno waiters are woken.

No changelog and you haven't added any of the comments I've asked for. I 
really think it is time to add more documentation in the tests. I don't 
feel like reverse engineering what this test does in the future (like I 
had to when initially reviewing it).

For what I can remember, comments for the general state machine, or 
maybe better call it sequence machine, the test uses are required. 
Overall description of the approach it is taking with the ready, set and 
done stages at least.

Then comment on the rationale behind the sleep.

Comment for the dec_and_test after the loop as well.

The wake up strategy as well probably.

Just re-read my old comments. bool stop_thread instead of flags as well 
would be worthwhile change towards making the test easier to understand.

Regards,

Tvrtko

>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 171 +++++++++++++++++++++
>  1 file changed, 171 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> index 245e5f1b8373..fe45c4c7c757 100644
> --- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> +++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> @@ -263,11 +263,182 @@ static int igt_insert_complete(void *arg)
>  	return err;
>  }
>
> +struct igt_wakeup {
> +	struct task_struct *tsk;
> +	atomic_t *ready, *set, *done;
> +	struct intel_engine_cs *engine;
> +	unsigned long flags;
> +	wait_queue_head_t *wq;
> +	u32 seqno;
> +};
> +
> +static int wait_atomic(atomic_t *p)
> +{
> +	schedule();
> +	return 0;
> +}
> +
> +static int wait_atomic_timeout(atomic_t *p)
> +{
> +	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
> +}
> +
> +static int igt_wakeup_thread(void *arg)
> +{
> +	struct igt_wakeup *w = arg;
> +	struct intel_wait wait;
> +
> +	while (!kthread_should_stop()) {
> +		DEFINE_WAIT(ready);
> +
> +		for (;;) {
> +			prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
> +			if (atomic_read(w->ready) == 0)
> +				break;
> +
> +			schedule();
> +		}
> +		finish_wait(w->wq, &ready);
> +		if (atomic_dec_and_test(w->set))
> +			wake_up_atomic_t(w->set);
> +
> +		if (test_bit(0, &w->flags))
> +			break;
> +
> +		intel_wait_init(&wait, w->seqno);
> +		intel_engine_add_wait(w->engine, &wait);
> +		for (;;) {
> +			set_current_state(TASK_UNINTERRUPTIBLE);
> +			if (i915_seqno_passed(intel_engine_get_seqno(w->engine),
> +					      w->seqno))
> +				break;
> +
> +			schedule();
> +		}
> +		intel_engine_remove_wait(w->engine, &wait);
> +		__set_current_state(TASK_RUNNING);
> +
> +		if (atomic_dec_and_test(w->done))
> +			wake_up_atomic_t(w->done);
> +	}
> +
> +	if (atomic_dec_and_test(w->done))
> +		wake_up_atomic_t(w->done);
> +	return 0;
> +}
> +
> +static void igt_wake_all_sync(atomic_t *ready,
> +			      atomic_t *set,
> +			      atomic_t *done,
> +			      wait_queue_head_t *wq,
> +			      int count)
> +{
> +	atomic_set(set, count);
> +	atomic_set(done, count);
> +
> +	atomic_set(ready, 0);
> +	wake_up_all(wq);
> +
> +	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
> +	atomic_set(ready, count);
> +}
> +
> +static int igt_wakeup(void *arg)
> +{
> +	const int state = TASK_UNINTERRUPTIBLE;
> +	struct intel_engine_cs *engine = arg;
> +	struct igt_wakeup *waiters;
> +	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
> +	const int count = 4096;
> +	const u32 max_seqno = count / 4;
> +	atomic_t ready, set, done;
> +	int err = -ENOMEM;
> +	int n, step;
> +
> +	mock_engine_reset(engine);
> +
> +	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
> +	if (!waiters)
> +		goto out_engines;
> +
> +	/* Create a large number of threads, each waiting on a random seqno.
> +	 * Multiple waiters will be waiting for the same seqno.
> +	 */
> +	atomic_set(&ready, count);
> +	for (n = 0; n < count; n++) {
> +		waiters[n].wq = &wq;
> +		waiters[n].ready = &ready;
> +		waiters[n].set = &set;
> +		waiters[n].done = &done;
> +		waiters[n].engine = engine;
> +		waiters[n].flags = 0;
> +
> +		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
> +					     "i915/igt:%d", n);
> +		if (IS_ERR(waiters[n].tsk))
> +			goto out_waiters;
> +
> +		get_task_struct(waiters[n].tsk);
> +	}
> +
> +	for (step = 1; step <= max_seqno; step <<= 1) {
> +		u32 seqno;
> +
> +		for (n = 0; n < count; n++)
> +			waiters[n].seqno = 1 + get_random_int() % max_seqno;
> +
> +		mock_seqno_advance(engine, 0);
> +		igt_wake_all_sync(&ready, &set, &done, &wq, count);
> +
> +		for (seqno = 1; seqno <= max_seqno + step; seqno += step) {
> +			usleep_range(50, 500);
> +			mock_seqno_advance(engine, seqno);
> +		}
> +		GEM_BUG_ON(intel_engine_get_seqno(engine) < 1 + max_seqno);
> +
> +		err = wait_on_atomic_t(&done, wait_atomic_timeout, state);
> +		if (err) {
> +			pr_err("Timed out waiting for %d remaining waiters\n",
> +			       atomic_read(&done));
> +			break;
> +		}
> +
> +		err = check_rbtree_empty(engine);
> +		if (err)
> +			break;
> +	}
> +
> +out_waiters:
> +	for (n = 0; n < count; n++) {
> +		if (IS_ERR(waiters[n].tsk))
> +			break;
> +
> +		set_bit(0, &waiters[n].flags);
> +	}
> +
> +	igt_wake_all_sync(&ready, &set, &done, &wq, n);
> +	wait_on_atomic_t(&done, wait_atomic, state);
> +
> +	for (n = 0; n < count; n++) {
> +		if (IS_ERR(waiters[n].tsk))
> +			break;
> +
> +		kthread_stop(waiters[n].tsk);
> +		put_task_struct(waiters[n].tsk);
> +	}
> +
> +	drm_free_large(waiters);
> +out_engines:
> +	mock_engine_flush(engine);
> +	return err;
> +}
> +
>  int intel_breadcrumbs_mock_selftests(void)
>  {
>  	static const struct i915_subtest tests[] = {
>  		SUBTEST(igt_random_insert_remove),
>  		SUBTEST(igt_insert_complete),
> +		SUBTEST(igt_wakeup),
>  	};
>  	struct intel_engine_cs *engine;
>  	int err;
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation
  2017-02-01 11:17   ` Tvrtko Ursulin
@ 2017-02-01 11:34     ` Chris Wilson
  2017-02-02 12:41       ` Tvrtko Ursulin
  0 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 11:34 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 11:17:39AM +0000, Tvrtko Ursulin wrote:
> >+static noinline int expect_pfn_sg(struct pfn_table *pt,
> 
> Why noinline?

So they show up in perf individually.

> >+
> >+			for (npages = npages_funcs; *npages; npages++) {
> >+				prandom_seed_state(&prng,
> >+						   i915_selftest.random_seed);
> >+				if (!alloc_table(&pt, sz, sz, *npages, &prng))
> >+					return 0; /* out of memory, give up */
> 
> You don't have skip status? Sounds not ideal to silently abort.

It runs until we use all physical memory, if left to its own devices. It's
not a skip if we have already completed some tests. ENOMEM of the test
setup itself is not what I'm testing for here, the test is for the
iterators.
 
> >+
> >+				prandom_seed_state(&prng,
> >+						   i915_selftest.random_seed);
> >+				err = expect_pfn_sgtable(&pt, *npages, &prng,
> >+							 "sg_alloc_table",
> >+							 end_time);
> 
> Random numbers you use are guaranteed to be the same sequence after
> you re-set the seed? Probably yes since otherwise this wouldn't have
> ever worked.. I just remember some discussion on what source we use
> and it looked like we might be using proper random numbers on some
> CPUs, or even urandom which I didn't think has that property.

It's a completely deterministic prng. get_random_int() is the urandom
equivalent.

> >+static int igt_sg_trim(void *ignored)
> >+{
> >+	IGT_TIMEOUT(end_time);
> >+	const unsigned long max = PAGE_SIZE; /* not prime! */
> >+	struct pfn_table pt;
> >+	unsigned long prime;
> >+
> >+	for_each_prime_number(prime, max) {
> >+		const npages_fn_t *npages;
> >+		int err;
> >+
> >+		for (npages = npages_funcs; *npages; npages++) {
> >+			struct rnd_state prng;
> >+
> >+			prandom_seed_state(&prng, i915_selftest.random_seed);
> >+			if (!alloc_table(&pt, prime, max, *npages, &prng))
> >+				return 0; /* out of memory, give up */
> >+
> >+			err = 0;
> >+			if (i915_sg_trim(&pt.st)) {
> >+				if (pt.st.orig_nents != prime ||
> >+				    pt.st.nents != prime) {
> >+					pr_err("i915_sg_trim failed (nents %u, orig_nents %u), expected %lu\n",
> >+					       pt.st.nents, pt.st.orig_nents, prime);
> >+					err = -EINVAL;
> >+				} else {
> >+					prandom_seed_state(&prng,
> >+							   i915_selftest.random_seed);
> >+					err = expect_pfn_sgtable(&pt,
> >+								 *npages, &prng,
> >+								 "i915_sg_trim",
> >+								 end_time);
> >+				}
> >+			}
> 
> Similar to alloc_table failures above - no log or action when
> i915_sg_trim fails due out of memory?

No, simply because that's an expected and acceptable result. The
question should be whether we always want to check after sg_trim.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-01 11:27   ` Tvrtko Ursulin
@ 2017-02-01 11:43     ` Chris Wilson
  2017-02-01 13:19     ` [PATCH v3] " Chris Wilson
  1 sibling, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 11:43 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 11:27:00AM +0000, Tvrtko Ursulin wrote:
> 
> On 19/01/2017 11:41, Chris Wilson wrote:
> >Third retroactive test, make sure that the seqno waiters are woken.
> 
> No changelog and you haven't added any of the comments I've asked
> for. I really think it is time to add more documentation in the
> tests. I don't feel like reverse engineering what this test does in
> the future (like I had to when initially reviewing it).

No. I commented on your reply which I felt was sufficient for the
simplicity of the test.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets
  2017-01-19 11:41 ` [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
@ 2017-02-01 11:43   ` Mika Kuoppala
  2017-02-01 13:31     ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Mika Kuoppala @ 2017-02-01 11:43 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Check that we can reset the GPU and continue executing from the next
> request.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/intel_hangcheck.c             |   4 +
>  .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
>  drivers/gpu/drm/i915/selftests/intel_hangcheck.c   | 463 +++++++++++++++++++++
>  3 files changed, 468 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/selftests/intel_hangcheck.c
>
> diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
> index f05971f5586f..dce742243ba6 100644
> --- a/drivers/gpu/drm/i915/intel_hangcheck.c
> +++ b/drivers/gpu/drm/i915/intel_hangcheck.c
> @@ -480,3 +480,7 @@ void intel_hangcheck_init(struct drm_i915_private *i915)
>  	INIT_DELAYED_WORK(&i915->gpu_error.hangcheck_work,
>  			  i915_hangcheck_elapsed);
>  }
> +
> +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> +#include "selftests/intel_hangcheck.c"
> +#endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> index 0c925f17b445..e6699c59f244 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> @@ -15,3 +15,4 @@ selftest(object, i915_gem_object_live_selftests)
>  selftest(coherency, i915_gem_coherency_live_selftests)
>  selftest(gtt, i915_gem_gtt_live_selftests)
>  selftest(context, i915_gem_context_live_selftests)
> +selftest(hangcheck, intel_hangcheck_live_selftests)
> diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
> new file mode 100644
> index 000000000000..d306890ba7eb
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
> @@ -0,0 +1,463 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include "i915_selftest.h"
> +
> +struct hang {
> +	struct drm_i915_private *i915;
> +	struct drm_i915_gem_object *hws;
> +	struct drm_i915_gem_object *obj;
> +	u32 *seqno;
> +	u32 *batch;
> +};
> +
> +static int hang_init(struct hang *h, struct drm_i915_private *i915)
> +{
> +	void *vaddr;
> +
> +	memset(h, 0, sizeof(*h));
> +	h->i915 = i915;
> +
> +	h->hws = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +	if (IS_ERR(h->hws))
> +		return PTR_ERR(h->hws);
> +
> +	h->obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +	if (IS_ERR(h->obj)) {
> +		i915_gem_object_put(h->obj);

i915_gem_object_put(h->hws);

> +		return PTR_ERR(h->obj);
> +	}
> +
> +	i915_gem_object_set_cache_level(h->hws, I915_CACHE_LLC);
> +	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
> +	if (IS_ERR(vaddr)) {
> +		i915_gem_object_put(h->hws);
> +		i915_gem_object_put(h->obj);
> +		return PTR_ERR(vaddr);
> +	}
> +	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
> +
> +	vaddr = i915_gem_object_pin_map(h->obj,
> +					HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC);
> +	if (IS_ERR(vaddr)) {
> +		i915_gem_object_unpin_map(h->hws);
> +		i915_gem_object_put(h->hws);
> +		i915_gem_object_put(h->obj);
> +		return PTR_ERR(vaddr);
> +	}
> +	h->batch = vaddr;
> +
> +	return 0;
> +}
> +
> +static u64 hws_address(const struct i915_vma *hws,
> +		       const struct drm_i915_gem_request *rq)
> +{
> +	return hws->node.start + offset_in_page(sizeof(u32)*rq->fence.context);

fence.context is something unique returned by dma_fence_context_alloc()
and we assume we don't collide in the scope of this test?

> +}
> +
> +static int emit_recurse_batch(struct hang *h,
> +			      struct drm_i915_gem_request *rq)
> +{
> +	struct drm_i915_private *i915 = h->i915;
> +	struct i915_address_space *vm = rq->ctx->ppgtt ? &rq->ctx->ppgtt->base : &i915->ggtt.base;
> +	struct i915_vma *hws, *vma;
> +	u32 *batch;
> +	int err;
> +
> +	vma = i915_vma_instance(h->obj, vm, NULL);
> +	if (IS_ERR(vma))
> +		return PTR_ERR(vma);
> +
> +	hws = i915_vma_instance(h->hws, vm, NULL);
> +	if (IS_ERR(hws))
> +		return PTR_ERR(hws);
> +
> +	err = i915_vma_pin(vma, 0, 0, PIN_USER);
> +	if (err)
> +		return err;
> +
> +	err = i915_vma_pin(hws, 0, 0, PIN_USER);
> +	if (err)
> +		goto unpin_vma;
> +
> +	i915_vma_move_to_active(vma, rq, 0);
> +	i915_vma_move_to_active(hws, rq, 0);
> +
> +	batch = h->batch;
> +	if (INTEL_GEN(i915) >= 8) {
> +		*batch++ = MI_STORE_DWORD_IMM_GEN4;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = upper_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
> +		*batch++ = lower_32_bits(vma->node.start);
> +		*batch++ = upper_32_bits(vma->node.start);
> +	} else if (INTEL_GEN(i915) >= 6) {
> +		*batch++ = MI_STORE_DWORD_IMM_GEN4;
> +		*batch++ = 0;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
> +		*batch++ = lower_32_bits(vma->node.start);
> +	} else if (INTEL_GEN(i915) >= 4) {
> +		*batch++ = MI_STORE_DWORD_IMM_GEN4;
> +		*batch++ = 0;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
> +		*batch++ = lower_32_bits(vma->node.start);
> +	} else {
> +		*batch++ = MI_STORE_DWORD_IMM;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 2 << 6 | 1;
> +		*batch++ = lower_32_bits(vma->node.start);
> +	}
> +
> +	err = rq->engine->emit_bb_start(rq, vma->node.start, PAGE_SIZE, 0);
> +
> +	i915_vma_unpin(hws);
> +unpin_vma:
> +	i915_vma_unpin(vma);
> +	return err;
> +}
> +
> +static struct drm_i915_gem_request *
> +hang_create_request(struct hang *h,
> +		    struct intel_engine_cs *engine,
> +		    struct i915_gem_context *ctx)
> +{
> +	struct drm_i915_gem_request *rq;
> +	int err;
> +
> +	if (i915_gem_object_is_active(h->obj)) {
> +		struct drm_i915_gem_object *obj;
> +		void *vaddr;
> +
> +		obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
> +		if (IS_ERR(obj))
> +			return ERR_CAST(obj);
> +
> +		vaddr = i915_gem_object_pin_map(obj,
> +						HAS_LLC(h->i915) ? I915_MAP_WB : I915_MAP_WC);
> +		if (IS_ERR(vaddr)) {
> +			i915_gem_object_put(obj);
> +			return ERR_CAST(vaddr);
> +		}
> +
> +		i915_gem_object_unpin_map(h->obj);
> +		__i915_gem_object_release_unless_active(h->obj);
> +
> +		h->obj = obj;
> +		h->batch = vaddr;

This whole block confuses me. Is it about the reset queue test
if something went wrong with the previous request?

> +	}
> +
> +	rq = i915_gem_request_alloc(engine, ctx);
> +	if (IS_ERR(rq))
> +		return rq;
> +
> +	err = emit_recurse_batch(h, rq);
> +	if (err) {
> +		__i915_add_request(rq, false);
> +		return ERR_PTR(err);
> +	}
> +
> +	return rq;
> +}
> +
> +static u32 hws_seqno(const struct hang *h,
> +		     const struct drm_i915_gem_request *rq)
> +{
> +	return READ_ONCE(h->seqno[rq->fence.context % (PAGE_SIZE/sizeof(u32))]);
> +}
> +
> +static void hang_fini(struct hang *h)
> +{
> +	*h->batch = MI_BATCH_BUFFER_END;
> +
> +	i915_gem_object_unpin_map(h->obj);
> +	__i915_gem_object_release_unless_active(h->obj);
> +
> +	i915_gem_object_unpin_map(h->hws);
> +	__i915_gem_object_release_unless_active(h->hws);
> +}
> +
> +static int igt_hang_sanitycheck(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct drm_i915_gem_request *rq;
> +	struct hang h;
> +	int err;
> +
> +	/* Basic check that we can execute our hanging batch */
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	err = hang_init(&h, i915);
> +	if (err)
> +		goto unlock;
> +
> +	rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto fini;
> +	}
> +
> +	i915_gem_request_get(rq);
> +
> +	*h.batch = MI_BATCH_BUFFER_END;
> +	__i915_add_request(rq, true);
> +
> +	i915_wait_request(rq, I915_WAIT_LOCKED, MAX_SCHEDULE_TIMEOUT);
> +	i915_gem_request_put(rq);
> +
> +fini:
> +	hang_fini(&h);
> +unlock:
> +	mutex_unlock(&i915->drm.struct_mutex);
> +	return err;
> +}
> +
> +static int igt_global_reset(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	unsigned int reset_count;
> +	int err = 0;
> +
> +	/* Check that we can issue a global GPU reset */
> +
> +	if (!intel_has_gpu_reset(i915))
> +		return 0;
> +
> +	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	reset_count = i915_reset_count(&i915->gpu_error);
> +
> +	i915_reset(i915);
> +
> +	if (i915_reset_count(&i915->gpu_error) == reset_count) {
> +		pr_err("No GPU reset recorded!\n");
> +		err = -EINVAL;
> +	}
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
> +	if (i915_terminally_wedged(&i915->gpu_error))
> +		err = -EIO;
> +
> +	return err;
> +}
> +
> +static u32 fake_hangcheck(struct drm_i915_gem_request *rq)
> +{
> +	u32 reset_count;
> +
> +	rq->engine->hangcheck.stalled = true;
> +	rq->engine->hangcheck.seqno = intel_engine_get_seqno(rq->engine);
> +
> +	reset_count = i915_reset_count(&rq->i915->gpu_error);
> +
> +	set_bit(I915_RESET_IN_PROGRESS, &rq->i915->gpu_error.flags);
> +	wake_up_all(&rq->i915->gpu_error.wait_queue);
> +
> +	return reset_count;
> +}
> +
> +static bool wait_for_hang(struct hang *h, struct drm_i915_gem_request *rq)
> +{
> +	return !(wait_for_us(i915_seqno_passed(hws_seqno(h, rq),
> +					       rq->fence.seqno),
> +			     10) &&
> +		 wait_for(i915_seqno_passed(hws_seqno(h, rq),
> +					    rq->fence.seqno),
> +			  1000));
> +}
> +
> +static int igt_wait_reset(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct drm_i915_gem_request *rq;
> +	unsigned int reset_count;
> +	struct hang h;
> +	long timeout;
> +	int err;
> +
> +	/* Check that we detect a stuck waiter and issue a reset */
> +
> +	if (!intel_has_gpu_reset(i915))
> +		return 0;
> +
> +	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);

Noticed that you do this early. In this test the fake_hangcheck
would do it for you also but I suspect you want this to gain
exclusive access after this point?

-Mika

> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	err = hang_init(&h, i915);
> +	if (err)
> +		goto unlock;
> +
> +	rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto fini;
> +	}
> +
> +	i915_gem_request_get(rq);
> +	__i915_add_request(rq, true);
> +
> +	if (!wait_for_hang(&h, rq)) {
> +		pr_err("Failed to start request %x\n", rq->fence.seqno);
> +		err = -EIO;
> +		goto fini;
> +	}
> +
> +	reset_count = fake_hangcheck(rq);
> +
> +	timeout = i915_wait_request(rq, I915_WAIT_LOCKED, 10);
> +	if (timeout < 0) {
> +		pr_err("i915_wait_request failed on a stuck request: err=%ld\n",
> +		       timeout);
> +		err = timeout;
> +		goto fini;
> +	}
> +	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
> +
> +	if (i915_reset_count(&i915->gpu_error) == reset_count) {
> +		pr_err("No GPU reset recorded!\n");
> +		err = -EINVAL;
> +		goto fini;
> +	}
> +
> +fini:
> +	hang_fini(&h);
> +unlock:
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	if (i915_terminally_wedged(&i915->gpu_error))
> +		return -EIO;
> +
> +	return err;
> +}
> +
> +static int igt_reset_queue(void *arg)
> +{
> +	IGT_TIMEOUT(end_time);
> +	struct drm_i915_private *i915 = arg;
> +	struct drm_i915_gem_request *prev;
> +	unsigned int count;
> +	struct hang h;
> +	int err;
> +
> +	/* Check that we replay pending requests following a hang */
> +
> +	if (!intel_has_gpu_reset(i915))
> +		return 0;
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	err = hang_init(&h, i915);
> +	if (err)
> +		goto unlock;
> +
> +	prev = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
> +	if (IS_ERR(prev)) {
> +		err = PTR_ERR(prev);
> +		goto fini;
> +	}
> +
> +	i915_gem_request_get(prev);
> +	__i915_add_request(prev, true);
> +
> +	count = 0;
> +	do {
> +		struct drm_i915_gem_request *rq;
> +		unsigned int reset_count;
> +
> +		rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
> +		if (IS_ERR(rq)) {
> +			err = PTR_ERR(rq);
> +			goto fini;
> +		}
> +
> +		i915_gem_request_get(rq);
> +		__i915_add_request(rq, true);
> +
> +		if (!wait_for_hang(&h, prev)) {
> +			pr_err("Failed to start request %x\n",
> +			       prev->fence.seqno);
> +			err = -EIO;
> +			goto fini;
> +		}
> +
> +		reset_count = fake_hangcheck(prev);
> +
> +		i915_reset(i915);
> +
> +		GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
> +		if (prev->fence.error != -EIO) {
> +			pr_err("GPU reset not recorded on hanging request [fence.error=%d]!\n",
> +			       prev->fence.error);
> +			err = -EINVAL;
> +			goto fini;
> +		}
> +
> +		if (rq->fence.error) {
> +			pr_err("Fence error status not zero [%d] after unrelated reset\n",
> +			       rq->fence.error);
> +			err = -EINVAL;
> +			goto fini;
> +		}
> +
> +		if (i915_reset_count(&i915->gpu_error) == reset_count) {
> +			pr_err("No GPU reset recorded!\n");
> +			err = -EINVAL;
> +			goto fini;
> +		}
> +
> +		i915_gem_request_put(prev);
> +		prev = rq;
> +		count++;
> +	} while (time_before(jiffies, end_time));
> +	pr_info("Completed %d resets\n", count);
> +	i915_gem_request_put(prev);
> +
> +fini:
> +	hang_fini(&h);
> +unlock:
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	if (i915_terminally_wedged(&i915->gpu_error))
> +		return -EIO;
> +
> +	return err;
> +}
> +
> +int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
> +{
> +	static const struct i915_subtest tests[] = {
> +		SUBTEST(igt_hang_sanitycheck),
> +		SUBTEST(igt_global_reset),
> +		SUBTEST(igt_wait_reset),
> +		SUBTEST(igt_reset_queue),
> +	};
> +	return i915_subtests(tests, i915);
> +}
> -- 
> 2.11.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [PATCH v3] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-01 11:27   ` Tvrtko Ursulin
  2017-02-01 11:43     ` Chris Wilson
@ 2017-02-01 13:19     ` Chris Wilson
  2017-02-01 16:57       ` Tvrtko Ursulin
  1 sibling, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 13:19 UTC (permalink / raw)
  To: intel-gfx

Third retroactive test, make sure that the seqno waiters are woken.

v2: Smattering of comments, rearrange code

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 202 +++++++++++++++++++++
 1 file changed, 202 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index 245e5f1b8373..907503901644 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -263,11 +263,213 @@ static int igt_insert_complete(void *arg)
 	return err;
 }
 
+struct igt_wakeup {
+	struct task_struct *tsk;
+	atomic_t *ready, *set, *done;
+	struct intel_engine_cs *engine;
+	unsigned long flags;
+#define STOP 0
+#define WAITING 1
+	wait_queue_head_t *wq;
+	u32 seqno;
+};
+
+static int wait_atomic(atomic_t *p)
+{
+	schedule();
+	return 0;
+}
+
+static int wait_atomic_timeout(atomic_t *p)
+{
+	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
+}
+
+static bool wait_for_ready(struct igt_wakeup *w)
+{
+	DEFINE_WAIT(ready);
+
+	if (atomic_dec_and_test(w->done))
+		wake_up_atomic_t(w->done);
+
+	if (test_bit(STOP, &w->flags))
+		goto out;
+
+	set_bit(WAITING, &w->flags);
+	for (;;) {
+		prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
+		if (atomic_read(w->ready) == 0)
+			break;
+
+		schedule();
+	}
+	finish_wait(w->wq, &ready);
+	clear_bit(WAITING, &w->flags);
+
+out:
+	if (atomic_dec_and_test(w->set))
+		wake_up_atomic_t(w->set);
+
+	return !test_bit(STOP, &w->flags);
+}
+
+static int igt_wakeup_thread(void *arg)
+{
+	struct igt_wakeup *w = arg;
+	struct intel_wait wait;
+
+	while (wait_for_ready(w)) {
+		GEM_BUG_ON(kthread_should_stop());
+
+		intel_wait_init(&wait, w->seqno);
+		intel_engine_add_wait(w->engine, &wait);
+		for (;;) {
+			set_current_state(TASK_UNINTERRUPTIBLE);
+			if (i915_seqno_passed(intel_engine_get_seqno(w->engine),
+					      w->seqno))
+				break;
+
+			if (test_bit(STOP, &w->flags)) /* emergency escape */
+				break;
+
+			schedule();
+		}
+		intel_engine_remove_wait(w->engine, &wait);
+		__set_current_state(TASK_RUNNING);
+	}
+
+	return 0;
+}
+
+static void igt_wake_all_sync(atomic_t *ready,
+			      atomic_t *set,
+			      atomic_t *done,
+			      wait_queue_head_t *wq,
+			      int count)
+{
+	atomic_set(set, count);
+	atomic_set(done, count);
+
+	atomic_set(ready, 0);
+	wake_up_all(wq);
+
+	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
+	atomic_set(ready, count);
+}
+
+static int igt_wakeup(void *arg)
+{
+	I915_RND_STATE(prng);
+	const int state = TASK_UNINTERRUPTIBLE;
+	struct intel_engine_cs *engine = arg;
+	struct igt_wakeup *waiters;
+	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	const int count = 4096;
+	const u32 max_seqno = count / 4;
+	atomic_t ready, set, done;
+	int err = -ENOMEM;
+	int n, step;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	/* Create a large number of threads, each waiting on a random seqno.
+	 * Multiple waiters will be waiting for the same seqno.
+	 */
+	atomic_set(&ready, count);
+	for (n = 0; n < count; n++) {
+		waiters[n].wq = &wq;
+		waiters[n].ready = &ready;
+		waiters[n].set = &set;
+		waiters[n].done = &done;
+		waiters[n].engine = engine;
+		waiters[n].flags = 0;
+
+		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
+					     "i915/igt:%d", n);
+		if (IS_ERR(waiters[n].tsk))
+			goto out_waiters;
+
+		get_task_struct(waiters[n].tsk);
+	}
+
+	for (step = 1; step <= max_seqno; step <<= 1) {
+		u32 seqno;
+
+		/* The waiter threads start paused as we assign them a random
+		 * seqno and reset the engine. Once the engine is reset,
+		 * we signal that the threads may begin their, and we wait
+		 * until all threads are woken.
+		 */
+		for (n = 0; n < count; n++) {
+			GEM_BUG_ON(!test_bit(WAITING, &waiters[n].flags));
+			waiters[n].seqno =
+				1 + prandom_u32_state(&prng) % max_seqno;
+		}
+		mock_seqno_advance(engine, 0);
+		igt_wake_all_sync(&ready, &set, &done, &wq, count);
+
+		/* Simulate GPU doing chunks of work, with one or more seqno
+		 * appearing to finish at the same time. A random number of
+		 * threads will be waiting upon the update and hopefully be
+		 * woken.
+		 */
+		for (seqno = 1; seqno <= max_seqno + step; seqno += step) {
+			usleep_range(50, 500);
+			mock_seqno_advance(engine, seqno);
+		}
+		GEM_BUG_ON(intel_engine_get_seqno(engine) < 1 + max_seqno);
+
+		/* With the seqno now beyond any of the waiting threads, they
+		 * should all be woken, see that they are complete and signal
+		 * that they are ready for the next test. We wait until all
+		 * threads are waiting for us (and not a seqno) again.
+		 */
+		err = wait_on_atomic_t(&done, wait_atomic_timeout, state);
+		if (err) {
+			pr_err("Timed out waiting for %d remaining waiters\n",
+			       atomic_read(&done));
+			break;
+		}
+
+		err = check_rbtree_empty(engine);
+		if (err)
+			break;
+	}
+
+out_waiters:
+	mock_seqno_advance(engine, INT_MAX); /* wakeup any broken waiters */
+	for (n = 0; n < count; n++) {
+		if (IS_ERR(waiters[n].tsk))
+			break;
+
+		set_bit(STOP, &waiters[n].flags);
+	}
+	igt_wake_all_sync(&ready, &set, &done, &wq, n);
+
+	for (n = 0; n < count; n++) {
+		if (IS_ERR(waiters[n].tsk))
+			break;
+
+		kthread_stop(waiters[n].tsk);
+		put_task_struct(waiters[n].tsk);
+	}
+
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
 int intel_breadcrumbs_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_random_insert_remove),
 		SUBTEST(igt_insert_complete),
+		SUBTEST(igt_wakeup),
 	};
 	struct intel_engine_cs *engine;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA
  2017-01-19 11:41 ` [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA Chris Wilson
@ 2017-02-01 13:26   ` Matthew Auld
  2017-02-01 14:33   ` Tvrtko Ursulin
  1 sibling, 0 replies; 73+ messages in thread
From: Matthew Auld @ 2017-02-01 13:26 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 19 January 2017 at 11:41, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Exercise creating rotated VMA and checking the page order within.
>
> v2: Be more creative in rotated params
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/i915_vma.c | 177 ++++++++++++++++++++++++++++++
>  1 file changed, 177 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
> index b45b392444e4..2bda93f53b47 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_vma.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
> @@ -310,11 +310,188 @@ static int igt_vma_pin1(void *arg)
>         return err;
>  }
>
> +static unsigned long rotated_index(const struct intel_rotation_info *r,
> +                                  unsigned int n,
> +                                  unsigned int x,
> +                                  unsigned int y)
> +{
> +       return (r->plane[n].stride * (r->plane[n].height - y - 1) +
> +               r->plane[n].offset + x);
> +}
> +
> +static struct scatterlist *
> +assert_rotated(struct drm_i915_gem_object *obj,
> +              const struct intel_rotation_info *r, unsigned int n,
> +              struct scatterlist *sg)
> +{
> +       unsigned int x, y;
> +
> +       for (x = 0; x < r->plane[n].width; x++) {
> +               for (y = 0; y < r->plane[n].height; y++) {
> +                       unsigned long src_idx;
> +                       dma_addr_t src;
> +
> +                       src_idx = rotated_index(r, n, x, y);
> +                       src = i915_gem_object_get_dma_address(obj, src_idx);
> +
> +                       if (sg_dma_len(sg) != PAGE_SIZE) {
> +                               pr_err("Invalid sg.length, found %d, expected %lu for rotated page (%d, %d) [src index %lu]\n",
> +                                      sg_dma_len(sg), PAGE_SIZE,
> +                                      x, y, src_idx);
> +                               return ERR_PTR(-EINVAL);
> +                       }
> +
> +                       if (sg_dma_address(sg) != src) {
> +                               pr_err("Invalid address for rotated page (%d, %d) [src index %lu]\n",
> +                                      x, y, src_idx);
> +                               return ERR_PTR(-EINVAL);
> +                       }
> +
> +                       sg = ____sg_next(sg);
> +               }
> +       }
> +
> +       return sg;
> +}
> +
> +static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
> +                                const struct intel_rotation_plane_info *b)
> +{
> +       return a->width * a->height + b->width * b->height;
> +}
> +
> +static int igt_vma_rotate(void *arg)
> +{
> +       struct drm_i915_private *i915 = arg;
> +       struct drm_i915_gem_object *obj;
> +       const struct intel_rotation_plane_info planes[] = {
> +               { .width = 1, .height = 1, .stride = 1 },
> +               { .width = 3, .height = 5, .stride = 4 },
> +               { .width = 5, .height = 3, .stride = 7 },
> +               { .width = 6, .height = 4, .stride = 6 },
> +               { }
> +       }, *a, *b;
> +       const unsigned int max_pages = 64;
> +       int err = -ENOMEM;
> +
> +       /* Create VMA for many different combinations of planes and check
> +        * that the page layout within the rotated VMA match our expectations.
> +        */
> +
> +       obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
> +       if (IS_ERR(obj))
> +               goto err;
> +
> +       for (a = planes; a->width; a++) {
> +               for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
> +                       struct i915_ggtt_view view;
> +                       struct scatterlist *sg;
> +                       unsigned int n, max_offset;
> +
> +                       max_offset = max(a->stride * a->height,
> +                                        b->stride * b->height);
> +                       GEM_BUG_ON(max_offset >= max_pages);
> +                       max_offset = max_pages - max_offset;
> +
> +                       view.type = I915_GGTT_VIEW_ROTATED;
> +                       view.rotated.plane[0] = *a;
> +                       view.rotated.plane[1] = *b;
> +
> +                       for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
> +                               for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {
> +                                       struct i915_address_space *vm =
> +                                               &i915->ggtt.base;
> +                                       struct i915_vma *vma;
> +
> +                                       vma = i915_vma_instance(obj, vm, &view);
> +                                       if (IS_ERR(vma)) {
> +                                               err = PTR_ERR(vma);
> +                                               goto err_object;
> +                                       }
> +
> +                                       if (!i915_vma_is_ggtt(vma) ||
> +                                           vma->vm != vm) {
> +                                               pr_err("VMA is not in the GGTT!\n");
> +                                               err = -EINVAL;
> +                                               goto err_object;
> +                                       }
> +
> +                                       if (memcmp(&vma->ggtt_view, &view, sizeof(view))) {
> +                                               pr_err("VMA mismatch upon creation!\n");
> +                                               err = -EINVAL;
> +                                               goto err_object;
> +                                       }
> +
> +                                       if (i915_vma_compare(vma,
> +                                                            vma->vm,
> +                                                            &vma->ggtt_view)) {
> +                                               pr_err("VMA compmare failed with itself\n");
s/compmare/compare/

Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets
  2017-02-01 11:43   ` Mika Kuoppala
@ 2017-02-01 13:31     ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 13:31 UTC (permalink / raw)
  To: Mika Kuoppala; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 01:43:41PM +0200, Mika Kuoppala wrote:
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> > +static u64 hws_address(const struct i915_vma *hws,
> > +		       const struct drm_i915_gem_request *rq)
> > +{
> > +	return hws->node.start + offset_in_page(sizeof(u32)*rq->fence.context);
> 
> fence.context is something unique returned by dma_fence_context_alloc()
> and we assume we don't collide in the scope of this test?

Correct, fence.context is unique to a timeline/engine.
(dma_fence_context_alloc() doesn't prevent collisions, but that is a
topic for another day.)

> > +static struct drm_i915_gem_request *
> > +hang_create_request(struct hang *h,
> > +		    struct intel_engine_cs *engine,
> > +		    struct i915_gem_context *ctx)
> > +{
> > +	struct drm_i915_gem_request *rq;
> > +	int err;
> > +
> > +	if (i915_gem_object_is_active(h->obj)) {
> > +		struct drm_i915_gem_object *obj;
> > +		void *vaddr;
> > +
> > +		obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
> > +		if (IS_ERR(obj))
> > +			return ERR_CAST(obj);
> > +
> > +		vaddr = i915_gem_object_pin_map(obj,
> > +						HAS_LLC(h->i915) ? I915_MAP_WB : I915_MAP_WC);
> > +		if (IS_ERR(vaddr)) {
> > +			i915_gem_object_put(obj);
> > +			return ERR_CAST(vaddr);
> > +		}
> > +
> > +		i915_gem_object_unpin_map(h->obj);
> > +		__i915_gem_object_release_unless_active(h->obj);
> > +
> > +		h->obj = obj;
> > +		h->batch = vaddr;
> 
> This whole block confuses me. Is it about the reset queue test
> if something went wrong with the previous request?

I was trying to write a generic struct hang to handle tests I haven't
thought of yet. In this case, I want to create a new request whilst the
old one is inflight, and so need to replace the pointers with a fresh
bo. It can be more fancy, we could just reuse the same bo until it is
full, but I was trying to get something up and running.

> > +static int igt_wait_reset(void *arg)
> > +{
> > +	struct drm_i915_private *i915 = arg;
> > +	struct drm_i915_gem_request *rq;
> > +	unsigned int reset_count;
> > +	struct hang h;
> > +	long timeout;
> > +	int err;
> > +
> > +	/* Check that we detect a stuck waiter and issue a reset */
> > +
> > +	if (!intel_has_gpu_reset(i915))
> > +		return 0;
> > +
> > +	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
> 
> Noticed that you do this early. In this test the fake_hangcheck
> would do it for you also but I suspect you want this to gain
> exclusive access after this point?

Yes, I was mostly trying to apply a common pattern, claiming exclusivity
was a bonus. The difference is more apparent later.

What is missing to complete this set of tests are per-engine resets.
Hooking those in will be a fun exercise in improving both.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 02/38] drm/i915: Provide a hook for selftests
  2017-01-25 11:50   ` Joonas Lahtinen
@ 2017-02-01 13:57     ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 13:57 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Wed, Jan 25, 2017 at 01:50:01PM +0200, Joonas Lahtinen wrote:
> On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> > @@ -499,7 +501,17 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> >  	if (vga_switcheroo_client_probe_defer(pdev))
> >  		return -EPROBE_DEFER;
> >  
> > -	return i915_driver_load(pdev, ent);
> > +	err = i915_driver_load(pdev, ent);
> > +	if (err)
> > +		return err;
> > +
> > +	err = i915_live_selftests(pdev);
> > +	if (err) {
> > +		i915_driver_unload(pci_get_drvdata(pdev));
> > +		return err > 0 ? -ENOTTY : err;
> 
> What's up with this?

What's up with what? We want to bail from the pci initialisation, so
need to return some error. ENOTTY is chosen as we don't (and I expect
should never) use it from the selftests and the internal routines used.

> >  static void i915_pci_remove(struct pci_dev *pdev)
> > @@ -521,6 +533,11 @@ static struct pci_driver i915_pci_driver = {
> >  static int __init i915_init(void)
> >  {
> >  	bool use_kms = true;
> > +	int err;
> > +
> > +	err = i915_mock_selftests();
> > +	if (err)
> > +		return err > 0 ? 0 : err;
> 
> Especially this, is this for skipping the device init completely?

Yes. 

> > +static int run_selftests(const char *name,
> > +			 struct selftest *st,
> > +			 unsigned int count,
> > +			 void *data)
> > +{
> > +	int err = 0;
> > +
> > +	while (!i915_selftest.random_seed)
> > +		i915_selftest.random_seed = get_random_int();
> 
> You know that in theory this might take an eternity. I'm not sure why
> zero is not OK after this point?

You wanted each run to be with a different seed!

The prng generator does produces 0 if state = { 0 }, but that is avoided
by prandom_seed_state().

> > +
> > +	i915_selftest.timeout_jiffies =
> > +		i915_selftest.timeout_ms ?
> > +		msecs_to_jiffies_timeout(i915_selftest.timeout_ms) :
> > +		MAX_SCHEDULE_TIMEOUT;
> 
> You had a default value for the variable too, I guess that's not needed
> now, and gets some bytes off .data.

I can move this into every user, if that's what you mean?
-chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 30/38] drm/i915: Test creation of VMA
  2017-01-31 10:50   ` Joonas Lahtinen
@ 2017-02-01 14:07     ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 14:07 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Tue, Jan 31, 2017 at 12:50:20PM +0200, Joonas Lahtinen wrote:
> On to, 2017-01-19 at 11:41 +0000, Chris Wilson wrote:
> > +	for_each_prime_number(num_obj, ULONG_MAX - 1) {
> > +		for (; no < num_obj; no++) {
> > +			obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> > +			if (IS_ERR(obj))
> > +				goto err;
> > +
> > +			list_add(&obj->batch_pool_link, &objects);
> 
> See below;
> 
> > +		}
> > +
> > +		nc = 0;
> > +		for_each_prime_number(num_ctx, MAX_CONTEXT_HW_ID) {
> > +			for (; nc < num_ctx; nc++) {
> > +				ctx = mock_context(i915, "mock");
> > +				if (!ctx)
> > +					goto err;
> > +
> > +				list_move(&ctx->link, &contexts);
> 
> Why the difference?

I was acquiring ownership of the context from another list.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA
  2017-01-19 11:41 ` [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA Chris Wilson
  2017-02-01 13:26   ` Matthew Auld
@ 2017-02-01 14:33   ` Tvrtko Ursulin
  2017-02-01 14:55     ` Chris Wilson
  1 sibling, 1 reply; 73+ messages in thread
From: Tvrtko Ursulin @ 2017-02-01 14:33 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/01/2017 11:41, Chris Wilson wrote:
> Exercise creating rotated VMA and checking the page order within.
>
> v2: Be more creative in rotated params
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/i915_vma.c | 177 ++++++++++++++++++++++++++++++
>  1 file changed, 177 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
> index b45b392444e4..2bda93f53b47 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_vma.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
> @@ -310,11 +310,188 @@ static int igt_vma_pin1(void *arg)
>  	return err;
>  }
>
> +static unsigned long rotated_index(const struct intel_rotation_info *r,
> +				   unsigned int n,
> +				   unsigned int x,
> +				   unsigned int y)
> +{
> +	return (r->plane[n].stride * (r->plane[n].height - y - 1) +
> +		r->plane[n].offset + x);
> +}
> +
> +static struct scatterlist *
> +assert_rotated(struct drm_i915_gem_object *obj,
> +	       const struct intel_rotation_info *r, unsigned int n,
> +	       struct scatterlist *sg)
> +{
> +	unsigned int x, y;
> +
> +	for (x = 0; x < r->plane[n].width; x++) {
> +		for (y = 0; y < r->plane[n].height; y++) {
> +			unsigned long src_idx;
> +			dma_addr_t src;
> +
> +			src_idx = rotated_index(r, n, x, y);
> +			src = i915_gem_object_get_dma_address(obj, src_idx);
> +
> +			if (sg_dma_len(sg) != PAGE_SIZE) {
> +				pr_err("Invalid sg.length, found %d, expected %lu for rotated page (%d, %d) [src index %lu]\n",
> +				       sg_dma_len(sg), PAGE_SIZE,
> +				       x, y, src_idx);
> +				return ERR_PTR(-EINVAL);
> +			}
> +
> +			if (sg_dma_address(sg) != src) {
> +				pr_err("Invalid address for rotated page (%d, %d) [src index %lu]\n",
> +				       x, y, src_idx);
> +				return ERR_PTR(-EINVAL);
> +			}
> +
> +			sg = ____sg_next(sg);
> +		}
> +	}
> +
> +	return sg;
> +}
> +
> +static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
> +				 const struct intel_rotation_plane_info *b)
> +{
> +	return a->width * a->height + b->width * b->height;
> +}
> +
> +static int igt_vma_rotate(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct drm_i915_gem_object *obj;
> +	const struct intel_rotation_plane_info planes[] = {
> +		{ .width = 1, .height = 1, .stride = 1 },
> +		{ .width = 3, .height = 5, .stride = 4 },
> +		{ .width = 5, .height = 3, .stride = 7 },
> +		{ .width = 6, .height = 4, .stride = 6 },

4x6 stride 4 could be added if you were going for all combinations of 
wide/tall, equal stride and wider stride.

> +		{ }
> +	}, *a, *b;
> +	const unsigned int max_pages = 64;
> +	int err = -ENOMEM;
> +
> +	/* Create VMA for many different combinations of planes and check
> +	 * that the page layout within the rotated VMA match our expectations.
> +	 */
> +
> +	obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
> +	if (IS_ERR(obj))
> +		goto err;
> +
> +	for (a = planes; a->width; a++) {
> +		for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
> +			struct i915_ggtt_view view;
> +			struct scatterlist *sg;
> +			unsigned int n, max_offset;
> +
> +			max_offset = max(a->stride * a->height,
> +					 b->stride * b->height);

It shouldn't be min?

> +			GEM_BUG_ON(max_offset >= max_pages);
> +			max_offset = max_pages - max_offset;
> +
> +			view.type = I915_GGTT_VIEW_ROTATED;
> +			view.rotated.plane[0] = *a;
> +			view.rotated.plane[1] = *b;

Single plane tests could be added as well.

> +
> +			for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
> +				for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {

I would try all offsets here and not only primes since it is super fast 
and more importantly more realistic.

> +					struct i915_address_space *vm =
> +						&i915->ggtt.base;
> +					struct i915_vma *vma;
> +
> +					vma = i915_vma_instance(obj, vm, &view);
> +					if (IS_ERR(vma)) {
> +						err = PTR_ERR(vma);
> +						goto err_object;
> +					}
> +
> +					if (!i915_vma_is_ggtt(vma) ||
> +					    vma->vm != vm) {
> +						pr_err("VMA is not in the GGTT!\n");
> +						err = -EINVAL;
> +						goto err_object;
> +					}
> +
> +					if (memcmp(&vma->ggtt_view, &view, sizeof(view))) {

Just because rotation is the largest view! :) Need to use the "type" here.

> +						pr_err("VMA mismatch upon creation!\n");
> +						err = -EINVAL;
> +						goto err_object;
> +					}
> +
> +					if (i915_vma_compare(vma,
> +							     vma->vm,
> +							     &vma->ggtt_view)) {
> +						pr_err("VMA compmare failed with itself\n");

typo in compare

> +						err = -EINVAL;
> +						goto err_object;
> +					}
> +
> +					err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
> +					if (err) {
> +						pr_err("Failed to pin VMA, err=%d\n", err);
> +						goto err_object;
> +					}
> +
> +					if (vma->size != rotated_size(a, b) * PAGE_SIZE) {
> +						pr_err("VMA is wrong size, expected %lu, found %llu\n",
> +						       PAGE_SIZE * rotated_size(a, b), vma->size);
> +						err = -EINVAL;
> +						goto err_object;
> +					}
> +
> +					if (vma->node.size < vma->size) {
> +						pr_err("VMA binding too small, expected %llu, found %llu\n",
> +						       vma->size, vma->node.size);
> +						err = -EINVAL;
> +						goto err_object;
> +					}
> +
> +					if (vma->pages == obj->mm.pages) {
> +						pr_err("VMA using unrotated object pages!\n");
> +						err = -EINVAL;
> +						goto err_object;
> +					}
> +
> +					sg = vma->pages->sgl;
> +					for (n = 0; n < ARRAY_SIZE(view.rotated.plane); n++) {
> +						sg = assert_rotated(obj, &view.rotated, n, sg);
> +						if (IS_ERR(sg)) {
> +							pr_err("Inconsistent VMA pages for plane %d: [(%d, %d, %d, %d), (%d, %d, %d, %d)]\n", n,
> +							view.rotated.plane[0].width,
> +							view.rotated.plane[0].height,
> +							view.rotated.plane[0].stride,
> +							view.rotated.plane[0].offset,
> +							view.rotated.plane[1].width,
> +							view.rotated.plane[1].height,
> +							view.rotated.plane[1].stride,
> +							view.rotated.plane[1].offset);
> +							err = -EINVAL;
> +							goto err_object;
> +						}
> +					}
> +
> +					i915_vma_unpin(vma);
> +				}
> +			}
> +		}
> +	}
> +
> +err_object:
> +	i915_gem_object_put(obj);
> +err:
> +	return err;
> +}
> +
>  int i915_vma_mock_selftests(void)
>  {
>  	static const struct i915_subtest tests[] = {
>  		SUBTEST(igt_vma_create),
>  		SUBTEST(igt_vma_pin1),
> +		SUBTEST(igt_vma_rotate),
>  	};
>  	struct drm_i915_private *i915;
>  	int err;
>

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA
  2017-02-01 14:33   ` Tvrtko Ursulin
@ 2017-02-01 14:55     ` Chris Wilson
  2017-02-01 15:44       ` Tvrtko Ursulin
  0 siblings, 1 reply; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 14:55 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 02:33:22PM +0000, Tvrtko Ursulin wrote:
> 
> On 19/01/2017 11:41, Chris Wilson wrote:
> >Exercise creating rotated VMA and checking the page order within.
> >
> >v2: Be more creative in rotated params
> >
> >Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >---
> > drivers/gpu/drm/i915/selftests/i915_vma.c | 177 ++++++++++++++++++++++++++++++
> > 1 file changed, 177 insertions(+)
> >
> >diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
> >index b45b392444e4..2bda93f53b47 100644
> >--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
> >+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
> >@@ -310,11 +310,188 @@ static int igt_vma_pin1(void *arg)
> > 	return err;
> > }
> >
> >+static unsigned long rotated_index(const struct intel_rotation_info *r,
> >+				   unsigned int n,
> >+				   unsigned int x,
> >+				   unsigned int y)
> >+{
> >+	return (r->plane[n].stride * (r->plane[n].height - y - 1) +
> >+		r->plane[n].offset + x);
> >+}
> >+
> >+static struct scatterlist *
> >+assert_rotated(struct drm_i915_gem_object *obj,
> >+	       const struct intel_rotation_info *r, unsigned int n,
> >+	       struct scatterlist *sg)
> >+{
> >+	unsigned int x, y;
> >+
> >+	for (x = 0; x < r->plane[n].width; x++) {
> >+		for (y = 0; y < r->plane[n].height; y++) {
> >+			unsigned long src_idx;
> >+			dma_addr_t src;
> >+
> >+			src_idx = rotated_index(r, n, x, y);
> >+			src = i915_gem_object_get_dma_address(obj, src_idx);
> >+
> >+			if (sg_dma_len(sg) != PAGE_SIZE) {
> >+				pr_err("Invalid sg.length, found %d, expected %lu for rotated page (%d, %d) [src index %lu]\n",
> >+				       sg_dma_len(sg), PAGE_SIZE,
> >+				       x, y, src_idx);
> >+				return ERR_PTR(-EINVAL);
> >+			}
> >+
> >+			if (sg_dma_address(sg) != src) {
> >+				pr_err("Invalid address for rotated page (%d, %d) [src index %lu]\n",
> >+				       x, y, src_idx);
> >+				return ERR_PTR(-EINVAL);
> >+			}
> >+
> >+			sg = ____sg_next(sg);
> >+		}
> >+	}
> >+
> >+	return sg;
> >+}
> >+
> >+static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
> >+				 const struct intel_rotation_plane_info *b)
> >+{
> >+	return a->width * a->height + b->width * b->height;
> >+}
> >+
> >+static int igt_vma_rotate(void *arg)
> >+{
> >+	struct drm_i915_private *i915 = arg;
> >+	struct drm_i915_gem_object *obj;
> >+	const struct intel_rotation_plane_info planes[] = {
> >+		{ .width = 1, .height = 1, .stride = 1 },
> >+		{ .width = 3, .height = 5, .stride = 4 },
> >+		{ .width = 5, .height = 3, .stride = 7 },
> >+		{ .width = 6, .height = 4, .stride = 6 },
> 
> 4x6 stride 4 could be added if you were going for all combinations
> of wide/tall, equal stride and wider stride.

Just trying to pick some interesting ones. No rhyme or reason.

> >+		{ }
> >+	}, *a, *b;
> >+	const unsigned int max_pages = 64;
> >+	int err = -ENOMEM;
> >+
> >+	/* Create VMA for many different combinations of planes and check
> >+	 * that the page layout within the rotated VMA match our expectations.
> >+	 */
> >+
> >+	obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
> >+	if (IS_ERR(obj))
> >+		goto err;
> >+
> >+	for (a = planes; a->width; a++) {
> >+		for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
> >+			struct i915_ggtt_view view;
> >+			struct scatterlist *sg;
> >+			unsigned int n, max_offset;
> >+
> >+			max_offset = max(a->stride * a->height,
> >+					 b->stride * b->height);
> 
> It shouldn't be min?
> 
> >+			GEM_BUG_ON(max_offset >= max_pages);
> >+			max_offset = max_pages - max_offset;

No, because it is inverted ^

> >+			view.type = I915_GGTT_VIEW_ROTATED;
> >+			view.rotated.plane[0] = *a;
> >+			view.rotated.plane[1] = *b;
> 
> Single plane tests could be added as well.

There are. Second plane is set to {0}. That's the only way to do single
plane tests, as I was thinking second plane with a first plane would be
illegal?

> >+
> >+			for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
> >+				for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {
> 
> I would try all offsets here and not only primes since it is super
> fast and more importantly more realistic.

I was worried about the combinatorial explosion. We could have upto
65536 checks for each pair of planes (currently x20).

> >+					struct i915_address_space *vm =
> >+						&i915->ggtt.base;
> >+					struct i915_vma *vma;
> >+
> >+					vma = i915_vma_instance(obj, vm, &view);
> >+					if (IS_ERR(vma)) {
> >+						err = PTR_ERR(vma);
> >+						goto err_object;
> >+					}
> >+
> >+					if (!i915_vma_is_ggtt(vma) ||
> >+					    vma->vm != vm) {
> >+						pr_err("VMA is not in the GGTT!\n");
> >+						err = -EINVAL;
> >+						goto err_object;
> >+					}
> >+
> >+					if (memcmp(&vma->ggtt_view, &view, sizeof(view))) {
> 
> Just because rotation is the largest view! :) Need to use the "type" here.

I wasn't really sure the value in doing both memcmp() and
i915_vma_compare(). I think I'm just going to stick with
i915_vma_compare() only.
 
> >+						pr_err("VMA mismatch upon creation!\n");
> >+						err = -EINVAL;
> >+						goto err_object;
> >+					}
> >+
> >+					if (i915_vma_compare(vma,
> >+							     vma->vm,
> >+							     &vma->ggtt_view)) {
> >+						pr_err("VMA compmare failed with itself\n");

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA
  2017-02-01 14:55     ` Chris Wilson
@ 2017-02-01 15:44       ` Tvrtko Ursulin
  0 siblings, 0 replies; 73+ messages in thread
From: Tvrtko Ursulin @ 2017-02-01 15:44 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx, Joonas Lahtinen


On 01/02/2017 14:55, Chris Wilson wrote:
> On Wed, Feb 01, 2017 at 02:33:22PM +0000, Tvrtko Ursulin wrote:

[snip]

>>> +		{ }
>>> +	}, *a, *b;
>>> +	const unsigned int max_pages = 64;
>>> +	int err = -ENOMEM;
>>> +
>>> +	/* Create VMA for many different combinations of planes and check
>>> +	 * that the page layout within the rotated VMA match our expectations.
>>> +	 */
>>> +
>>> +	obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
>>> +	if (IS_ERR(obj))
>>> +		goto err;
>>> +
>>> +	for (a = planes; a->width; a++) {
>>> +		for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
>>> +			struct i915_ggtt_view view;
>>> +			struct scatterlist *sg;
>>> +			unsigned int n, max_offset;
>>> +
>>> +			max_offset = max(a->stride * a->height,
>>> +					 b->stride * b->height);
>>
>> It shouldn't be min?
>>
>>> +			GEM_BUG_ON(max_offset >= max_pages);
>>> +			max_offset = max_pages - max_offset;
>
> No, because it is inverted ^

I see.

>>> +			view.type = I915_GGTT_VIEW_ROTATED;
>>> +			view.rotated.plane[0] = *a;
>>> +			view.rotated.plane[1] = *b;
>>
>> Single plane tests could be added as well.
>
> There are. Second plane is set to {0}. That's the only way to do single
> plane tests, as I was thinking second plane with a first plane would be
> illegal?

Missed that.


>>> +
>>> +			for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
>>> +				for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {
>>
>> I would try all offsets here and not only primes since it is super
>> fast and more importantly more realistic.
>
> I was worried about the combinatorial explosion. We could have upto
> 65536 checks for each pair of planes (currently x20).

There is at least one even offset so OK. :)

>>> +					struct i915_address_space *vm =
>>> +						&i915->ggtt.base;
>>> +					struct i915_vma *vma;
>>> +
>>> +					vma = i915_vma_instance(obj, vm, &view);
>>> +					if (IS_ERR(vma)) {
>>> +						err = PTR_ERR(vma);
>>> +						goto err_object;
>>> +					}
>>> +
>>> +					if (!i915_vma_is_ggtt(vma) ||
>>> +					    vma->vm != vm) {
>>> +						pr_err("VMA is not in the GGTT!\n");
>>> +						err = -EINVAL;
>>> +						goto err_object;
>>> +					}
>>> +
>>> +					if (memcmp(&vma->ggtt_view, &view, sizeof(view))) {
>>
>> Just because rotation is the largest view! :) Need to use the "type" here.
>
> I wasn't really sure the value in doing both memcmp() and
> i915_vma_compare(). I think I'm just going to stick with
> i915_vma_compare() only.

I'm OK with that. Wanted even to suggest dropping the is_ggtt test since 
that feels should happen in a more basic VMA creation test. But if such 
doesn't exist then it's fine.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v3] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-01 13:19     ` [PATCH v3] " Chris Wilson
@ 2017-02-01 16:57       ` Tvrtko Ursulin
  2017-02-01 17:08         ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Tvrtko Ursulin @ 2017-02-01 16:57 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 01/02/2017 13:19, Chris Wilson wrote:
> Third retroactive test, make sure that the seqno waiters are woken.
>
> v2: Smattering of comments, rearrange code
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 202 +++++++++++++++++++++
>  1 file changed, 202 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> index 245e5f1b8373..907503901644 100644
> --- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> +++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> @@ -263,11 +263,213 @@ static int igt_insert_complete(void *arg)
>  	return err;
>  }
>
> +struct igt_wakeup {
> +	struct task_struct *tsk;
> +	atomic_t *ready, *set, *done;
> +	struct intel_engine_cs *engine;
> +	unsigned long flags;
> +#define STOP 0
> +#define WAITING 1
> +	wait_queue_head_t *wq;
> +	u32 seqno;
> +};
> +
> +static int wait_atomic(atomic_t *p)
> +{
> +	schedule();
> +	return 0;
> +}
> +
> +static int wait_atomic_timeout(atomic_t *p)
> +{
> +	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
> +}
> +
> +static bool wait_for_ready(struct igt_wakeup *w)
> +{
> +	DEFINE_WAIT(ready);
> +
> +	if (atomic_dec_and_test(w->done))
> +		wake_up_atomic_t(w->done);
> +
> +	if (test_bit(STOP, &w->flags))
> +		goto out;
> +
> +	set_bit(WAITING, &w->flags);
> +	for (;;) {
> +		prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
> +		if (atomic_read(w->ready) == 0)
> +			break;
> +
> +		schedule();
> +	}
> +	finish_wait(w->wq, &ready);
> +	clear_bit(WAITING, &w->flags);
> +
> +out:
> +	if (atomic_dec_and_test(w->set))
> +		wake_up_atomic_t(w->set);
> +
> +	return !test_bit(STOP, &w->flags);
> +}
> +
> +static int igt_wakeup_thread(void *arg)
> +{
> +	struct igt_wakeup *w = arg;
> +	struct intel_wait wait;
> +
> +	while (wait_for_ready(w)) {
> +		GEM_BUG_ON(kthread_should_stop());
> +
> +		intel_wait_init(&wait, w->seqno);
> +		intel_engine_add_wait(w->engine, &wait);
> +		for (;;) {
> +			set_current_state(TASK_UNINTERRUPTIBLE);
> +			if (i915_seqno_passed(intel_engine_get_seqno(w->engine),
> +					      w->seqno))
> +				break;
> +
> +			if (test_bit(STOP, &w->flags)) /* emergency escape */
> +				break;
> +
> +			schedule();
> +		}
> +		intel_engine_remove_wait(w->engine, &wait);
> +		__set_current_state(TASK_RUNNING);
> +	}
> +
> +	return 0;
> +}
> +
> +static void igt_wake_all_sync(atomic_t *ready,
> +			      atomic_t *set,
> +			      atomic_t *done,
> +			      wait_queue_head_t *wq,
> +			      int count)
> +{
> +	atomic_set(set, count);
> +	atomic_set(done, count);
> +
> +	atomic_set(ready, 0);
> +	wake_up_all(wq);
> +
> +	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
> +	atomic_set(ready, count);
> +}
> +
> +static int igt_wakeup(void *arg)
> +{
> +	I915_RND_STATE(prng);
> +	const int state = TASK_UNINTERRUPTIBLE;
> +	struct intel_engine_cs *engine = arg;
> +	struct igt_wakeup *waiters;
> +	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
> +	const int count = 4096;
> +	const u32 max_seqno = count / 4;
> +	atomic_t ready, set, done;
> +	int err = -ENOMEM;
> +	int n, step;
> +
> +	mock_engine_reset(engine);
> +
> +	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
> +	if (!waiters)
> +		goto out_engines;
> +
> +	/* Create a large number of threads, each waiting on a random seqno.
> +	 * Multiple waiters will be waiting for the same seqno.
> +	 */
> +	atomic_set(&ready, count);
> +	for (n = 0; n < count; n++) {
> +		waiters[n].wq = &wq;
> +		waiters[n].ready = &ready;
> +		waiters[n].set = &set;
> +		waiters[n].done = &done;
> +		waiters[n].engine = engine;
> +		waiters[n].flags = 0;
> +
> +		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
> +					     "i915/igt:%d", n);
> +		if (IS_ERR(waiters[n].tsk))
> +			goto out_waiters;
> +
> +		get_task_struct(waiters[n].tsk);
> +	}
> +
> +	for (step = 1; step <= max_seqno; step <<= 1) {
> +		u32 seqno;
> +
> +		/* The waiter threads start paused as we assign them a random
> +		 * seqno and reset the engine. Once the engine is reset,
> +		 * we signal that the threads may begin their, and we wait
> +		 * until all threads are woken.
> +		 */
> +		for (n = 0; n < count; n++) {
> +			GEM_BUG_ON(!test_bit(WAITING, &waiters[n].flags));

Looks like a race condition between thread startup and checking this 
bit. I think the assert is just unnecessary.

> +			waiters[n].seqno =
> +				1 + prandom_u32_state(&prng) % max_seqno;
> +		}
> +		mock_seqno_advance(engine, 0);
> +		igt_wake_all_sync(&ready, &set, &done, &wq, count);
> +
> +		/* Simulate GPU doing chunks of work, with one or more seqno
> +		 * appearing to finish at the same time. A random number of
> +		 * threads will be waiting upon the update and hopefully be
> +		 * woken.
> +		 */
> +		for (seqno = 1; seqno <= max_seqno + step; seqno += step) {
> +			usleep_range(50, 500);
> +			mock_seqno_advance(engine, seqno);
> +		}
> +		GEM_BUG_ON(intel_engine_get_seqno(engine) < 1 + max_seqno);
> +
> +		/* With the seqno now beyond any of the waiting threads, they
> +		 * should all be woken, see that they are complete and signal
> +		 * that they are ready for the next test. We wait until all
> +		 * threads are waiting for us (and not a seqno) again.
> +		 */
> +		err = wait_on_atomic_t(&done, wait_atomic_timeout, state);
> +		if (err) {
> +			pr_err("Timed out waiting for %d remaining waiters\n",
> +			       atomic_read(&done));
> +			break;
> +		}
> +
> +		err = check_rbtree_empty(engine);
> +		if (err)
> +			break;
> +	}
> +
> +out_waiters:
> +	mock_seqno_advance(engine, INT_MAX); /* wakeup any broken waiters */
> +	for (n = 0; n < count; n++) {
> +		if (IS_ERR(waiters[n].tsk))
> +			break;
> +
> +		set_bit(STOP, &waiters[n].flags);
> +	}
> +	igt_wake_all_sync(&ready, &set, &done, &wq, n);
> +
> +	for (n = 0; n < count; n++) {
> +		if (IS_ERR(waiters[n].tsk))
> +			break;
> +
> +		kthread_stop(waiters[n].tsk);
> +		put_task_struct(waiters[n].tsk);
> +	}
> +
> +	drm_free_large(waiters);
> +out_engines:
> +	mock_engine_flush(engine);
> +	return err;
> +}
> +
>  int intel_breadcrumbs_mock_selftests(void)
>  {
>  	static const struct i915_subtest tests[] = {
>  		SUBTEST(igt_random_insert_remove),
>  		SUBTEST(igt_insert_complete),
> +		SUBTEST(igt_wakeup),
>  	};
>  	struct intel_engine_cs *engine;
>  	int err;
>

Thanks for adding the comments. It will pay off in the future I am sure.

Other than the dodgy assert it looks good to me.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v3] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-01 16:57       ` Tvrtko Ursulin
@ 2017-02-01 17:08         ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-01 17:08 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Wed, Feb 01, 2017 at 04:57:53PM +0000, Tvrtko Ursulin wrote:
> >+static bool wait_for_ready(struct igt_wakeup *w)
> >+{
> >+	DEFINE_WAIT(ready);
> >+
> >+	if (atomic_dec_and_test(w->done))
> >+		wake_up_atomic_t(w->done);
> >+
> >+	if (test_bit(STOP, &w->flags))
> >+		goto out;
> >+
> >+	set_bit(WAITING, &w->flags);
> >+	for (;;) {
> >+		prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
> >+		if (atomic_read(w->ready) == 0)
> >+			break;
> >+
> >+		schedule();
> >+	}
> >+	finish_wait(w->wq, &ready);
> >+	clear_bit(WAITING, &w->flags);
> >+
> >+out:
> >+	if (atomic_dec_and_test(w->set))
> >+		wake_up_atomic_t(w->set);
> >+
> >+	return !test_bit(STOP, &w->flags);
> >+}



> >+	atomic_set(&ready, count);
> >+	for (n = 0; n < count; n++) {
> >+		waiters[n].wq = &wq;
> >+		waiters[n].ready = &ready;
> >+		waiters[n].set = &set;
> >+		waiters[n].done = &done;
> >+		waiters[n].engine = engine;
> >+		waiters[n].flags = 0;
> >+
> >+		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
> >+					     "i915/igt:%d", n);
> >+		if (IS_ERR(waiters[n].tsk))
> >+			goto out_waiters;
> >+
> >+		get_task_struct(waiters[n].tsk);
> >+	}
> >+
> >+	for (step = 1; step <= max_seqno; step <<= 1) {
> >+		u32 seqno;
> >+
> >+		/* The waiter threads start paused as we assign them a random
> >+		 * seqno and reset the engine. Once the engine is reset,
> >+		 * we signal that the threads may begin their, and we wait
> >+		 * until all threads are woken.
> >+		 */
> >+		for (n = 0; n < count; n++) {
> >+			GEM_BUG_ON(!test_bit(WAITING, &waiters[n].flags));
> 
> Looks like a race condition between thread startup and checking this
> bit. I think the assert is just unnecessary.

I liked it for the document that we only update the waiters state and
reset the engine whilst the threads are idle. You're right about the
startup race, but we can start with the flag set. Maybe s/WAITING/IDLE/
to try and avoid reusing "wait" too often.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation
  2017-02-01 11:34     ` Chris Wilson
@ 2017-02-02 12:41       ` Tvrtko Ursulin
  2017-02-02 13:38         ` Chris Wilson
  0 siblings, 1 reply; 73+ messages in thread
From: Tvrtko Ursulin @ 2017-02-02 12:41 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 01/02/2017 11:34, Chris Wilson wrote:
> On Wed, Feb 01, 2017 at 11:17:39AM +0000, Tvrtko Ursulin wrote:

>>> +
>>> +			for (npages = npages_funcs; *npages; npages++) {
>>> +				prandom_seed_state(&prng,
>>> +						   i915_selftest.random_seed);
>>> +				if (!alloc_table(&pt, sz, sz, *npages, &prng))
>>> +					return 0; /* out of memory, give up */
>>
>> You don't have skip status? Sounds not ideal to silently abort.
>
> It runs until we use all physical memory, if left to its own devices. It's
> not a skip if we have already completed some tests. ENOMEM of the test
> setup itself is not what I'm testing for here, the test is for the
> iterators.

But suppose you mess up the test so the starting condition asks for 
impossible amount of memory but the test claims it passed. I don't think 
that is a good behaviour.

>>> +static int igt_sg_trim(void *ignored)
>>> +{
>>> +	IGT_TIMEOUT(end_time);
>>> +	const unsigned long max = PAGE_SIZE; /* not prime! */
>>> +	struct pfn_table pt;
>>> +	unsigned long prime;
>>> +
>>> +	for_each_prime_number(prime, max) {
>>> +		const npages_fn_t *npages;
>>> +		int err;
>>> +
>>> +		for (npages = npages_funcs; *npages; npages++) {
>>> +			struct rnd_state prng;
>>> +
>>> +			prandom_seed_state(&prng, i915_selftest.random_seed);
>>> +			if (!alloc_table(&pt, prime, max, *npages, &prng))
>>> +				return 0; /* out of memory, give up */
>>> +
>>> +			err = 0;
>>> +			if (i915_sg_trim(&pt.st)) {
>>> +				if (pt.st.orig_nents != prime ||
>>> +				    pt.st.nents != prime) {
>>> +					pr_err("i915_sg_trim failed (nents %u, orig_nents %u), expected %lu\n",
>>> +					       pt.st.nents, pt.st.orig_nents, prime);
>>> +					err = -EINVAL;
>>> +				} else {
>>> +					prandom_seed_state(&prng,
>>> +							   i915_selftest.random_seed);
>>> +					err = expect_pfn_sgtable(&pt,
>>> +								 *npages, &prng,
>>> +								 "i915_sg_trim",
>>> +								 end_time);
>>> +				}
>>> +			}
>>
>> Similar to alloc_table failures above - no log or action when
>> i915_sg_trim fails due out of memory?
>
> No, simply because that's an expected and acceptable result. The
> question should be whether we always want to check after sg_trim.

Same as above really, I think that creates a big doubt in the test output.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation
  2017-02-02 12:41       ` Tvrtko Ursulin
@ 2017-02-02 13:38         ` Chris Wilson
  0 siblings, 0 replies; 73+ messages in thread
From: Chris Wilson @ 2017-02-02 13:38 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Thu, Feb 02, 2017 at 12:41:42PM +0000, Tvrtko Ursulin wrote:
> 
> On 01/02/2017 11:34, Chris Wilson wrote:
> >On Wed, Feb 01, 2017 at 11:17:39AM +0000, Tvrtko Ursulin wrote:
> 
> >>>+
> >>>+			for (npages = npages_funcs; *npages; npages++) {
> >>>+				prandom_seed_state(&prng,
> >>>+						   i915_selftest.random_seed);
> >>>+				if (!alloc_table(&pt, sz, sz, *npages, &prng))
> >>>+					return 0; /* out of memory, give up */
> >>
> >>You don't have skip status? Sounds not ideal to silently abort.
> >
> >It runs until we use all physical memory, if left to its own devices. It's
> >not a skip if we have already completed some tests. ENOMEM of the test
> >setup itself is not what I'm testing for here, the test is for the
> >iterators.
> 
> But suppose you mess up the test so the starting condition asks for
> impossible amount of memory but the test claims it passed. I don't
> think that is a good behaviour.

Returing ENOMEM when the failure is intentional is not an option either.

diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
index fa5bd09c863f..5eb732231749 100644
--- a/drivers/gpu/drm/i915/selftests/scatterlist.c
+++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
@@ -245,6 +245,7 @@ static int igt_sg_alloc(void *ignored)
        const unsigned long max_order = 20; /* approximating a 4GiB object */
        struct rnd_state prng;
        unsigned long prime;
+       int alloc_error = -ENOMEM;
 
        for_each_prime_number(prime, max_order) {
                unsigned long size = BIT(prime);
@@ -260,7 +261,7 @@ static int igt_sg_alloc(void *ignored)
                                prandom_seed_state(&prng,
                                                   i915_selftest.random_seed);
                                if (!alloc_table(&pt, sz, sz, *npages, &prng))
-                                       return 0; /* out of memory, give up */
+                                       return alloc_error;
 
                                prandom_seed_state(&prng,
                                                   i915_selftest.random_seed);
@@ -270,6 +271,8 @@ static int igt_sg_alloc(void *ignored)
                                sg_free_table(&pt.st);
                                if (err)
                                        return err;
+
+                               alloc_error = 0;
                        }
                }
        }

-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2017-02-02 13:38 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-19 11:41 More selftests Chris Wilson
2017-01-19 11:41 ` [PATCH v2 01/38] drm: Provide a driver hook for drm_dev_release() Chris Wilson
2017-01-25 11:12   ` Joonas Lahtinen
2017-01-25 11:16     ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 02/38] drm/i915: Provide a hook for selftests Chris Wilson
2017-01-25 11:50   ` Joonas Lahtinen
2017-02-01 13:57     ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 03/38] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
2017-02-01 11:17   ` Tvrtko Ursulin
2017-02-01 11:34     ` Chris Wilson
2017-02-02 12:41       ` Tvrtko Ursulin
2017-02-02 13:38         ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 04/38] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove Chris Wilson
2017-01-19 11:41 ` [PATCH v2 05/38] drm/i915: Add unit tests for the breadcrumb rbtree, completion Chris Wilson
2017-01-19 11:41 ` [PATCH v2 06/38] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
2017-02-01 11:27   ` Tvrtko Ursulin
2017-02-01 11:43     ` Chris Wilson
2017-02-01 13:19     ` [PATCH v3] " Chris Wilson
2017-02-01 16:57       ` Tvrtko Ursulin
2017-02-01 17:08         ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 07/38] drm/i915: Mock the GEM device for self-testing Chris Wilson
2017-01-19 11:41 ` [PATCH v2 08/38] drm/i915: Mock a GGTT " Chris Wilson
2017-01-19 11:41 ` [PATCH v2 09/38] drm/i915: Mock infrastructure for request emission Chris Wilson
2017-01-19 11:41 ` [PATCH v2 10/38] drm/i915: Create a fake object for testing huge allocations Chris Wilson
2017-01-19 13:09   ` Matthew Auld
2017-01-19 11:41 ` [PATCH v2 11/38] drm/i915: Add selftests for i915_gem_request Chris Wilson
2017-01-19 11:41 ` [PATCH v2 12/38] drm/i915: Add a simple request selftest for waiting Chris Wilson
2017-01-19 11:41 ` [PATCH v2 13/38] drm/i915: Add a simple fence selftest to i915_gem_request Chris Wilson
2017-01-19 11:41 ` [PATCH v2 14/38] drm/i915: Simple selftest to exercise live requests Chris Wilson
2017-02-01  8:14   ` Joonas Lahtinen
2017-02-01 10:31     ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 15/38] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
2017-02-01  8:03   ` Joonas Lahtinen
2017-02-01 10:15     ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 16/38] drm/i915: Add selftests for object allocation, phys Chris Wilson
2017-01-19 11:41 ` [PATCH v2 17/38] drm/i915: Add a live seftest for GEM objects Chris Wilson
2017-01-19 11:41 ` [PATCH v2 18/38] drm/i915: Test partial mappings Chris Wilson
2017-01-19 11:41 ` [PATCH v2 19/38] drm/i915: Test exhaustion of the mmap space Chris Wilson
2017-01-19 11:41 ` [PATCH v2 20/38] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
2017-01-19 13:01   ` Matthew Auld
2017-01-19 11:41 ` [PATCH v2 21/38] drm/i915: Move uncore selfchecks to live selftest infrastructure Chris Wilson
2017-01-19 11:41 ` [PATCH v2 22/38] drm/i915: Test all fw tables during mock selftests Chris Wilson
2017-01-19 11:41 ` [PATCH v2 23/38] drm/i915: Sanity check all registers for matching fw domains Chris Wilson
2017-01-19 11:41 ` [PATCH v2 24/38] drm/i915: Add some mock tests for dmabuf interop Chris Wilson
2017-01-19 11:41 ` [PATCH v2 25/38] drm/i915: Add initial selftests for i915_gem_gtt Chris Wilson
2017-01-19 11:41 ` [PATCH v2 26/38] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
2017-01-31 12:32   ` Joonas Lahtinen
2017-01-19 11:41 ` [PATCH v2 27/38] drm/i915: Exercise filling the top/bottom portions of the global GTT Chris Wilson
2017-01-19 11:41 ` [PATCH v2 28/38] drm/i915: Fill different pages of the GTT Chris Wilson
2017-01-19 11:41 ` [PATCH v2 29/38] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
2017-01-20 10:39   ` Matthew Auld
2017-01-19 11:41 ` [PATCH v2 30/38] drm/i915: Test creation of VMA Chris Wilson
2017-01-31 10:50   ` Joonas Lahtinen
2017-02-01 14:07     ` Chris Wilson
2017-01-19 11:41 ` [PATCH v2 31/38] drm/i915: Exercise i915_vma_pin/i915_vma_insert Chris Wilson
2017-01-19 11:41 ` [PATCH v2 32/38] drm/i915: Verify page layout for rotated VMA Chris Wilson
2017-02-01 13:26   ` Matthew Auld
2017-02-01 14:33   ` Tvrtko Ursulin
2017-02-01 14:55     ` Chris Wilson
2017-02-01 15:44       ` Tvrtko Ursulin
2017-01-19 11:41 ` [PATCH v2 33/38] drm/i915: Test creation of partial VMA Chris Wilson
2017-01-31 12:03   ` Joonas Lahtinen
2017-01-19 11:41 ` [PATCH v2 34/38] drm/i915: Live testing for context execution Chris Wilson
2017-01-25 14:51   ` Joonas Lahtinen
2017-01-19 11:41 ` [PATCH v2 35/38] drm/i915: Initial selftests for exercising eviction Chris Wilson
2017-01-19 11:41 ` [PATCH v2 36/38] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
2017-01-25 13:30   ` Joonas Lahtinen
2017-01-19 11:41 ` [PATCH v2 37/38] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
2017-01-25 13:31   ` Joonas Lahtinen
2017-01-19 11:41 ` [PATCH v2 38/38] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
2017-02-01 11:43   ` Mika Kuoppala
2017-02-01 13:31     ` Chris Wilson
2017-01-19 13:54 ` ✗ Fi.CI.BAT: failure for series starting with [v2,01/38] drm: Provide a driver hook for drm_dev_release() Patchwork

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.